Signal Hub

Rust news and articles


Dev.to (Rust/Go)
~2 min read · May 6, 2026

I Built My Own Programming Language at 19 – Introducing Akro

Hi DEV community! I'm Ankit Bishnoi, 19 years old from India, and I just released Akro, a programming language. I was frustrated with existing languages: Python is simple but slow, Go is fast but verbose, and JavaScript is everywhere but messy. So I spent months building Akro to combine the best of all three.

```akro
fn main {
    name := "World"
    say "Hello, {name}!"

    nums := [1, 2, 3, 4, 5]
    total := reduce(nums, fn(a, b) { return a + b }, 0)
    say "Sum = {total}"

    for i in 0..5 {
        say "i = {i}"
    }
}
```

Key Features

Type inference — like Rust/Go:

```akro
x := 10
name := "Akro"
pi := 3.14
```

String interpolation — supports full expressions:

```akro
say "Hello, {name}!"
say "1 + 1 = {1 + 1}"
say "Sum = {sum(nums)}"
```

Pattern matching:

```akro
match score {
    case n if n >= 90 => say "A grade"
    case n if n >= 80 => say "B grade"
    default => say "F grade"
}
```

Transpiles to JavaScript — write Akro, run in the browser:

```
akro transpile app.ak   # → app.js (ready for browser)
```

Error handling:

```akro
try {
    result := divide(10, 0)
} catch(e) {
    say "Error: {e}"
}
```

Structs with methods:

```akro
struct Point {
    x: int
    y: int
}

impl Point {
    fn distance(self) -> float {
        return sqrt(self.x ** 2 + self.y ** 2)
    }
}

p := Point(3, 4)
say p.distance()  // 5.0
```

How it's built

The interpreter is written in Go:

- Hand-written lexer
- Recursive descent parser
- Tree-walking interpreter
- JS transpiler (AST → JavaScript)
- 40+ built-in functions

Install

```
npm install -g akro-lang

akro run hello.ak       # Run a file
akro repl               # Interactive shell
akro transpile app.ak   # Convert to JavaScript
akro version            # Show version
```

Links

GitHub: https://github.com/ankitkhileryy/akro
npm: https://www.npmjs.com/package/akro-lang

This is my first major open source project. I'd love honest feedback — what's missing, what could be better, and what would make you actually use a new language. Thanks for reading!

— Ankit Bishnoi, 19, India
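The pipeline described under "How it's built" (lexer, recursive descent parser, tree-walking interpreter) is a classic design. For readers unfamiliar with the last stage, here is a minimal Rust sketch of a tree-walking evaluator; this is illustrative only (Akro's actual interpreter is written in Go, and this AST is invented for the example):

```rust
// A tree-walking interpreter evaluates the AST directly, by recursing
// over each node, rather than compiling it to bytecode first.
#[derive(Debug)]
enum Expr {
    Num(f64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

fn eval(e: &Expr) -> f64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}

fn main() {
    // AST for (1 + 2) * 4, as a parser would produce it.
    let ast = Expr::Mul(
        Box::new(Expr::Add(
            Box::new(Expr::Num(1.0)),
            Box::new(Expr::Num(2.0)),
        )),
        Box::new(Expr::Num(4.0)),
    );
    assert_eq!(eval(&ast), 12.0);
}
```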

Dev.to (Rust/Go)
~7 min read · May 6, 2026

Python vs Go vs Rust for AI Agents in 2026: A Pragmatic Field Guide

The AI agent ecosystem has a language problem that nobody talks about directly: the tutorials and frameworks are all Python, but production agent systems increasingly lean on Go and Rust for the infrastructure layer. A GDE just published "Stop Using Python for Your Gen AI Apps, Use Go" using Google's Genkit. Meanwhile, Rust frameworks like echo-agent, rustic_ai, and Aura ship with features that LangChain users would recognize instantly. And Python's LangGraph and CrewAI still dominate the orchestration space.

The truth is more nuanced than any single-language take. Each language has a distinct role, and the best production systems use at least two of them together. This guide helps you decide where each one fits, with real project examples and code snippets so you can evaluate the tradeoffs yourself.

| Aspect | Python | Go | Rust |
| --- | --- | --- | --- |
| Ecosystem maturity | LangChain, CrewAI, AutoGen, LlamaIndex (4+ years of agent frameworks) | Genkit, Eino, Phero (emerging, 2025-2026) | echo-agent, rustic_ai, Aura, cinch-rs (very early but feature-rich) |
| Binary size | 80-150 MB (with runtime) | ~18 MB (static binary) | ~5 MB (static binary) |
| Memory idle | 80-150 MB (FastAPI) | 10-20 MB | 5-10 MB |
| Cold start | 200-500 ms (import time) | <10 ms | <5 ms |
| Concurrency | asyncio (cooperative, single-threaded default) | Goroutines (2 KB stacks, N:M scheduling) | async tasks (zero-cost, no runtime overhead) |
| Type safety | Optional (gradual with mypy/pydantic) | Structural (compile-time) | Nominal (compile-time, zero-cost abstractions) |
| Tool calling | Decorators + pydantic | Reflection + struct tags | Proc macros + derive macros |
| Dependency count | 20-50 indirect deps | 0-5 direct deps | 80-200+ crate deps |
| Prototyping speed | Fastest | Medium | Slowest |
| Production reliability | Medium (crash at runtime) | High (no runtime surprises) | Highest (no undefined behavior) |
| Best for | ML pipelines, RAG, fast prototyping | API serving, proxies, governance | MCP servers, sandboxed execution, high-throughput agents |

Python's dominance in AI isn't accidental. The model training, fine-tuning, and data science ecosystem is irreplaceable.

RAG pipelines. If you're building a retrieval-augmented generation system with embeddings, chunking strategies, and reranking, Python has every library you need: sentence-transformers, chromadb, llama-index, unstructured. None of the Go or Rust equivalents come close.

Prototyping. Python lets you sketch an agent idea in 20 lines and iterate. The REPL-driven workflow is unmatched for exploring prompt strategies and tool call patterns.

```python
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"Sunny, 72F in {city}"

agent = create_react_agent(llm, [get_weather], prompt)
executor = AgentExecutor(agent=agent, tools=[get_weather])
result = executor.invoke({"input": "What's the weather in Tokyo?"})
print(result["output"])
```

Frameworks that exist. LangGraph's state machine approach, CrewAI's role-based agents, AutoGen's multi-agent conversations -- these are proven patterns with thousands of production deployments. The Go/Rust equivalents are 1-2 years behind in maturity.

But here's the catch: Python's production footprint is expensive. A simple FastAPI agent server idles at 80-150 MB of RAM. The cold start on container orchestration is 200-500 ms before a single line of business logic runs. For a prototype, none of this matters. For a production system serving thousands of agent sessions, it adds up to real infrastructure cost.
The case for Go in agent infrastructure isn't "Go is better than Python." It's that agents are not monoliths. They have layers: the reasoning layer (LLM), the orchestration layer (framework), and the infrastructure layer (transport, policy, memory, tracing). Python dominates the first two. The third layer is systems programming.

API serving. Go can handle hundreds of concurrent agent sessions with streaming responses while using 30-60 MB of RAM. An 18 MB Docker image deploys in under a second.

Governance proxies. When every tool call from an agent needs to pass through rate limiting, approval workflows, and audit logging, Go's goroutine-per-request model makes this trivial.

```go
type AgentProxy struct {
	policyEngine  *PolicyEngine
	traceExporter *otlp.Exporter
	rateLimiter   *RateLimiter
	mcpClients    map[string]*mcp.Client
}

func (p *AgentProxy) HandleToolCall(ctx context.Context, req *ToolCall) error {
	if err := p.rateLimiter.Check(ctx, req.UserID); err != nil {
		return err
	}
	decision, err := p.policyEngine.Evaluate(ctx, req)
	if err != nil || !decision.Allowed {
		return err
	}
	return p.mcpClients[req.Server].Call(ctx, req)
}
```

MCP servers. The Model Context Protocol is fundamentally a concurrency problem: managing multiple stdio subprocesses, each with its own stdin/stdout pair, plus incoming requests from multiple agents. Go channels and goroutines handle this pattern naturally.

Google Genkit Go 1.0. Google just shipped Genkit Go as a production-ready framework. It gives Go developers a structured way to build Gen AI apps with streaming, evaluation, and tracing built in. This is the biggest single boost to the Go AI ecosystem in 2026.

Rust agent frameworks are younger but ambitious. Projects like echo-agent, rustic_ai, and Aura from Mezmo ship production-grade features that the Go and Python ecosystems are still building toward.

Sandboxed execution. Rust's WASM support means you can run untrusted agent skills in a sandbox with memory limits, execution timeouts, and no filesystem access. CrossKlaw does exactly this. (A minimal sketch of the underlying technique follows at the end of this section.)

A2A protocol. echo-agent ships a full Agent-to-Agent protocol implementation, letting agents discover each other, hand off tasks, and collaborate across frameworks. This is the same pattern Google proposed with A2A, but native in Rust.

```rust
use echo_agent::prelude::*;

#[tool(name = "search", description = "Search the web")]
async fn search(query: String) -> Result<ToolResult> {
    Ok(ToolResult::success(format!("Results for: {query}")))
}

#[tokio::main]
async fn main() -> Result<()> {
    let mut agent = agent! {
        model: "qwen3-max",
        system_prompt: "You are a research assistant",
        tools: [SearchTool],
    }?;

    let answer = agent.execute("What's new in AI this week?").await?;
    println!("{answer}");
    Ok(())
}
```

Where Rust hurts. The ecosystem is fragmented across competing frameworks. Dependency graphs balloon to 150+ crates. Prototyping is slow -- you pay the type system tax upfront. And the pool of developers who know both Rust and AI tooling is tiny.

All three languages share a common gap: once you're running agents in production, you need a layer that handles scheduling, execution environments, monitoring, and multi-agent coordination -- without writing it yourself. This is where platforms like Nebula come in. Nebula gives you the orchestration runtime so your agents can be written in whatever language makes sense for their job -- Python for RAG, Go for the API proxy, Rust for the sandboxed executor -- while the platform handles deployment, secrets, triggers, and cross-agent communication.
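To make the sandboxing point above concrete, here is a minimal sketch of fuel-metered WASM execution using the wasmtime crate (assuming wasmtime 20+ APIs; the "run" export name and the fuel budget are illustrative assumptions, and this is not tied to any of the frameworks named above):

```rust
use wasmtime::{Config, Engine, Linker, Module, Store};

// Run an untrusted module with a hard execution budget and no host imports,
// so the guest gets no filesystem or network access.
fn run_untrusted(wasm_bytes: &[u8]) -> wasmtime::Result<i32> {
    let mut config = Config::new();
    config.consume_fuel(true); // meter the guest's execution

    let engine = Engine::new(&config)?;
    let module = Module::new(&engine, wasm_bytes)?;

    let mut store = Store::new(&engine, ());
    store.set_fuel(1_000_000)?; // guest traps once this budget is exhausted

    // Empty linker: a module that demands host imports fails to instantiate.
    let linker = Linker::new(&engine);
    let instance = linker.instantiate(&mut store, &module)?;
    let run = instance.get_typed_func::<(), i32>(&mut store, "run")?;
    run.call(&mut store, ())
}
```

Memory limits can be layered on with a Store resource limiter; the fuel mechanism shown here covers the execution-timeout half of the sandbox.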
You don't have to choose one language. You choose the right language for each component, and the orchestration layer ties them together.

Use Python when:

- You're prototyping or iterating fast
- The task centers on ML inference, embedding, or RAG
- You need the largest possible community and ecosystem
- You're fine with 80-150 MB per service instance

Use Go when:

- You're building the serving/infrastructure layer
- Cold start time and memory budget matter (containers, serverless)
- You need a governance proxy, policy engine, or MCP bridge
- You want a single-binary deploy with zero runtime dependencies

Use Rust when:

- You need WASM sandboxing for untrusted code
- Memory safety is a hard requirement (security-critical agent paths)
- You want compile-time guarantees on tool input/output schemas
- You're willing to accept slower iteration for maximum production reliability

The "Python vs Go" framing is a false choice. Production AI agent systems in 2026 look like this: a Python RAG pipeline feeds context into a Go API server that enforces governance and routes tool calls, and a Rust sandbox runs untrusted code in WASM. Each component uses the language best suited to its job.

The frameworks are catching up faster than most people realize. Google Genkit Go 1.0, echo-agent's feature parity with LangGraph, and Aura's production-ready MCP runtime all landed in the last six months. Choose your stack by the layer, not by the hype.

This article is part of the "Developer Tool Showdowns" series -- practical comparisons to help you make informed engineering decisions.

Rust Blog
~4 min read · May 4, 2026

Rust is participating in Outreachy

The Rust Project has been building up a good history of participating in various open-source mentorship programs, including Google Summer of Code for three years (including this year) and previously OSPP. We're happy to announce that this year we are also participating in Outreachy, starting with the May 2026 cohort.

Each of these mentorship programs has different criteria for eligibility, depending on who they target and the motivations of the program. Outreachy provides internships in open source to people from any background who face underrepresentation, systemic bias, or discrimination in the technical industry where they are living. You can learn more about the Outreachy program on their website.

What is Outreachy and how is it different from Google Summer of Code?

Outreachy is similar to Google Summer of Code (GSoC) in some aspects, but different in others. First off, unlike GSoC, Outreachy interns first apply to the overall program and only then can apply to specific communities. Second, while GSoC applicants often submit various contributions prior to their application, Outreachy has a dedicated period where contributions are not just optional, but required. Finally, Outreachy applicants submit an application similar to GSoC applications, and communities pick interns based on those applications and the interns' contributions. Outreachy has two internship periods per year, one running from May to August (in which we are currently participating) and one from December to March.

The other major difference between Google Summer of Code and Outreachy is the source of intern stipends. For GSoC, Google graciously covers contributor stipends and overhead. For Outreachy, communities instead cover the interns' stipends and overhead.

We are mentoring 4 interns for the May 2026 cohort

Because of limited funding availability and mentoring capacity, the Rust Project decided to select four interns for mentorship. We'll briefly share these projects below.

Calling overloaded C++ functions from Rust

Ajay Singh has been selected, mentored by teor, Taylor Cramer, and Ethan Smith. This project aims to implement an experimental feature for calling overloaded C++ functions from Rust, and to begin testing that feature in a few representative use cases.

Code coverage of the Rust compiler at scale

Akintewe Oluwasola has been selected, mentored by Jack Huey. This project aims to develop the workflows to run and analyze code coverage of the compiler at the scale of the entire compiler test suite and on ecosystem crates detected by crater. The hope is to be able to detect when the compiler is inadequately tested, both within the compiler and in the ecosystem, and to build tools to do continuous analysis on this.

Fuzzing the a-mir-formality type system implementation

Tunde-Ajayi Olamiposi has been selected, mentored by Niko Matsakis, Rémy Rakic, and tiif. This project aims to implement fuzzing for a-mir-formality, an in-progress model for Rust's type and trait system. The goal is to generate programs in order to identify rules with underspecified semantics in a-mir-formality.

Improve the security of GitHub Actions of the Rust Project

oghenerukevwe Sandra Idjighere has been selected, mentored by Marco Ieni and Ubiratan Soares. This project aims to improve the security of GitHub Actions workflows of the repositories owned by the Rust Project.
It will develop tools and workflows, integrating with existing software, to analyze GitHub repositories, detect whether they follow security best practices, fix existing issues, and ensure that good security practices are followed in the future.

What's next

Over the next 3 months, the interns will work closely with their mentors to make progress on their projects. When the internship period is over, we'll write another blog post to share the results! See you then!

We also want to thank all the people who submitted applications and made contributions. It was quite tough to decide which applicants to select. Hopefully we will participate in Outreachy again in the future, and there will be other opportunities to take part. We also very much welcome you to stick around and continue being involved - there are plenty of places in the Rust Project with opportunities to get involved.

Rust Blog
~3 min read · May 1, 2026

Raising the baseline for the `nvptx64-nvidia-cuda` target

The nvptx64-nvidia-cuda target is a compilation target for NVIDIA GPUs. When using this target, the final output is PTX. Two version choices shape that output: a GPU architecture (for example, sm_70, sm_80, …), which determines which GPUs can run the PTX, and a PTX ISA version, which determines which CUDA driver versions can load (and JIT-compile) the PTX.

In Rust 1.97 (scheduled for release on July 9, 2026), the baseline PTX ISA version and GPU architecture for nvptx64-nvidia-cuda will be increased. These changes affect both the Rust compiler (rustc) and related host tooling, and they make it impossible to generate PTX artifacts compatible with older GPUs and older CUDA drivers. The new minimum supported versions will be:

- PTX ISA 7.0 (requires a CUDA 11 driver or newer)
- SM 7.0 (GPUs with compute capability below 7.0 are no longer supported)

Why are the requirements being changed?

Until now, Rust has supported emitting PTX for a wide range of GPU architectures and PTX ISA versions. In practice, several defects existed that could cause valid Rust code to trigger compiler crashes or miscompilations. Raising the baseline addresses these issues and enables more complete support for the remaining supported hardware.

Removing support affects users of the architectures being removed. In this case, the most recent affected GPU architectures date back to 2017 and are no longer actively supported by NVIDIA. We therefore expect the overall impact of this change to be limited. Maintaining support for these architectures would require substantial effort. These removals let us focus development efforts on improving correctness and performance for currently supported hardware.

What happens when I update to Rust 1.97?

If you need to target a CUDA driver that does not support PTX ISA 7.0 (CUDA 10-era drivers and older), Rust 1.97 will no longer be able to generate PTX compatible with that environment. Similarly, if you need to run on GPUs with compute capability below 7.0 (for example, Maxwell or Pascal), Rust 1.97 will no longer be able to generate compatible PTX for those GPUs.

Assuming you are targeting a CUDA driver compatible with CUDA 11 or newer and using GPUs with compute capability 7.0 or newer:

- If you do not specify -C target-cpu, the new default will be sm_70, and your build should continue to work (but will no longer be compatible with pre-Volta GPUs).
- If you currently specify an older -C target-cpu (for example, sm_60), you will need to either remove that flag and let it default to sm_70, or update it to sm_70 or a newer architecture.
- If you already specify -C target-cpu=sm_70 (or newer), there should be no behavioral changes from this update.

For more details on building and configuring nvptx64-nvidia-cuda, see the platform support documentation.
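For readers who want to pin the architecture explicitly rather than rely on the default, here is a minimal sketch of a project-level Cargo configuration; the file location and flags are standard Cargo/rustc mechanisms, and the choice of sm_70 simply mirrors the new baseline described above:

```toml
# .cargo/config.toml: build for the CUDA target with an explicit GPU
# architecture, so the intent survives the default bump in Rust 1.97.
[build]
target = "nvptx64-nvidia-cuda"
rustflags = ["-C", "target-cpu=sm_70"]
```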

Rust Blog
~5 min read · Apr 30, 2026

Announcing Google Summer of Code 2026 selected projects

As previously announced, the Rust Project is participating in Google Summer of Code (GSoC) 2026. GSoC is a global program organized by Google that is designed to bring new contributors to the world of open source.

A few months ago, we published a list of GSoC project ideas and started discussing these projects with potential GSoC applicants on our Zulip. We had many interesting discussions with the potential contributors, and even saw some of them making non-trivial contributions to various Rust Project repositories before GSoC officially started!

The applicants prepared and submitted their project proposals by the end of March. This year, we received 96 proposals, a 50% increase from last year. We are glad that there was again a lot of interest in our projects! Like many other GSoC organizations this year, we somewhat struggled with AI-generated proposals and low-quality contributions generated using AI agents, but it stayed manageable.

GSoC requires us to produce an ordered list of the best proposals, which is always challenging, as Rust is a big project with many priorities. Our mentors examined the submitted proposals and evaluated them based on their prior interactions with the given applicant, their contributions so far, and the quality of the proposal itself, but also the importance of the proposed project for the Rust Project and its wider community. We also had to take mentor bandwidth and availability into account. Unfortunately, we had to cancel some projects due to several mentors losing their funding for Rust work in the past few weeks.

As is usual in GSoC, even though some project topics received multiple proposals¹, we had to pick only one proposal per project topic. We also had to choose between proposals targeting different work to avoid overloading a single mentor with multiple projects. In the end, we narrowed the list down to the best proposals that we could still realistically support with our available mentor pool. We submitted this list and eagerly awaited how many of them would be accepted into GSoC.

Selected projects

On the 30th of April, Google announced the accepted projects. We are happy to share that 13 Rust Project proposals were accepted by Google for Google Summer of Code 2026. That is a lot of projects! We are really happy and excited about GSoC 2026!
Below you can find the list of accepted proposals (in alphabetical order), along with the names of their authors and the assigned mentor(s):

- A Frontend for Safe GPU Offloading in Rust by Marcelo Domínguez, mentored by Manuel Drehwald
- Adding WebAssembly Linking Support to Wild by Kei Akiyama, mentored by David Lattimore
- Bringing autodiff and offload into Rust CI by Shota Sugano, mentored by Manuel Drehwald
- Debugger for Miri by Mohamed Ali Mohamed, mentored by Oli Scherer
- Implementing impl and mut restrictions by Ryosuke Yamano, mentored by Jacob Pratt and Urgau
- Improving Ergonomics and Safety of serialport-rs by Tanmay, mentored by Christian Meusel
- libc: transition differing bit-width time and offset variants and deprecate bug-prone constants by Adam Martinez, mentored by Trevor Gross
- Link Linux kernel and its Modules with Wild by Vishruth Thimmaiah, mentored by David Lattimore
- Migrating rust-analyzer assists to SyntaxEditor by Shourya Sharma, mentored by Chayim Refael Friedman and Lukas Wirth
- Port std::arch test suite to rust-lang/rust by Sumit Kumar, mentored by Jakub Beránek and Folkert de Vries
- Reorganizing tests/ui/issues by zedddie, mentored by Teapot and Kivooeo
- Utilize debugger APIs to improve debug info test accuracy and error reporting by Anthony Bolden, mentored by Jakub Beránek and Jieyou Xu
- XDG path support for rustup by Guicheng Liu, mentored by rami3l

Congratulations to all applicants whose project was selected! Our mentors are looking forward to working with you on these exciting projects to improve the Rust ecosystem. You can expect to hear from us soon, so that we can start coordinating the work on your GSoC projects. We are excited to mentor three contributors who already experienced GSoC with us in the previous year. Welcome back, Kei, Marcelo and Shourya!

We would like to thank all the applicants whose proposal was sadly not accepted for their interactions with the Rust community and contributions to various Rust projects. There were some great proposals that did not make the cut, in large part because of limited mentorship capacity. However, even if your proposal was not accepted, we would be happy if you would consider contributing to the projects that got you interested, even outside GSoC! Our project idea list is still current and could serve as a general entry point for contributors who would like to work on projects that would help the Rust Project and the Rust ecosystem. Some of the Rust Project Goals are also looking for help. There is a good chance we'll participate in GSoC next year as well (though we can't promise anything at this moment), so we hope to receive your proposals again in the future!

The accepted GSoC projects will run for several months. After GSoC 2026 finishes (in autumn of 2026), we will publish a blog post in which we will summarize the outcome of the accepted projects.

¹ The most popular project topic received fourteen different proposals!

Rust Blog
~3 min read · Apr 16, 2026

Announcing Rust 1.95.0

The Rust team is happy to announce a new version of Rust, 1.95.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.95.0 with:

```
$ rustup update stable
```

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.95.0. If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.95.0 stable

cfg_select!

Rust 1.95 introduces a cfg_select! macro that acts roughly like a compile-time match on cfgs. It fulfills the same purpose as the popular cfg-if crate, although with a different syntax. cfg_select! expands to the right-hand side of the first arm whose configuration predicate evaluates to true. Some examples:

```rust
cfg_select! {
    unix => {
        fn foo() { /* unix specific functionality */ }
    }
    target_pointer_width = "32" => {
        fn foo() { /* non-unix, 32-bit functionality */ }
    }
    _ => {
        fn foo() { /* fallback implementation */ }
    }
}
```

```rust
let is_windows_str = cfg_select! {
    windows => "windows",
    _ => "not windows",
};
```

if-let guards in matches

Rust 1.88 stabilized let chains. Rust 1.95 brings that capability into match expressions, allowing for conditionals based on pattern matching.

```rust
match value {
    Some(x) if let Ok(y) = compute(x) => {
        // Both `x` and `y` are available here
        println!("{}, {}", x, y);
    }
    _ => {}
}
```

Note that the compiler will not currently consider the patterns matched in if-let guards as part of the exhaustiveness evaluation of the overall match, just like if guards.

Stabilized APIs

- MaybeUninit<[T; N]>: From<[MaybeUninit<T>; N]>
- MaybeUninit<[T; N]>: AsRef<[MaybeUninit<T>; N]>
- MaybeUninit<[T; N]>: AsRef<[MaybeUninit<T>]>
- MaybeUninit<[T; N]>: AsMut<[MaybeUninit<T>; N]>
- MaybeUninit<[T; N]>: AsMut<[MaybeUninit<T>]>
- [MaybeUninit<T>; N]: From<MaybeUninit<[T; N]>>
- Cell<[T; N]>: AsRef<[Cell<T>; N]>
- Cell<[T; N]>: AsRef<[Cell<T>]>
- Cell<[T]>: AsRef<[Cell<T>]>
- bool: TryFrom<{integer}>
- AtomicPtr::update
- AtomicPtr::try_update
- AtomicBool::update
- AtomicBool::try_update
- AtomicI<N>::update and AtomicI<N>::try_update (for each atomic integer width)
- AtomicU<N>::update and AtomicU<N>::try_update (for each atomic integer width)
- cfg_select!
- mod core::range
- core::range::RangeInclusive
- core::range::RangeInclusiveIter
- core::hint::cold_path
- <*const T>::as_ref_unchecked
- <*mut T>::as_ref_unchecked
- <*mut T>::as_mut_unchecked
- Vec::push_mut
- Vec::insert_mut
- VecDeque::push_front_mut
- VecDeque::push_back_mut
- VecDeque::insert_mut
- LinkedList::push_front_mut
- LinkedList::push_back_mut
- Layout::dangling_ptr
- Layout::repeat
- Layout::repeat_packed
- Layout::extend_packed

These previously stable APIs are now stable in const contexts:

- fmt::from_fn
- ControlFlow::is_break
- ControlFlow::is_continue

Destabilized JSON target specs

Rust 1.95 removes support on stable for passing a custom target specification to rustc. This should not affect any Rust users using a fully stable toolchain, as building the standard library (including just core) already required using nightly-only features. We're also gathering use cases for custom targets on the tracking issue as we consider whether some form of this feature should eventually be stabilized.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.95.0

Many people came together to create Rust 1.95.0. We couldn't have done it without all of you. Thanks!
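Among the collection APIs stabilized above, the new *_mut pushers return a mutable reference to the element they just inserted. A minimal sketch of Vec::push_mut, assuming the fn push_mut(&mut self, value: T) -> &mut T shape from the stabilization (the vector contents are illustrative):

```rust
fn main() {
    let mut names: Vec<String> = Vec::new();

    // push_mut pushes the value and hands back a mutable reference to it,
    // avoiding a second lookup such as `names.last_mut().unwrap()`.
    let last = names.push_mut(String::from("rust"));
    last.push_str("acean");

    assert_eq!(names[0], "rustacean");
}
```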

Rust Blog
~2 min read · Apr 4, 2026

docs.rs: building fewer targets by default

On 2026-05-01, docs.rs will make a breaking change to its build behavior. Today, if a crate does not define a targets list in its docs.rs metadata, docs.rs builds documentation for a default list of five targets. Starting on 2026-05-01, docs.rs will instead build documentation for only the default target unless additional targets are requested explicitly.

This is the next step in a change we first introduced in 2020, when docs.rs added support for opting into fewer build targets. Most crates do not compile different code for different targets, so building fewer targets by default is a better fit for most releases. It also reduces build times and saves resources on docs.rs.

This change only affects:

- new releases
- rebuilds of old releases

How is the default target chosen?

If you do not set default-target, docs.rs uses the target of its build servers: x86_64-unknown-linux-gnu. You can override that by setting default-target in your docs.rs metadata:

```toml
[package.metadata.docs.rs]
default-target = "x86_64-apple-darwin"
```

How do I build documentation for additional targets?

If your crate needs documentation to be built for more than the default target, define the full list explicitly in your Cargo.toml:

```toml
[package.metadata.docs.rs]
targets = [
    "x86_64-unknown-linux-gnu",
    "x86_64-apple-darwin",
    "x86_64-pc-windows-msvc",
    "i686-unknown-linux-gnu",
    "i686-pc-windows-msvc",
]
```

When targets is set, docs.rs will build documentation for exactly those targets. docs.rs still supports any target available in the Rust toolchain. Only the default behavior is changing.

Rust Blog
~5 min read · Apr 4, 2026

Changes to WebAssembly targets and handling undefined symbols

Rust's WebAssembly targets are soon going to experience a change which has a risk of breaking existing projects, and this post is intended to notify users of this upcoming change, explain what it is, and explain how to handle it. Specifically, all WebAssembly targets in Rust have been linked using the --allow-undefined flag to wasm-ld, and this flag is being removed.

What is --allow-undefined?

WebAssembly binaries in Rust today are all created by linking with wasm-ld. This serves a similar purpose to ld, lld, and mold, for example; it takes separately compiled crates/object files and creates one final binary. Since the first introduction of WebAssembly targets in Rust, the --allow-undefined flag has been passed to wasm-ld. This flag is documented as:

```
--allow-undefined
    Allow undefined symbols in linked binary. This option is equivalent to
    --import-undefined and --unresolved-symbols=ignore-all
```

The term "undefined" here specifically means with respect to symbol resolution in wasm-ld itself. Symbols used by wasm-ld correspond relatively closely to what native platforms use; for example, all Rust functions have a symbol associated with them. Symbols can be referred to in Rust through extern "C" blocks, for example:

```rust
unsafe extern "C" {
    fn mylibrary_init();
}

fn init() {
    unsafe {
        mylibrary_init();
    }
}
```

The symbol mylibrary_init is an undefined symbol. It is typically defined by a separate component of a program, such as an externally compiled C library, which will provide a definition for this symbol. By passing --allow-undefined to wasm-ld, however, the above would generate a WebAssembly module like so:

```
(module
  (import "env" "mylibrary_init" (func $mylibrary_init))
  ;; ...
)
```

This means that the undefined symbol was ignored and ended up as an imported symbol in the final WebAssembly module that is produced. The precise history here is somewhat lost to time, but the current understanding is that --allow-undefined was effectively required in the very early days of introducing wasm-ld to the Rust toolchain. This historical workaround stuck around until today and hasn't changed.

What's wrong with --allow-undefined?

By passing --allow-undefined on all WebAssembly targets, rustc is introducing diverging behavior between other platforms and WebAssembly. The main risk of --allow-undefined is that misconfiguration or mistakes in building can result in broken WebAssembly modules being produced, as opposed to compilation errors. This means that the proverbial can is kicked down the road, lengthening the distance from where the problem is discovered to where it was introduced. Some example problematic situations are:

- If mylibrary_init was typo'd as mylibraryinit, then the final binary would import the mylibraryinit symbol instead of calling the linked mylibrary_init C symbol.
- If mylibrary was mistakenly not compiled and linked into a final application, then the mylibrary_init symbol would end up imported rather than producing a linker error saying it's undefined.
- If external tooling is used to process a WebAssembly module, such as wasm-bindgen or wasm-tools component new, these tools don't know what to do with "env" imports by default and are likely to produce an error message that isn't clearly connected back to the original source code and where the symbol was imported from.
- For web users, if you've ever seen an error along the lines of

```
Uncaught TypeError: Failed to resolve module specifier "env". Relative references must start with either "/", "./", or "../".
```
then this can mean that "env" leaked into the final module unexpectedly, and the true error is the undefined symbol, not the lack of "env" items provided.

All native platforms consider undefined symbols to be an error by default, and thus by passing --allow-undefined rustc is introducing surprising behavior on WebAssembly targets. The goal of the change is to remove this surprise and behave more like native platforms.

What is going to break, and how to fix it?

In theory, not a whole lot is expected to break from this change. If the final WebAssembly binary imports unexpected symbols, then it's likely that the binary won't be runnable in the desired embedding, as the desired embedding probably doesn't provide a definition for the symbol. For example, if you compile an application for wasm32-wasip1 and the final binary imports mylibrary_init, then it'll fail to run in most runtimes because it's considered an unresolved import. This means that most of the time this change won't break users; it'll instead provide better diagnostics.

The reason for this post, however, is that it's possible users could be intentionally relying on this behavior. For example, your application might have:

```rust
unsafe extern "C" {
    fn js_log(n: u32);
}

// ...
```

And then perhaps some JS code that looks like:

```js
let instance = await WebAssembly.instantiate(module, {
    env: {
        js_log: n => console.log(n),
    }
});
```

Effectively, it's possible for users to explicitly rely on the behavior of --allow-undefined generating an import in the final WebAssembly binary. If users encounter this, then the code can be fixed through a #[link] attribute which explicitly specifies the wasm_import_module name:

```rust
#[link(wasm_import_module = "env")]
unsafe extern "C" {
    fn js_log(n: u32);
}

// ...
```

This will have the same behavior as before and will no longer be considered an undefined symbol by wasm-ld, and it'll work both before and after this change. Affected users can also compile with -Clink-arg=--allow-undefined to quickly restore the old behavior.

When is this change being made?

Removing --allow-undefined on wasm targets is being done in rust-lang/rust#149868. That change is slated to land in nightly soon, and will then be released with Rust 1.96 on 2026-05-28. If you see any issues as a result of this fallout, please don't hesitate to file an issue on rust-lang/rust.

Rust Blog
~2 min read · Mar 21, 2026

Security advisory for Cargo

The Rust Security Response Team was notified of a vulnerability in the third-party crate tar, used by Cargo to extract packages during a build. The vulnerability, tracked as CVE-2026-33056, allows a malicious crate to change the permissions on arbitrary directories on the filesystem when Cargo extracts it during a build.

For users of the public crates.io registry, we deployed a change on March 13th to prevent uploading crates exploiting this vulnerability, and we audited all crates ever published. We can confirm that no crates on crates.io are exploiting this. For users of alternate registries, please contact the vendor of your registry to verify whether you are affected by this.

The Rust team will release Rust 1.94.1 on March 26th, 2026, updating to a patched version of the tar crate (along with other non-security fixes for the Rust toolchain), but that won't protect users of older versions of Cargo using alternate registries.

We'd like to thank Sergei Zimmerman for discovering the underlying tar crate vulnerability and notifying the Rust project ahead of time, and William Woodruff for directly assisting the crates.io team with the mitigations. We'd also like to thank the Rust project members involved in this advisory: Eric Huss for patching Cargo; Tobias Bieniek, Adam Harvey and Walter Pearce for patching crates.io and analyzing existing crates; Emily Albini and Josh Stone for coordinating the response; and Emily Albini for writing this advisory.

Rust Blog
~8 min read · Mar 20, 2026

What we heard about Rust's challenges

Author's note

The original version of this article has been retracted. I used an LLM to write the first draft, though this came after many hours of planning and going through the data and analyses to identify the points to be made, as well as me going through the post line by line, editing it into my voice and verifying that the wording and scope of the text were accurate. However, many people still felt like the LLM-speak bled through in ways that felt uncomfortable. Given this, I and other members of the Rust Project have decided to retract the post in its entirety.

I stand by the content of the post. As I said, the LLM did not decide the points to be made - those were settled well in advance of even beginning to write the blog post. And, admittedly, I did need to make edits to dampen the scope of them (in large part because I couldn't find specific quotes to substantiate them, even though I often "felt" that they were true given what I know as a Rust Project member), but in general I (and the Vision Doc team) defined the content, not an LLM.

Many people thought that the blog post felt "empty", with no "real substance". While I see the point here, this is unfortunately just how the data played out, given the goal of this effort. The Vision Doc team conducted ~70 interviews (mostly 1:1), which were the basis for the conclusions in this blog post. This is a lot of data, and it's hard to fully capture its essence in a single blog post. And yet, it is also not enough data to fully capture the nuance of differences across groups of different types. On top of this, it shouldn't be that unexpected that the problems we heard about in these interviews are the same problems that we (and many others) mostly already knew existed. The insight these interviews give us is that they allow us to begin to capture for whom which issues are most prominent. The insights we identify and the conclusions we make are supported by the data we have gathered. When making these posts, the Vision Doc team has tried to stay as neutral as possible, doing our best not to exert bias by making any claims that cannot be supported as stated by the data itself. With drastically more time, I would have loved to pull in data from the ~5500 survey responses we got, which ultimately could help us make stronger claims or conclusions, but unfortunately that is time that I haven't had. That shouldn't diminish the substance of the insights and conclusions we have been able to make, though.

Wording matters, though. And it's clear that to many people, the blog post as written didn't meet the mark that they want. LLMs are a tool that many people use (including me, obviously) to varying degrees to help do things that they couldn't do before (whether for lack of skill, lack of time, or lack of motivation). In this case, I used an LLM to compensate for the lack of time to dedicate to sifting through transcripts for the ~70 interviews we did, and the many analyses that followed, to find specific quotes and write an early draft. It certainly did not help that writing and editing of this post happened over the span of about 3 months - meaning that things that "worked" in early edits did not necessarily work in later edits.

This all being said, I think that we as a Vision Doc team owe it to the Rust Project and the community to share (at least to some extent) what we have learned here. So, I have taken the original challenges identified by the team (without the recommendations or conclusions) and will provide a brief personal commentary on them.
I've chosen to exclude any specific quotes and instead focus on the "high level" ideas. So, as a disclaimer, this means that the statements here will be much more biased than what we typically want to publish as part of the Vision Doc work.

Across the ~70 interviews the Vision Doc team conducted, we heard a lot of complaints. Of course, we tried to keep these interviews pretty high-level, not focusing on any particular technical details. Rather, we wanted to get a general sense of the difficulties people encountered, among the other topics discussed during these interviews. Here, we've identified a few challenges common to most people, and then a few challenges that are more domain-specific.

Challenges that are universal

We heard a number of things that basically everyone said were an issue for them, in some capacity. Doing things to address these issues could have a universal impact, but that is not to say that these issues necessarily block people from using Rust. The universal challenges, you've definitely heard before. If you write Rust, you've probably encountered them. That's what makes them universal. However, the point is that we share the data that we gather, and the fact that we have learned that these challenges do affect everyone is data in itself: we have sampled different domains, different experience levels, and different backgrounds, and we have found that these challenges exist for everyone.

Compilation performance

Everybody knows that "compile times" are a thing that Rust is known for. This is an ever-moving target: the Rust Project tracks performance of the compiler on every merged change to catch regressions, many people have attempted many times to make substantial progress here, and yet there is always more that we want to or could do. The good news is that, among our interviews, nobody really told us that compilation time currently blocks them. We did hear things to the effect of "if we keep writing more and more Rust code, we may eventually get to a point where compile times are an issue", so that's not to say that we're "in the clear", but it is important to think about how this matters on balance with other challenges that people face.

Borrow checking and ownership

Again, another thing that Rust is known for. Borrow checking and ownership is a hard topic that basically every beginner struggles with. However, we found that "Rust experts" don't really complain about the borrow checker anymore: it is a challenge that goes away with experience. That's not to say we can't do better for beginners, but it's not clear exactly what that means. Certainly learning materials and compiler error messages help, and these are areas where we've sincerely tried, in the past and today, to provide the best experience. Despite that, the borrow checker remains a difficult part of the Rust language. We have ongoing efforts to improve the borrow checker, and it's likely that there are (for example) language features that may make this better. (Or worse!) We found previously that what makes Rust great is the balance that we put on reliability, efficiency, and versatility. And we need to be careful when adjusting something as core as the borrow checker to maintain this balance.

Async

When conducting our interviews, async was consistently something that many people had issues with. Beginners often said that they basically completely ignore async while learning.
People who do use async often said that the choice wasn't always clear, and that even though using async feels like the right choice now, they still encounter issues. Fortunately, unlike performance and the borrow checker, we have a number of clear "next steps" for async (e.g. async fns in traits, async drop, async versions of std traits) that will begin to solve these issues and close the gap. Of course, for other things (like the coloring problem), we don't have good "solutions" just yet.

Ecosystem crates

We previously talked about how crates.io provides a wealth of resources for people to turn to, but people still run into issues. For one, when there are crates that do the thing people want, they need to know: which crates do the things they need, which crates they can trust, and which crates are overall the "best" for them. Further, in some domains and industries, there aren't crates that do what people need; Rust support for some industries is still too immature.

Challenges that are domain-specific

Though more challenging to identify given the limited diversity in the interviews we conducted, we were still able to find some domain-specific challenges: at least, we were able to hear about some challenges that seem to disproportionately affect some domains over others.

Embedded

From developers programming for embedded systems, we heard most often about difficulties that fundamentally boil down to constrained resource management. For example, embedded developers are often unable to use the vast majority of the crate ecosystem, they often have trouble using the standard library, and the debugging experience is generally harder. Things that are "normal" for most Rust developers are oftentimes "special" for embedded developers.

Safety-critical

We made an entire post about shipping Rust in safety-critical systems. The biggest issue for safety-critical developers with Rust is the lack of availability or maturity of tools to certify their Rust code.

GUI development

The biggest issue we heard from GUI developers is compilation time, but it is slightly different from the general case: GUI development depends heavily on seeing visual changes, which is a different workflow than just "check if the code compiles and passes tests".

Rust Blog
~5 min read · Mar 13, 2026

Call for Testing: Build Dir Layout v2

We would welcome people trying out the nightly-only cargo -Zbuild-dir-new-layout and reporting any issues. While the layout of the build dir is internal-only, many projects need to rely on the unspecified details due to missing features within Cargo. While we've performed a crater run, that won't cover everything, and we need help identifying tools and processes that rely on the details, and reporting issues to these projects so they can update to the new layout or support both.

How to test this?

With at least nightly 2026-03-10, run your tests, release processes, and anything else that may touch build-dir/target-dir with the -Zbuild-dir-new-layout flag. For example:

```
$ cargo test -Zbuild-dir-new-layout
```

Note: if you see failures, the problem may not be isolated to just -Zbuild-dir-new-layout. With Cargo 1.91, users can separate where to store intermediate build artifacts (build-dir) and final artifacts (still in target-dir). You can verify this by running with only CARGO_BUILD_BUILD_DIR=build set.

We are evaluating changing the default for build-dir in #16147. Outcomes may include:

- Fixing local problems
- Reporting problems in upstream tools, with a note on the tracking issue for others
- Providing feedback on the tracking issue

Known failure modes:

- Inferring a [[bin]]'s path from a [[test]]'s path:
  - Use std::env::var_os("CARGO_BIN_EXE_*") for Cargo 1.94+, maybe keeping the inference as a fallback for older Cargo versions
  - Use env!("CARGO_BIN_EXE_*") (see the sketch at the end of this post)
- Build scripts looking up target-dir from their binary or OUT_DIR: see Issue #13663
  - Update current workarounds to support the new layout
- Looking up user-requested artifacts from rustc: see Issue #13672
  - Update current workarounds to support the new layout

Library support status as of publish time:

- assert_cmd: fixed
- cli_test_dir: Issue #65
- compiletest_rs: Issue #309
- executable-path: fixed
- snapbox: fixed
- term-transcript: Issue #269
- test_bin: Issue #13
- trycmd: fixed

What is not changing?

- The layout of final artifacts within the target dir
- Nesting of build artifacts under the profile and the target tuple, if specified

What is changing?

We are switching from organizing by content type to scoping the content by the package name and a hash of the build unit and its inputs. Here is an example of the current layout, assuming you have a package named lib and a package named bin, and both have a build script:

```
build-dir/
├── CACHEDIR.TAG
└── debug/
    ├── .cargo-lock                       # file lock protecting access to this location
    ├── .fingerprint/                     # build cache tracking
    │   ├── bin-[BUILD_SCRIPT_RUN_HASH]/*
    │   ├── bin-[BUILD_SCRIPT_BIN_HASH]/*
    │   ├── bin-[HASH]/*
    │   ├── lib-[BUILD_SCRIPT_RUN_HASH]/*
    │   ├── lib-[BUILD_SCRIPT_BIN_HASH]/*
    │   └── lib-[HASH]/*
    ├── build/
    │   ├── bin-[BIN_HASH]/*              # build script binary
    │   ├── bin-[RUN_HASH]/out/           # build script run OUT_DIR
    │   ├── bin-[RUN_HASH]/*              # build script run cache
    │   ├── lib-[BIN_HASH]/*              # build script binary
    │   ├── lib-[RUN_HASH]/out/           # build script run OUT_DIR
    │   └── lib-[RUN_HASH]/*              # build script run cache
    ├── deps/
    │   ├── bin-[HASH]*                   # binary and debug information
    │   ├── lib-[HASH]*                   # library and debug information
    │   └── liblib-[HASH]*                # library and debug information
    ├── examples/                         # unused in this case
    └── incremental/...                   # managed by rustc
```
The proposed layout:

```
build-dir/
├── CACHEDIR.TAG
└── debug/
    ├── .cargo-lock                       # file lock protecting access to this location
    ├── build/
    │   ├── bin/                          # package name
    │   │   ├── [BUILD_SCRIPT_BIN_HASH]/
    │   │   │   ├── fingerprint/*         # build cache tracking
    │   │   │   └── out/*                 # build script binary
    │   │   ├── [BUILD_SCRIPT_RUN_HASH]/
    │   │   │   ├── fingerprint/*         # build cache tracking
    │   │   │   ├── out/*                 # build script run OUT_DIR
    │   │   │   └── run/*                 # build script run cache
    │   │   └── [HASH]/
    │   │       ├── fingerprint/*         # build cache tracking
    │   │       └── out/*                 # binary and debug information
    │   └── lib/                          # package name
    │       ├── [BUILD_SCRIPT_BIN_HASH]/
    │       │   ├── fingerprint/*         # build cache tracking
    │       │   └── out/*                 # build script binary
    │       ├── [BUILD_SCRIPT_RUN_HASH]/
    │       │   ├── fingerprint/*         # build cache tracking
    │       │   ├── out/*                 # build script run OUT_DIR
    │       │   └── run/*                 # build script run cache
    │       └── [HASH]/
    │           ├── fingerprint/*         # build cache tracking
    │           └── out/*                 # library and debug information
    └── incremental/...                   # managed by rustc
```

For more information on these Cargo internals, see the mod layout documentation.

Why is this being done?

ranger-ross has worked tirelessly on this as a stepping stone to cross-workspace caching, which will be easier when we can track each cacheable unit in a self-contained directory. This also unblocks work on:

- Automatic cleanup of stale build units to keep disk space use constant over time
- More granular locking so cargo test and rust-analyzer don't block on each other

Along the way, we found this helps with:

- Build performance, as the intermediate artifacts accumulate in deps/
- Content of deps/ polluting PATH during builds on Windows
- Avoiding file collisions among intermediate artifacts

While the Cargo team does not officially endorse sharing a build-dir across workspaces, that last item should reduce the chance of encountering problems for those who choose to.

Future work

We will use the experience of this layout change to help guide how and when to perform any future layout changes, including:

- Efforts to reduce path lengths, to reduce the risk of errors for developers on Windows
- Experimenting with moving artifacts out of the --profile and --target directories, allowing sharing of more artifacts where possible

In addition to narrowing scope, we did not make all of the layout changes now because some are blocked on the lock change, which is itself blocked on this layout change. We would also like to work to decouple projects from the unspecified details of build-dir.
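As a companion to the CARGO_BIN_EXE_* workaround listed under "Known failure modes" above, here is a minimal sketch of locating a [[bin]] from an integration test without inferring its path from the test binary's location (the mytool binary name is a placeholder for your own target):

```rust
// tests/cli.rs: assumes the package defines a [[bin]] named `mytool`.
#[test]
fn mytool_reports_version() {
    // Cargo sets CARGO_BIN_EXE_<name> at compile time for integration tests,
    // so this keeps working regardless of how the build dir is laid out.
    let exe = env!("CARGO_BIN_EXE_mytool");

    let output = std::process::Command::new(exe)
        .arg("--version")
        .output()
        .expect("failed to run mytool");

    assert!(output.status.success());
}
```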