Signal Hub

Go news and articles

6 articles

Dev.to (Rust/Go)
~16 min read · May 6, 2026

Building The Go Engineer: Teaching Go as a Software Engineering Discipline

Originally published on The Go Engineer: Building The Go Engineer: Teaching Go as a Software Engineering Discipline

Most Go learning material teaches the language as a sequence of syntax lessons. Variables. Slices. Structs. Interfaces. Goroutines. Channels. HTTP handlers. Maybe a database example near the end. That kind of material is useful. I learned from resources like that too. But at some point, I started noticing a gap. You can finish many Go tutorials and still not know how to think about service boundaries, tenant isolation, graceful shutdown, background workers, migrations, rate limiting, observability, validation, CI, or how to keep documentation and code from drifting apart.

That gap is why I built The Go Engineer. I did not want to build another folder of Go snippets. I wanted to build a repository-first Go engineering curriculum. The core idea behind the project is simple: a repository should not only contain the curriculum. The repository should be the curriculum. That single idea shaped almost every decision: the curriculum structure, the machine-readable registry, the validator, the code standards, the tests, the CI workflow, the documentation, the known limitations, and the flagship backend project called Opslane. The Go Engineer is my attempt to teach Go as engineering, not just programming.

Go is a small language, but production Go is not small. The syntax is intentionally simple. That is one of Go’s strengths. But the hard parts of backend engineering usually live somewhere else:

- How should packages be organized?
- Where should transactions begin and end?
- How should request cancellation flow through a system?
- Who owns a goroutine? Who closes a channel?
- What happens when the process receives SIGTERM?
- How should a service expose metrics?
- How should migrations stay synchronized?
- How do you prevent tenant data leaks?
- How do you prove the repository still matches its own documentation?

Most beginner tutorials do not need to answer those questions.
A serious engineering curriculum eventually does. That is why The Go Engineer is structured as more than a language walkthrough. The early sections teach fundamentals, but the deeper goal is to help learners build the instincts required to read, design, test, review, and maintain Go systems. The curriculum is not just asking: “Do you understand this Go feature?” It is also asking: “Can you use this feature inside a system that has boundaries, failures, tests, documentation, deployment, and maintenance pressure?” That is the difference between learning Go syntax and becoming a Go engineer.

One of the most important decisions I made was to treat the repository itself as a teaching surface. The Go Engineer is not only a place where lessons live. It is a system with:

- a locked curriculum architecture
- a machine-readable curriculum registry
- runnable lessons
- README-first explanations
- tests and verification surfaces
- code standards
- testing standards
- CI validation
- a curriculum validator
- known limitations
- a flagship backend project

That matters because real engineering work is never only about code. A real project has many surfaces that must stay aligned: source code, tests, documentation, examples, module maps, migration files, CI workflows, architecture decisions, release notes, contribution rules, and validation scripts. When those surfaces drift, trust drops. The README says one thing. The code does another. The validator misses it. The learner gets stuck. The maintainer loses confidence. The repository stops teaching clearly.

I wanted The Go Engineer to fight that drift directly. So I started treating consistency as something the repository should enforce, not something I should simply remember. That is why the project has proof surfaces. A proof surface is any part of a repository that helps prove the system is still what it claims to be.
In The Go Engineer, proof surfaces include tests, CI, curriculum metadata, validation scripts, progress maps, documentation standards, module READMEs, and known limitations. That may sound strict for an educational project. I think it is exactly the point. Good engineering is not about having no constraints. It is about building the right constraints into the system.

The flagship project in The Go Engineer is called Opslane. I built Opslane because isolated exercises are not enough. Exercises are useful for learning individual concepts. But they do not force enough system-level decisions. A real backend, even an educational backend, forces questions like:

- Where does configuration live?
- How does the application start?
- How are dependencies wired?
- Where does authentication belong?
- How is tenant scope enforced?
- How are migrations applied?
- How are background workers stopped?
- How are metrics exposed?
- How does rate limiting work across multiple instances?
- How does the service shut down safely?
- What should be production-grade, and what should remain a teaching implementation?

That is why Opslane exists. Opslane is where the curriculum becomes concrete. It is not a toy “hello world” API. It is a production-shaped backend that brings together configuration, PostgreSQL, authentication, tenant isolation, order workflows, payments, caching, workers, observability, rate limiting, migrations, Docker, CI, and graceful shutdown. The goal is not to pretend Opslane is a drop-in SaaS template. The goal is to teach the shape of production code. That distinction matters.

One of my favorite principles in Go is this: architecture should make ownership visible. That idea shows up throughout Opslane.
The project uses a clear application entrypoint and internal implementation boundaries:

- cmd/server
- internal/auth
- internal/config
- internal/db
- internal/events
- internal/handlers
- internal/logging
- internal/metrics
- internal/middleware
- internal/otel
- internal/payment
- internal/ratelimit
- internal/services
- internal/workers

The exact folder names are less important than the lesson behind them. When someone opens the project, I want them to quickly understand: Where does the server start? Where does configuration load? Where does authentication live? Where are persistence boundaries? Where are background workers? Where are metrics collected? Where is rate limiting enforced? Where is shutdown coordinated? A codebase should not make readers reverse-engineer ownership. Good structure reduces guesswork. That is especially important in a learning repository. The structure teaches before the code is even read.

In Opslane, dependency wiring is intentionally direct. The server builds its dependencies in one visible place: database, store, services, event bus, worker pools, metrics, tracing, rate limiter, and HTTP application. That style is not flashy. That is why I like it. For teaching code, explicit composition is more valuable than hiding everything behind framework magic. When dependencies are assembled directly, a learner can ask useful questions: Who owns the database connection? Who owns the metrics registry? Who owns the root application context? Who stops the workers? What happens if a worker pool fails to start? What does the HTTP layer depend on? What is optional, and what is required? These questions are not distractions from engineering. They are engineering. I want learners to see that a backend is not only a collection of handlers. It is a runtime system with ownership, lifecycle, and failure modes.

One of the core backend lessons in Opslane is tenant isolation. Many simple examples start with: find user by email. That is fine for a small demo.
But in a tenant-aware system, identity is not just about the user. It is also about the boundary where that user is allowed to exist. A better question is: find this user inside this tenant boundary. That is why tenant scope appears throughout Opslane: in the models, repository contracts, authentication flow, handlers, and service methods. This is a deliberate teaching choice. Tenant isolation should not be sprinkled into a codebase later as a patch. It should be part of the system’s shape from the beginning. That is the lesson I want learners to absorb: security and tenancy are architectural concerns, not decorations.

One important hardening step in Opslane was making migrations more explicit. The project now has formal SQL migrations for tenants, users, orders, payments, seed data, and rate limits. It also keeps startup migrations aligned with those SQL files. That alignment matters. Migration drift is one of those problems that looks small until it breaks a real environment. The application starts one way. The manual migration path behaves another way. A table exists locally but not in CI. A feature depends on a schema that only one path creates. That is not just a database problem. It is a proof problem. If the repository claims to teach production-shaped backend engineering, migrations must be treated as a first-class surface.

That is why I added validation to detect Opslane consistency issues, including progress drift and migration drift. The repository should not rely on me remembering to keep those surfaces aligned. The repository should help prove they are aligned.

A backend that cannot explain itself is not production-shaped. That is why Opslane includes structured logging, correlation IDs, metrics, and tracing concepts. But adding observability packages is not enough. A common mistake is to create observability code and never wire it into the running application. That is not observability. That is unused code.
Opslane now wires metrics into the HTTP stack and exposes a Prometheus-compatible /metrics endpoint. The application records request counts, response classes, and latency histograms. The metrics are not just sitting in a package. They are part of the server path.

The project also includes an OpenTelemetry teaching boundary. The tracer is wired into application startup, but the OTLP export remains intentionally documented as a teaching stub. That is an important distinction. I want the repository to be honest about what is production-shaped and what is intentionally simplified. A learning project should not pretend a stub is a complete production exporter. It should say: this is the concept, this is what it teaches, and this is where production systems go further. That honesty is part of the curriculum.

Another hardening step was rate limiting. Opslane originally had an in-memory rate limiter. That is useful for teaching the concept, but it has an obvious limitation: each process has its own counters. That is not enough for a horizontally scaled service. So Opslane now includes a PostgreSQL-backed rate limiter. The running API wires that limiter into the request path. This teaches a better backend lesson: rate limiting is not only an algorithm. It is a deployment concern. If you run multiple instances, rate limit state needs to be shared or intentionally scoped. Opslane uses PostgreSQL for this because the project already uses PostgreSQL as its system of record. That avoids adding Redis or another dependency just to teach the distributed rate-limiting concept.

The rate limiter also uses trusted-proxy-aware IP extraction. That matters because X-Forwarded-For should not be trusted blindly. A service should only honor forwarded headers when the direct peer is a trusted proxy. That is the kind of detail that turns a simple middleware into a real engineering lesson.

Graceful shutdown is one of the best teaching surfaces in backend engineering.
A beginner might think shutdown means: the process received SIGTERM. A backend engineer needs to think differently: the service must stop accepting new work, let in-flight requests finish, stop event publishing, drain background workers, cancel the application context, release resources, and exit within a bounded time. Opslane models that explicitly. The server has a shutdown coordinator that:

- listens for termination signals
- marks the app as draining
- lets /health report the drain state
- shuts down the HTTP server with a configured timeout
- closes the event bus to new publications
- cancels the root application context
- drains worker pools
- lets the main goroutine close final resources

This is not glamorous code. It is important code. Bad shutdown behavior causes dropped work, unreliable deployments, broken assumptions, and confusing production incidents. That is why I wanted graceful shutdown to be part of the flagship. A service lifecycle should be visible.

The Go Engineer uses CI as another teaching surface. The project does not only say “write tests.” It runs:

- go build ./...
- go vet ./...
- gofmt checks
- go mod tidy checks
- go test ./...
- go test -race ./...
- govulncheck ./...
- go test -coverprofile=coverage.out ./...
- go run ./scripts/validate_curriculum.go
- docker build ...

It also enforces a coverage threshold and keeps benchmarks in a separate workflow. This matters because CI teaches engineering priorities. If CI only checks that code compiles, the repository teaches that compilation is enough. I do not want that. I want the repository to teach that release-quality work needs stronger evidence: formatting, static checks, tests, race detection, vulnerability scanning, coverage, curriculum validation, Docker build validation, and benchmark visibility. CI is not just automation. CI is a statement about what the project refuses to silently break.

One of the most important parts of The Go Engineer is the curriculum validator. A repository-first curriculum has a lot of structure.
That structure can drift. Lesson paths can break. Run commands can become stale. README links can point nowhere. Curriculum metadata can disagree with folders. Module progress can lie. Migration documentation can fall behind implementation. The validator exists to catch those problems.

Recently, I extended validation deeper into Opslane itself. The validator now checks that Opslane progress surfaces stay aligned and that migration surfaces do not drift silently. That is the kind of tooling I want this project to model. A mature repository does not depend only on human memory. It encodes important expectations into checks. This is one of the biggest lessons I want learners to take away: quality should not depend only on personal discipline. Quality should be supported by the shape of the repository.

I added a KNOWN_LIMITATIONS.md document because I do not want The Go Engineer to pretend that every teaching implementation is production-complete. That would be dishonest. Some implementations are intentionally simplified to make the underlying mechanics visible. For example:

- The custom JWT-compatible token manager teaches signing, base64url encoding, and identity extraction. In production, you would usually use mature libraries or managed identity infrastructure.
- The in-memory metrics registry teaches counters, histograms, synchronization, and instrumentation mechanics. In production, you would usually use the official Prometheus client or OpenTelemetry metrics.
- The worker pools teach bounded concurrency and graceful draining. In production, durable background work often needs a queue or outbox pattern.
- The event bus teaches in-process publish/subscribe boundaries. In distributed systems, you would likely use Kafka, NATS, EventBridge, or another external event backbone.
- The OpenTelemetry exporter boundary teaches tracing concepts, but the final network dispatch is intentionally stubbed for educational clarity.

This is not a weakness.
It is part of the teaching contract. Learners should know which parts are production-shaped, which parts are simplified, and what production systems usually require next. That distinction builds judgment. And judgment is the real goal.

Another important part of the project is licensing. The Go Engineer is source-available for personal, educational, and non-commercial use. Commercial use requires permission. That means I should be careful with language. I do not describe the project as traditional open source in the strict sense. I describe it as a source-available educational curriculum. That framing is more accurate and more honest. Good engineering communication includes accurate project positioning, not only accurate code.

I do not like calling software “perfect.” Perfect is not how serious engineering works. A better standard is release-quality. For The Go Engineer, release-quality means:

- The curriculum architecture is locked.
- The machine-readable registry is validated.
- Lessons have runnable surfaces.
- Documentation is intentionally structured.
- Opslane modules have clear progress surfaces.
- The flagship backend integrates real backend concerns.
- Known limitations are documented.
- CI checks build, tests, race conditions, vulnerabilities, coverage, Docker builds, and curriculum consistency.
- The repository can prove more of its own claims.

That is the bar I care about. Not perfection. Proof.

Building The Go Engineer taught me something important: a curriculum is not only the content. A curriculum is also the path, the structure, the validation, the examples, the mistakes it prevents, the questions it forces, the boundaries it makes visible, and the proof it requires before moving forward. That is why I keep coming back to the same idea: the repository is the product. A repository teaches through everything it exposes. The README teaches. The folder structure teaches. The CI workflow teaches. The tests teach. The comments teach. The validator teaches.
The known limitations teach. The flagship project teaches. Even the constraints teach. If the repository is going to teach, it should teach intentionally. That is what I am trying to build with The Go Engineer.

The Go Engineer is for people who want to move beyond syntax. It is for learners who do not only want to know how to write Go code, but how to reason about Go systems. It is for engineers who want to understand: package boundaries, backend architecture, HTTP APIs, PostgreSQL persistence, migrations, tenant isolation, authentication, service workflows, payment reliability, background workers, caching, metrics, tracing, rate limiting, graceful shutdown, CI validation, and release discipline.

It is also for me. Because building this project forces me to be more disciplined. If I claim the repository is the curriculum, then every inconsistency matters. That pressure is useful. It makes the project better.

I built The Go Engineer because I wanted to teach Go beyond syntax. I wanted a curriculum that moves from language fundamentals into the decisions engineers actually make when building backend systems: boundaries, lifecycle, persistence, concurrency, observability, validation, security, deployment, and maintenance. Opslane exists because isolated examples are not enough. At some point, learners need to see how decisions interact inside one integrated backend. The validator exists because documentation, metadata, examples, and implementation drift unless the repository actively checks them. The CI pipeline exists because release quality needs evidence. The known limitations exist because honest teaching should explain where simplified implementations stop and production systems begin.

The Go Engineer is not just a Go tutorial. It is a repository-first Go engineering curriculum. And the biggest lesson I learned while building it is this: a repository is not just where the curriculum lives. The repository is the curriculum.
If you want to follow the project, explore the repository here: swe-labs/the-go-engineer on GitHub.

From the repository README: The Go Engineer is a repository-first Go software engineering curriculum. It teaches Go by combining runnable lessons, production-shaped examples, tests, validation, and a final integrated backend project. The stable v2.1 line is organized as a 5-phase, 12-section learning system with 215 registered curriculum items. The public structure is locked in ARCHITECTURE.md, and the machine-readable registry is curriculum.v2.json.

Status: current stable line v2.1.x; current stable release v2.1.1. Supported branches:

| Branch | Purpose |
| --- | --- |
| main | active post-v2.1 implementation and integration line |
| release/v2 | stable v2.1.x maintenance line |
| release/v1 | stable v1 maintenance line |

Architecture v2.1 is locked. Normal work may improve lessons, tests, documentation, validators, or the Opslane flagship implementation, but must not add, remove, rename, or reorder public root sections without explicit maintainer approval.

Quick Start requirements: the Go version declared in go.mod, plus a CGO-capable C compiler for go test -race ./... and SQLite-backed paths.

```
git clone https://github.com/swe-labs/the-go-engineer.git
cd the-go-engineer
go mod …
```

You can also read the original article on Hashnode: The Go Engineer Blog

Dev.to (Rust/Go)
~3 min read · May 6, 2026

🚀 Stop Chasing Small Docker Images: What Actually Matters for Go in Production

A practical guide to reproducible builds, faster CI pipelines, and debuggable containers for Go engineers.

Most Docker + Go tutorials end the same way: “Use multi-stage builds, switch to Alpine, done.” That advice works until it doesn’t. At scale, different problems show up:

- CI pipelines slow down unpredictably
- Builds stop being reproducible
- Debugging minimal containers becomes painful
- Monorepos destroy Docker cache efficiency

This article focuses on what actually matters in production: reproducibility, caching, and operability. Before optimizing, it helps to visualize what’s happening:

```
Source Code
     ↓
go.mod / go.sum
     ↓
go mod download   (dependency layer)
     ↓
go build          (compile layer)
     ↓
Final Image       (distroless/scratch)
```

If go.mod changes, everything below it rebuilds.

Same code, different builds. Sources of non-reproducibility include:

- Different architectures (amd64 vs arm64)
- Embedded file paths
- Environment-dependent outputs

Stripping paths and pinning dependencies helps:

```
go build -trimpath -o app
go mod download
```

Optional stricter control:

```
GOPROXY=https://proxy.golang.org GONOSUMDB=*
```

A baseline multi-stage builder:

```dockerfile
FROM golang:1.26 AS builder
ENV CGO_ENABLED=0
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
```
```dockerfile
RUN go build -trimpath -ldflags="-s -w" -o app
```

Cross-compilation produces different binaries per target:

```
Source
  ├── linux/amd64 → Binary A (different hash)
  └── linux/arm64 → Binary B (different hash)
```

👉 Focus on behavior consistency, not identical binaries.

The layer model for a typical Go build:

1. OS base image
2. go.mod / go.sum
3. Dependencies (go mod download)
4. Source code
5. Build output

A change in go.mod invalidates layer 2, layer 3 re-runs (slow), and everything below it rebuilds. With BuildKit cache mounts, the module and build caches survive across builds, so dependencies are reused ✅ and the build runs faster ⚡:

```dockerfile
RUN --mount=type=cache,target=/go/pkg/mod \
    go mod download
RUN --mount=type=cache,target=/root/.cache/go-build \
    go build -trimpath -ldflags="-s -w" -o app
```

In a monorepo, copy only the module files for the service you are building:

```dockerfile
COPY services/service-a/go.mod services/service-a/go.sum ./
RUN go mod download
```

Base image tradeoffs:

- scratch → smallest → hardest to debug ❌ (no TLS certs, so HTTPS fails; no shell, so you cannot debug; no timezone/DNS tools)
- distroless → balanced → production-ready ✅ (secure and minimal, but still no shell)
- alpine → larger → easiest debugging 🔧 (debuggable, but uses musl libc, which can cause subtle issues)

Rule of thumb: production → distroless; debug build → alpine; special case → scratch.

A complete builder stage:

```dockerfile
FROM golang:1.26 AS builder
WORKDIR /app
ENV CGO_ENABLED=0
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
    go mod download
COPY . .
```
```dockerfile
RUN --mount=type=cache,target=/root/.cache/go-build \
    go build \
    -trimpath \
    -ldflags="-s -w" \
    -o app

FROM gcr.io/distroless/base-debian12
WORKDIR /
COPY --from=builder /app/app /app
USER nonroot:nonroot
ENTRYPOINT ["/app"]
```

The debug variant on Alpine:

```dockerfile
FROM alpine:3.19
RUN apk add --no-cache ca-certificates
COPY --from=builder /app/app /app
ENTRYPOINT ["/app"]
```

Most engineers optimize for image size and build completion. But production systems care about:

- Reproducibility → Can I trust this build?
- Debuggability → Can I fix issues fast?
- Performance → Can CI scale?

If your Docker setup feels slow, fragile, or hard to debug, it’s not Docker. It’s how caching, dependencies, and runtime assumptions interact.

Dev.to (Rust/Go)
~7 min read · May 6, 2026

Python vs Go vs Rust for AI Agents in 2026: A Pragmatic Field Guide

The AI agent ecosystem has a language problem that nobody talks about directly: the tutorials and frameworks are all Python, but production agent systems increasingly lean on Go and Rust for the infrastructure layer. A GDE just published "Stop Using Python for Your Gen AI Apps, Use Go" using Google's Genkit. Meanwhile, Rust frameworks like echo-agent, rustic_ai, and Aura ship with features that LangChain users would recognize instantly. And Python's LangGraph and CrewAI still dominate the orchestration space.

The truth is more nuanced than any single-language take. Each language has a distinct role, and the best production systems use at least two of them together. This guide helps you decide where each one fits, with real project examples and code snippets so you can evaluate the tradeoffs yourself.

| Aspect | Python | Go | Rust |
| --- | --- | --- | --- |
| Ecosystem maturity | LangChain, CrewAI, AutoGen, LlamaIndex - 4+ years of agent frameworks | Genkit, Eino, Phero - emerging (2025-2026) | echo-agent, rustic_ai, Aura, cinch-rs - very early but feature-rich |
| Binary size | 80-150 MB (with runtime) | ~18 MB (static binary) | ~5 MB (static binary) |
| Memory idle | 80-150 MB (FastAPI) | 10-20 MB | 5-10 MB |
| Cold start | 200-500ms (import time) | <10ms | <5ms |
| Concurrency | asyncio (cooperative, single-threaded default) | Goroutines (2KB stacks, N:M scheduling) | async tasks (zero-cost, no runtime overhead) |
| Type safety | Optional (gradual with mypy/pydantic) | Structural (compile-time) | Nominal (compile-time, zero-cost abstractions) |
| Tool calling | Decorators + pydantic | Reflection + struct tags | Proc macros + derive macros |
| Dependency count | 20-50 indirect deps | 0-5 direct deps | 80-200+ crate deps |
| Prototyping speed | Fastest | Medium | Slowest |
| Production reliability | Medium (crash at runtime) | High (no runtime surprises) | Highest (no undefined behavior) |
| Best for | ML pipelines, RAG, fast prototyping | API serving, proxies, governance | MCP servers, sandboxed execution, high-throughput agents |

Python's dominance in AI isn't accidental.
The model training, fine-tuning, and data science ecosystem is irreplaceable.

RAG pipelines. If you're building a retrieval-augmented generation system with embeddings, chunking strategies, and reranking, Python has every library you need: sentence-transformers, chromadb, llama-index, unstructured. None of the Go or Rust equivalents come close.

Prototyping. Python lets you sketch an agent idea in 20 lines and iterate. The REPL-driven workflow is unmatched for exploring prompt strategies and tool call patterns.

```python
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"Sunny, 72F in {city}"

agent = create_react_agent(llm, [get_weather], prompt)
executor = AgentExecutor(agent=agent, tools=[get_weather])
result = executor.invoke({"input": "What's the weather in Tokyo?"})
print(result["output"])
```

Frameworks that exist. LangGraph's state machine approach, CrewAI's role-based agents, AutoGen's multi-agent conversations -- these are proven patterns with thousands of production deployments. The Go/Rust equivalents are 1-2 years behind in maturity.

But here's the catch: Python's production footprint is expensive. A simple FastAPI agent server idles at 80-150 MB of RAM. The cold start on container orchestration is 200-500ms before a single line of business logic runs. For a prototype, none of this matters. For a production system serving thousands of agent sessions, it adds up to real infrastructure cost.

The case for Go in agent infrastructure isn't "Go is better than Python." It's that agents are not monoliths. They have layers: the reasoning layer (LLM), the orchestration layer (framework), and the infrastructure layer (transport, policy, memory, tracing). Python dominates the first two. The third layer is systems programming.

API serving. Go can handle hundreds of concurrent agent sessions with streaming responses while using 30-60 MB of RAM.
An 18 MB Docker image deploys in under a second.

Governance proxies. When every tool call from an agent needs to pass through rate limiting, approval workflows, and audit logging, Go's goroutine-per-request model makes this trivial.

```go
type AgentProxy struct {
    policyEngine  *PolicyEngine
    traceExporter *otlp.Exporter
    rateLimiter   *RateLimiter
    mcpClients    map[string]*mcp.Client
}

func (p *AgentProxy) HandleToolCall(ctx context.Context, req *ToolCall) error {
    if err := p.rateLimiter.Check(ctx, req.UserID); err != nil {
        return err
    }
    decision, err := p.policyEngine.Evaluate(ctx, req)
    if err != nil || !decision.Allowed {
        return err
    }
    return p.mcpClients[req.Server].Call(ctx, req)
}
```

MCP servers. The Model Context Protocol is fundamentally a concurrency problem: managing multiple stdio subprocesses, each with its own stdin/stdout pair, plus incoming requests from multiple agents. Go channels and goroutines handle this pattern naturally.

Google Genkit Go 1.0. Google just shipped Genkit Go as a production-ready framework. It gives Go developers a structured way to build Gen AI apps with streaming, evaluation, and tracing built in. This is the biggest single boost to the Go AI ecosystem in 2026.

Rust agent frameworks are younger but ambitious. Projects like echo-agent, rustic_ai, and Aura from Mezmo ship production-grade features that Go and Python ecosystems are still building toward.

Sandboxed execution. Rust's WASM support means you can run untrusted agent skills in a sandbox with memory limits, execution timeouts, and no filesystem access. CrossKlaw does exactly this.

A2A protocol. echo-agent ships a full Agent-to-Agent protocol implementation, letting agents discover each other, hand off tasks, and collaborate across frameworks. This is the same pattern Google proposed with A2A, but native in Rust.
```rust
use echo_agent::prelude::*;

#[tool(name = "search", description = "Search the web")]
async fn search(query: String) -> Result<ToolResult> {
    Ok(ToolResult::success(format!("Results for: {query}")))
}

#[tokio::main]
async fn main() -> Result<()> {
    let mut agent = agent! {
        model: "qwen3-max",
        system_prompt: "You are a research assistant",
        tools: [SearchTool],
    }?;
    let answer = agent.execute("What's new in AI this week?").await?;
    println!("{answer}");
    Ok(())
}
```

Where Rust hurts. The ecosystem is fragmented across competing frameworks. Dependency graphs balloon to 150+ crates. Prototyping is slow -- you pay the type system tax upfront. And the pool of developers who know both Rust and AI tooling is tiny.

All three languages share a common gap: once you're running agents in production, you need a layer that handles scheduling, execution environments, monitoring, and multi-agent coordination -- without writing it yourself. This is where platforms like Nebula come in. Nebula gives you the orchestration runtime so your agents can be written in whatever language makes sense for their job -- Python for RAG, Go for the API proxy, Rust for the sandboxed executor -- while the platform handles deployment, secrets, triggers, and cross-agent communication. You don't have to choose one language. You choose the right language for each component, and the orchestration layer ties them together.
Use Python when:

- You're prototyping or iterating fast
- The task centers on ML inference, embedding, or RAG
- You need the largest possible community and ecosystem
- You're fine with 80-150 MB per service instance

Use Go when:

- You're building the serving/infrastructure layer
- Cold start time and memory budget matter (containers, serverless)
- You need a governance proxy, policy engine, or MCP bridge
- You want a single-binary deploy with zero runtime dependencies

Use Rust when:

- You need WASM sandboxing for untrusted code
- Memory safety is a hard requirement (security-critical agent paths)
- You want compile-time guarantees on tool input/output schemas
- You're willing to accept slower iteration for maximum production reliability

The "Python vs Go" framing is a false choice. Production AI agent systems in 2026 look like this: a Python RAG pipeline feeds context into a Go API server that enforces governance and routes tool calls, and a Rust sandbox runs untrusted code in WASM. Each component uses the language best suited to its job.

The frameworks are catching up faster than most people realize. Google Genkit Go 1.0, echo-agent's feature parity with LangGraph, and Aura's production-ready MCP runtime all landed in the last six months. Choose your stack by the layer, not by the hype.

This article is part of the "Developer Tool Showdowns" series -- practical comparisons to help you make informed engineering decisions.

Dev.to (Rust/Go)
~12 min readMay 6, 2026

Stop Using Python for Your Gen AI Apps, Use Go and Genkit Instead

Introduction

For the last few years, every Gen AI tutorial, framework, and "hello world" has assumed one thing: that you are writing Python. It made sense at the start. The research community lives in Python, the model providers ship Python SDKs first, and the notebook culture is hard to beat for prototyping.

But there is a quiet, important shift happening in 2026: the teams actually shipping AI features at scale are increasingly moving their production Gen AI workloads off Python, and onto languages built for services. Go is at the center of that shift. And Genkit Go, the Go flavor of Google's open-source Gen AI framework, is the cleanest path I have seen to build production-ready AI services in Go: typed flows, structured output, built-in HTTP serving, observability, and a Developer UI, all from a single binary.

This article is two things at once. First, an honest argument about why Python is a poor fit for production Gen AI services. Second, a hands-on getting-started with Genkit Go so you can replace that Python microservice this week.

Python is great for research and prototyping. But Gen AI applications are not really "AI code"; they are mostly I/O-heavy network services that happen to call a model. And that is exactly where Python struggles.

Gen AI workloads are dominated by long, concurrent network calls: streaming completions, tool calls, embedding requests, vector DB lookups, MCP servers. Go's goroutines and channels were literally designed for this. In Python you have a choice between three uncomfortable options: threads (limited by the GIL), asyncio (which infects your entire codebase and breaks the moment one library is sync), or multiprocessing (heavy, awkward, and unfriendly to shared state). None of them feel native. All of them leak through your abstractions.

A Python AI service typically pulls in pydantic, httpx, an SDK or two, and a tokenizer.
You are easily looking at 200–400 MB of resident memory and several seconds of cold start before you serve a single request. A Go binary doing the same job is one statically linked file, tens of MB of RAM, and starts in milliseconds. On Cloud Run, Lambda, Azure Functions, or any autoscaling platform, this difference is not a micro-optimization; it is the difference between a service that scales to zero gracefully and one that does not.

pip, poetry, uv, conda, venv, requirements.txt, pyproject.toml. Pin a Torch version, break a transitive dep. Upgrade an SDK, break Pydantic v1 vs v2. Every Python AI repo I have inherited has cost at least a day of fixing the environment before running a single prompt. Go's module system, with a single go.mod and go.sum, is boring, reproducible, and just works.

Structured output, tool calling, and MCP all rely on schemas. In Python, the schema lives in Pydantic models, in docstrings, in comments, and sometimes in your head. In Go, the schema is the struct. The compiler enforces it. Genkit picks it up automatically via JSON schema tags. You cannot ship a flow whose declared types do not match the code that handles them, because it will not compile.

Python deployments are Dockerfiles full of system packages, base images that drift, and "works on my machine" surprises. Go deploys as a single static binary: FROM scratch, copy the binary, done. For AI services that need to run on Cloud Run, on Kubernetes, on the edge, or as a sidecar, that is a massive operational win.

Yes, the heavy lifting happens on the model provider's GPUs. But your service still has to parse tokens off a streaming response, fan out tool calls, merge results, enforce timeouts, and push telemetry, per request, at concurrency. Go does that work an order of magnitude more efficiently than CPython, and without you having to think about it.

None of this means Python is wrong for research. It means Python is the wrong default for the service that exposes that research to your users.
There is one more reason to pick Go in 2026 that did not really exist two years ago: agentic coders. Tools like Claude Code, Cursor's agent mode, GitHub Copilot's agent, Gemini Code Assist, Codex, Aider, and the growing ecosystem of autonomous coding agents are now a real part of how software gets written. And it turns out that Go is the language they thrive in.

Why? It comes down to three properties of the language that align almost perfectly with how an LLM-based agent reasons about code.

Agentic coders work in a tight loop: write code, compile, read the error, fix, repeat. Go's compiler is fast, strict, and brutally honest. When an agent generates a wrong call, the compiler tells it exactly what is wrong and where, in seconds. In Python, the same mistake might only surface at runtime, three layers deep, with a stack trace that requires the agent to spend tokens reasoning about dynamic behavior. Strong typing turns "guess and pray" into "verify and continue".

Python has at least four HTTP clients, three async paradigms, two type systems, and an opinion war about every major design decision. An agent has to choose, and choices cost tokens and increase the chance of going off the rails. Go is famously opinionated: one formatter (gofmt), one module system, one idiomatic way to handle errors, one standard layout. Less surface area means less ambiguity, which means less token consumption and more correct code per iteration.

go build, go test, go vet, gopls, and staticcheck produce structured, parseable output. Agents can read it directly without heuristics. Combine that with go doc and the standard library being uniformly documented, and you give an agent a self-describing environment it can navigate without hallucinating.

Genkit Go leans into the same properties: flow inputs and outputs are Go structs, so the schema is the type. An agent generating a new flow knows exactly what shape the data has, because the compiler will reject anything else.
The API surface is small and consistent: genkit.Init, genkit.DefineFlow, genkit.DefineTool, genkit.GenerateData, genkit.Handler. There is one obvious way to define a flow, one obvious way to expose it, one obvious way to call a model. Tool definitions are typed end-to-end, so an agent writing a new tool gets compile-time guarantees that its signature matches what the runtime expects.

The net effect is that an agentic coder pointed at a Genkit Go codebase will produce more correct code, in fewer iterations, with fewer tokens than the same agent pointed at an equivalent Python codebase. In a world where you are increasingly going to be the reviewer of agent-generated code rather than the author, that compounds fast.

If you accept the premise that Go is the better runtime for Gen AI services, the next question is: which framework? You can absolutely call the Gemini, OpenAI, or Anthropic SDKs directly from Go. But you will quickly end up rebuilding the same primitives every Genkit user already has for free. Here is what Genkit Go gives you out of the box, and what you would otherwise have to write yourself:

| Feature | Without Genkit | With Genkit Go |
| --- | --- | --- |
| Model calls | | genkit.Generate(...): one call, multi-provider |
| Structured output | | genkit.GenerateData[MyStruct]: typed Go struct returned |
| HTTP serving | net/http boilerplate per endpoint, request/response wiring | genkit.Handler(flow): auto HTTP endpoint |
| Tool calling | | genkit.DefineTool(...): automatic execution loop |
| Testing & debugging | curl, Postman, manual harnesses | Genkit Developer UI: visual flow runner, traces, prompt playground |

It is the same philosophy as Genkit Java and the JavaScript flavor I covered in my 2026 JS/TS Gen AI frameworks comparison: a thin, opinionated, cloud-agnostic layer that turns "AI logic" into a typed function you can call, test, deploy, and observe.

What we will build: a Go service exposing a single AI flow that generates a structured recipe from a main ingredient and optional dietary restrictions. It will:

- Accept a typed RecipeInput as input.
- Call Gemini 3 Pro via the Google AI plugin.
- Return a strongly-typed Recipe struct (no manual JSON parsing).
- Be served as an HTTP endpoint on :3400.
- Be testable visually in the Genkit Developer UI.

All in a single main.go file. No web framework. No code generation. Just Go.

You will need:

- Go 1.24+ (install)
- Node.js 18+ (only required for the Genkit CLI / Developer UI)
- A Google GenAI API key (free, no credit card, from Google AI Studio)

The Genkit CLI is your local companion for running and inspecting flows in the Developer UI:

```shell
curl -sL cli.genkit.dev | bash
```

Verify it:

```shell
genkit --version
```

Create a fresh module:

```shell
mkdir genkit-go-recipes && cd genkit-go-recipes
go mod init example/genkit-go-recipes
```

Install the Genkit Go package:

```shell
go get github.com/firebase/genkit/go
```

Set your API key:

```shell
export GEMINI_API_KEY=<your API key>
```

Create main.go with the following content. This is the entire service.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"net/http"

	"github.com/firebase/genkit/go/ai"
	"github.com/firebase/genkit/go/genkit"
	"github.com/firebase/genkit/go/plugins/googlegenai"
	"github.com/firebase/genkit/go/plugins/server"
)

// Input schema, picked up automatically by Genkit and the Dev UI.
type RecipeInput struct {
	Ingredient          string `json:"ingredient" jsonschema:"description=Main ingredient or cuisine type"`
	DietaryRestrictions string `json:"dietaryRestrictions,omitempty" jsonschema:"description=Any dietary restrictions"`
}

// Output schema, returned directly by the model as a typed Go struct.
type Recipe struct {
	Title        string   `json:"title"`
	Description  string   `json:"description"`
	PrepTime     string   `json:"prepTime"`
	CookTime     string   `json:"cookTime"`
	Servings     int      `json:"servings"`
	Ingredients  []string `json:"ingredients"`
	Instructions []string `json:"instructions"`
	Tips         []string `json:"tips,omitempty"`
}

func main() {
	ctx := context.Background()

	// Initialize Genkit with the Google AI plugin and a default model.
	g := genkit.Init(ctx,
		genkit.WithPlugins(&googlegenai.GoogleAI{}),
		genkit.WithDefaultModel("googleai/gemini-3-pro"),
	)

	// Define a typed flow: (RecipeInput) -> (Recipe, error)
	recipeGeneratorFlow := genkit.DefineFlow(g, "recipeGeneratorFlow",
		func(ctx context.Context, input *RecipeInput) (*Recipe, error) {
			dietary := input.DietaryRestrictions
			if dietary == "" {
				dietary = "none"
			}
			prompt := fmt.Sprintf(`Create a recipe with the following requirements:
Main ingredient: %s
Dietary restrictions: %s`, input.Ingredient, dietary)

			// Structured generation: Gemini returns a Recipe directly.
			recipe, _, err := genkit.GenerateData[Recipe](ctx, g,
				ai.WithPrompt(prompt),
			)
			if err != nil {
				return nil, fmt.Errorf("failed to generate recipe: %w", err)
			}
			return recipe, nil
		},
	)

	// Smoke-test the flow once at boot.
	recipe, err := recipeGeneratorFlow.Run(ctx, &RecipeInput{
		Ingredient:          "avocado",
		DietaryRestrictions: "vegetarian",
	})
	if err != nil {
		log.Fatalf("could not generate recipe: %v", err)
	}
	out, _ := json.MarshalIndent(recipe, "", "  ")
	fmt.Println("Sample recipe generated:")
	fmt.Println(string(out))

	// Expose the flow as an HTTP endpoint.
	mux := http.NewServeMux()
	mux.HandleFunc("POST /recipeGeneratorFlow", genkit.Handler(recipeGeneratorFlow))
	log.Println("Starting server on http://localhost:3400")
	log.Println("Flow available at: POST http://localhost:3400/recipeGeneratorFlow")
	log.Fatal(server.Start(ctx, "127.0.0.1:3400", mux))
}
```

Take a moment to count what is not in this file: No web framework. No JSON parsing of the model output. No manual OpenTelemetry setup. No request/response DTO duplication. No Dockerfile yet (we will not need much). The struct is the contract. The flow is the endpoint. The compiler enforces both.

```shell
go run .
```

You should see a structured recipe printed as JSON, then the server logging that it is listening on :3400.
In another terminal, hit it with curl:

```shell
curl -X POST "http://localhost:3400/recipeGeneratorFlow" \
  -H "Content-Type: application/json" \
  -d '{"data": {"ingredient": "tomato", "dietaryRestrictions": "vegan"}}'
```

You will get back a fully structured JSON recipe. That is it: you have a production-shaped Gen AI microservice in one file.

The Genkit Developer UI is one of the strongest reasons to adopt Genkit, regardless of language. It gives you a local web app to run flows, inspect traces, tweak prompts, and debug tool calls. From the project root:

```shell
genkit start -- go run .
```

Open http://localhost:4000, pick recipeGeneratorFlow, paste:

```json
{
  "ingredient": "avocado",
  "dietaryRestrictions": "vegetarian"
}
```

Click Run. You will see the typed output and a full trace of the model call: tokens, latency, prompt, response. This is the kind of inner loop Python frameworks are still catching up on.

Because it is Go, deployment is almost anticlimactic. A minimal Dockerfile:

```dockerfile
FROM golang:1.24 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server .

FROM gcr.io/distroless/static
COPY --from=build /out/server /server
ENV PORT=3400
EXPOSE 3400
ENTRYPOINT ["/server"]
```

That is your entire production image. Deploy it to Cloud Run, Cloud Run Jobs, Kubernetes, AWS Lambda (via container image), Azure Container Apps, or any platform that runs containers. No Python runtime to vendor. No pip install at build time. No virtual environment. Just a binary.

If you want to see the same pattern applied to other clouds and languages, I have already covered:

- Genkit + AWS Lambda + Bedrock
- Genkit + Azure Functions + AI Foundry
- Genkit Java 101

Genkit Go fits the same mold, with the smallest runtime footprint of all of them.

A few honest objections worth addressing.

"All the cool research libraries are in Python." True. Keep them in Python, behind a small Python service that does only the research-y bit. Put your product surface (the part your users actually call) in Go. That separation is healthy.
"My team only knows Python." Go is famously the easiest "real" backend language to learn. A Python developer can be productive in Go in days, and Genkit's API surface is small enough that the learning curve is mostly Go itself, not the framework.

"What about LangChain / LlamaIndex features?" Most of what those frameworks give you (flows, tools, RAG, prompts, evaluation, observability) Genkit Go gives you too, with a fraction of the surface area and without the abstraction tax. See my 2026 frameworks comparison for the long version.

"Is Genkit Go production-ready?" It powers Gen AI features at Google and a growing list of companies. The Go SDK shares the same core philosophy and plugin model as the JS and Java SDKs. It is stable enough to bet on, and the iteration speed is high.

Python earned its place as the language of AI research. It did not earn its place as the language of AI services. Those are different problems with different constraints, and the constraints of production services (concurrency, footprint, deployment, types, observability) all favor Go.

Genkit Go is the framework that finally makes that switch painless. You get a typed, observable, multi-provider Gen AI service in one file, one binary, and one deploy. If you are still maintaining a Python microservice whose only job is to call an LLM and return structured JSON, you are paying a tax you do not need to pay.

Try it on your next flow. Replace one Python service. See how much smaller the resulting system is, in code, in memory, and in operational surface area.

Further reading:

- Genkit Go: Get Started
- Genkit Go: Flows
- Genkit Go: Tool Calling
- Genkit Go: Deployment on Cloud Run
- Genkit GitHub

Golang Weekly
~4 min readApr 24, 2026

TinyGo can now compile the TypeScript compiler

#598 — April 24, 2026 | Read the Web Version | Go Weekly

TinyGo 0.41: Go 1.26 Support, ESP32 Wireless, and More — A huge release for the "Go compiler for small places"! Go 1.26 support arrives, along with wireless support for ESP32 devices, so you can create and run networked services with Go on these tiny devices. There's also Arduino UNO Q support, and TinyGo can now even compile the TypeScript 7 compiler. The TinyGo Team

Write Better Prompts — Join GitHub's Sabrina Goldfarb for this detailed video course on generating higher-quality code with AI. Learn practical prompting techniques that work consistently across tools and transform your project ideas into reality. Frontend Masters sponsor

The Standard uuid Package Proposal Has Been Accepted; Possibly Coming in Go 1.27 — The proposal for a native uuid package has been accepted and the first commit is already in. UUIDs v4 and v7 are supported. Damien Neil's explainer provides a good read on the rationale and design, or you might prefer Redowan Delowar's higher-level look. Damien Neil / Go Proposal Review

IN BRIEF:

- A proposal for a new goroutine leak detector profile has been accepted.
- Discussion about supporting dependency cooldowns in Go is ongoing.
- Go 1.27 will drop support for macOS 12 (Monterey).

Building a Container from Scratch in Go — A developer wanted to understand how Docker containers work under the hood and set out to build a minimal one in Go from scratch, starting with Linux namespaces. Vedant Gandhi

Understanding the Go Runtime: The Network Poller — One of Jesús's typical deep dives, this time on how Go makes blocking network code not actually block a thread. Covers the parking protocol, epoll/kqueue/IOCP, and the observation that "waiting for goroutines and waiting for I/O are the same waiting." Jesús Espino

Your Agent Hit the 2-Project Limit by Lunch — ghost gives your agent unlimited free Postgres. No 2-project cap, no credit card, one CLI. 1 TB storage. Try for free.
ghost sponsor

📄 Go and Rust Programs Appear to Start Equally Fast (on Some Machines) – The startup difference is on the order of sub-milliseconds. Chris Siebenmann

📄 Raftly: Building a Production-Grade Raft Implementation from Scratch – With the curious goal of being designed to fail. Anirudh Sharma

📄 Tracing Goroutines in Realtime with eBPF – A beautifully presented article. Ozan Sazak

🛠 Code & Tools

goshs 2.0: For When python3 -m http.server Doesn't Cut It — A Go-powered, single-binary file server you can rapidly deploy not only to get a quick HTTP/S server running, but WebDAV, SFTP, SMB, DNS, and other protocols too. It can also send notifications via webhooks. (GitHub repo.) Patrick Hener

TamaGo: Where the Go Runtime Is the Kernel — A framework for compiling and executing Go apps on bare-metal processors (AMD64, ARM, ARM64, and RISCV64). Former Go core team member Brad Fitzpatrick has just used this to get Tailscale running on UEFI. The TamaGo Authors

TypeScript 7.0 Beta: A 10x Faster Compiler, Thanks to Go — TypeScript 7.0 is a Go-powered native port of TypeScript's compiler boasting "about 10 times faster" performance. Curiously, Microsoft collaborated with the TinyGo team so it can also be compiled with TinyGo 0.41 (featured above). Microsoft

🤖 Kronk: Hardware-Accelerated Local LLM Inference for Go — A local-inference runtime for Go apps, wrapping llama.cpp through yzma bindings and exposing an OpenAI-compatible API. Check out the code for wiring up a simple chat mechanism with it. Bill Kennedy (Ardan Labs)

- RabbitMQ Stream Go Client 1.8 – Official Go client library for RabbitMQ's stream queues.
- go-github 85.0 – Client library for the GitHub API v3.
- 📄 pdfcpu 0.12 – Go-based PDF processing library.
- linodego 1.68.0 – Go client for Linode's REST API.
- 💬 slack-go 0.23 – Official Slack API library.

📰 Classifieds

⚙️ Go finally has an AI agent framework that isn't a Python port.
Agents as http.Handlers, orchestrate LLMs & Claude Code. Open source. agentfield.ai.

📢 Elsewhere in the ecosystem

- Git 2.54 has been released with two headline features: git history offers a new, easier way to edit commit messages or interactively split a commit, and you can now define hooks in config files (at repo, user, or system level) rather than only in .git/hooks. You can also run multiple hooks for the same event.
- Ben Hoyt (creator of GoAWK) is having fun with an indecisive AI coding agent. Ben gives us a real-world example of taking back the reins.
- Sanghee Son's friend unplugged his Raspberry Pi, so he built a homelab manager in Go called homebutler which provides a CLI and MCP server to monitor and control his homelab's servers and network.
- Cloudflare has released a preview of its new cf CLI tool for working with its various services.

Golang Weekly
~4 min readApr 17, 2026

What it takes to add new syntax to Go

#597 — April 17, 2026 | Read the Web Version | Go Weekly

Let's Add a Conditional Expression to Go — Not a proposal for a real Go feature, but an epic tour through the Go compiler, including the parser, type checker, IR, and the walk desugaring stage, showing what it takes to implement a new syntax feature. Few of us dig this deep, so it's neat to see it come together. Matvey Korinenko

44 Postgres Talks To Choose From in One Virtual Event — POSETTE: An Event for Postgres 2026 is a free & virtual developer event on 16-18 Jun. All 44 talks stream live & will be available later. Join live to take part in discussions with speakers & attendees. Check out the schedule and mark your calendar. Microsoft | AMD sponsor

How GitHub Uses eBPF from Go to Improve Deployment Safety — A nice example of Go being used to build kernel-level tooling. Here, they used ebpf-go to create a circular dependency detection system. Gripper and Levenstein (GitHub)

watgo: A WebAssembly Toolkit for Go — A zero-dependency, pure Go toolkit for parsing WAT, validating it, and creating WASM binaries (and decoding back, too). It comes as a CLI tool and Go library. A must-see for anyone working with WASM in Go. GitHub repo. Eli Bendersky

IN BRIEF:

- The TinyGo team says its next release, due next Tuesday, is a big one, with Go 1.26 support plus full Arduino UNO Q support.
- The /r/golang subreddit does a weekly thread focusing on 'small projects' – Go-based projects people want to share that don't necessarily meet the usual quality bar for the sub.
- 🎤 The Cup o' Go podcast interviewed Creed Haymond of Epic Games (Fortnite!) about Go's role in game infrastructure and how his team is migrating from Spring (Java) to Go.
- Sky is an Elm-inspired functional language that compiles to Go.

Error Translation in Go Services — In layered services, storage errors like sql.ErrNoRows can easily leak into HTTP or gRPC handlers, coupling transport to storage.
It's better to define domain sentinels and translate twice: storage to domain in the repository, domain to wire format in the handler. Redowan Delowar

📄 Structuring a Go Service with the Repository Pattern – A worked example of the repository pattern and domain-first project layout. Paweł Grzybek

📄 Building Gemma 4 Local-Powered LLM Apps with Go and Yzma – Vladimir Vivien

📄 Parsing 11 Languages in Pure Go Without CGO – Gagan Deep Singh

🛠 Code & Tools

Garble: A Toolchain to Obfuscate Go Builds — Obfuscation doesn't guarantee security, but if you want your binaries to have "as little information about the original source code as possible," Garble does its best using these techniques. v0.16 targets Go 1.26 only. Daniel Martí

Your go.mod Is Clean. Your Infrastructure Should Be Too — TimescaleDB extends Postgres for analytics on live data. No second database, no pipeline. Try for free. Tiger Data (creators of TimescaleDB) sponsor

libopenapi: OpenAPI Parser and Validation Library — Full support for Swagger and OpenAPI 3.0, 3.1, and 3.2. Designed specifically to handle "the largest and most complex specifications you can think of." Princess Beef Heavy Industries, LLC

Hedge: Adaptive Hedged Requests for Cutting Tail Latency — An http.RoundTripper that adaptively fires backup requests when the primary exceeds a per-host p90 latency estimate, with a token-bucket budget to prevent load amplification during outages. A practical take on Google's The Tail at Scale. Prathamesh Bhope

gontainer: A Dependency Injection Container for Go — A small reflection-based DI container from NVIDIA with no dependencies or code generation. You register factory functions and let it wire up your services from their param types. NVIDIA Corporation

😬 Spank: Hit Your MacBook and It Yells Back… — A silly experiment using the accelerometer in modern Macs. Tai Groot

🔓 piv-go 2.6 – Library for managing PIV keys and X.509 certs on YubiKeys.
- go-huggingface 0.3.5 – Download files, models & tokenizers from HuggingFace.
- GitHub MCP Server 1.0 – GitHub's official MCP/API server is written in Go.
- GoMLX 0.27.3 – Full-featured, accelerated cross-platform ML framework.
- 🤖 yzma 1.12.0 – Integrate Go apps with llama.cpp for local inference.
- forbidigo v2.3.1 – Go linter for forbidding specified identifiers in code.
- go-git 5.18 – Extensible pure Go Git implementation library.

📰 Classifieds

- Skip the README archaeology. Flox delivers reproducible dev environments with no system pollution. One command, zero friction. Try it free.
- Real-time search data for backend engineers who care about reliability and scale.

👀 A Go..od Way to Read Hacker News?

Circumflex 4.0: A Terminal-Based Hacker News Client — We first linked to this Bubble Tea-based terminal client for Hacker News in 2022, but it's come a long way since. v4.0 adds a native comment section view and a built-in 'reader mode' for linked items. Ben Sadeh