Happy New Year (With Robotic Greetings) • The Applied Go Weekly Newsletter 2026-01-01

Your weekly source of Go news, tips, and projects

Happy New Year (With Robotic Greetings)
Hi,
I know, I know, this newsletter is on winter break until Jan 11th! But I thought, why not send a New Year's greeting, along with something I've been planning to compile for quite a while: a list of AI-related tooling and libraries for Go. While vibe coders are busy trying to prove that programming languages are becoming irrelevant for software development, my take is that the contrary is true: a solid, easy-to-learn, manageable, and versatile programming language is still the foundation of good code, with or without AI's help. So whenever I explore anything AI-related, I do so with my Go glasses on. Go and AI can be a great team, and this issue's Spotlight aims to prove it.
Happy New Year, and happy coding in 2026!
–Christoph
Spotlight: AI and Go
What's your next Go project—at home or at work? Will it contain AI-powered functionality? Do you plan to use AI to build the project? Or do you want to explore AI with the language you love? I'm not here to provide you with the ultimate learning path, but a list of cool Go/AI projects to rummage through isn't bad either, right?
Terminal-Based AI Coding Assistants
charmbracelet/crush
How could I not start this list with Crush, a glamorous terminal-based AI coding agent that integrates multiple LLMs (OpenAI, Anthropic, Groq, Gemini) with your codebase? It features LSP integration, session management, and extensibility via MCPs for enhanced code generation and debugging workflows directly in the terminal. Made by Charm, who also made Bubble Tea, Lipgloss, and more cool TUI packages. (I'm testing Crush extensively as time allows. I have already fixed and extended some code bases with it, and even had Crush/Claude write a complete CLI tool that only had one (obvious) bug to fix manually. Time saver? Dunno. Fun? Definitely.)
LLM Framework
langchaingo
The LangChain framework is a "Lego kit" for integrating LLMs into apps. LangChainGo is the Go implementation of LangChain that enables Go developers to build LLM-powered applications with composable components including chains, agents, tools, embeddings, and integrations with local and remote LLM providers.
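To get a feeling for the API, here is a minimal sketch of a single-prompt completion, loosely modeled on the LangChainGo README. The OpenAI backend and the prompt are my choices for illustration; the library supports many other providers, including local ones:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/openai"
)

func main() {
	ctx := context.Background()

	// Pick a provider; openai.New() reads OPENAI_API_KEY from the environment.
	llm, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}

	// The simplest building block: one prompt in, one completion out.
	completion, err := llms.GenerateFromSinglePrompt(ctx, llm,
		"Name one reason Go is a good language for AI tooling.")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(completion)
}
```

Because all providers implement the same interface, swapping the openai import for another provider package changes the backend without touching the rest of the code.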
Local LLM Runtimes and Self-Hosted AI Platforms
ollama/ollama
If you want to try an LLM on your local machine, you'd probably pick Ollama or LM Studio. Ollama is an open-source tool written in Go that runs large language models locally on consumer hardware behind a simple API, enabling local AI use without cloud dependencies. Very small models don't even require a GPU, so you could run Ollama on a small VPS.
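Since Ollama exposes a plain HTTP API, you don't even need a client library; Go's standard library suffices. A minimal sketch, assuming a local Ollama instance (the model name is a placeholder; use whatever you have pulled):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Ollama listens on localhost:11434 by default.
	reqBody, _ := json.Marshal(map[string]any{
		"model":  "llama3.2", // placeholder: any model you have pulled locally
		"prompt": "Explain in one sentence why Go suits AI tooling.",
		"stream": false, // one complete JSON response instead of a token stream
	})

	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(reqBody))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The non-streaming response carries the full answer in "response".
	var result struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Response)
}
```

If you prefer typed calls over raw JSON, Ollama also ships an official Go client package, github.com/ollama/ollama/api.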
mudler/LocalAI
Similar to Ollama, LocalAI is a Go-based, open-source, drop-in replacement for the OpenAI API that runs LLMs, image generation, audio, and agents locally on consumer hardware, even without GPUs. Unlike Ollama, LocalAI is part of an integrated suite of tools including LocalAGI and LocalRecall (see below).
Local Memory and Vector Databases
philippgille/chromem-go
LLMs are forgetful and context windows have limitations. Retrieval-Augmented Generation (RAG) uses vector databases to store information in an AI-consumable format to overcome forgetfulness. Typically, those databases are built as standalone servers. chromem-go is an embeddable vector database for Go with zero third-party dependencies that Go developers can integrate directly into their applications to add RAG and embeddings-based features with in-memory storage and optional persistence, eliminating the need for separate database servers. Simplify your operational architecture!
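Here is a condensed sketch along the lines of the chromem-go README. Note that the default embedding function (the nil argument below) calls OpenAI and therefore needs OPENAI_API_KEY; chromem-go also supports local embedding backends such as Ollama:

```go
package main

import (
	"context"
	"fmt"
	"runtime"

	"github.com/philippgille/chromem-go"
)

func main() {
	ctx := context.Background()

	// An in-memory DB; chromem.NewPersistentDB would persist to disk instead.
	db := chromem.NewDB()

	// nil embedding func selects the default, which uses OpenAI embeddings.
	c, err := db.CreateCollection("knowledge-base", nil, nil)
	if err != nil {
		panic(err)
	}

	// Add documents; embeddings are computed concurrently.
	err = c.AddDocuments(ctx, []chromem.Document{
		{ID: "1", Content: "The sky is blue because of Rayleigh scattering."},
		{ID: "2", Content: "Leaves are green because chlorophyll absorbs red and blue light."},
	}, runtime.NumCPU())
	if err != nil {
		panic(err)
	}

	// Semantic search: return the single most similar document.
	res, err := c.Query(ctx, "Why is the sky blue?", 1, nil, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(res[0].Content, res[0].Similarity)
}
```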
mudler/LocalRecall
LocalRecall is a lightweight, fully local RESTful API that Go developers can integrate to provide their AI agents with persistent memory and knowledge base capabilities through vector database storage and semantic search, operating entirely offline without GPUs or cloud services.
AI Agent Frameworks
deepnoodle-ai/dive
I love the company's name! Deep Noodle derives its name from the phrase "noodling about something", and doesn't everyone love deep noodling? With Dive, Deep Noodle created an AI toolkit for Go that enables developers to create specialized AI agents, automate workflows, and quickly integrate with the leading LLM providers.
Deep Noodle is also known for Risor, an embedded scripting language for Go.
Protocol-Lattice/go-agent
Protocol-Lattice/go-agent is a production-ready Go framework for building high-performance AI agents with graph-aware memory, UTCP-native tools, and multi-agent orchestration; the project claims 10-50x faster operations and sub-millisecond LRU caching. Lattice promises to do the "orchestration plumbing" so that you can focus on the interesting stuff, a.k.a. domain logic.
Oh, and BTW, "UTCP" stands for Universal Tool Calling Protocol and is like MCP but without the overhead.
mudler/LocalAGI
LocalAGI is not about Artificial General Intelligence (AGI), which is still beyond the horizon. The "AG" in LocalAGI seems to stand for "Agent": LocalAGI is a self-hostable AI agent platform built with Go that Go developers can use to create customizable AI assistants, automations, and multi-agent teams with persistent memory, integrations (Discord, Slack, GitHub), and OpenAI-compatible REST APIs—all running locally without cloud dependencies.
Model Context Protocol (MCP) Tools
modelcontextprotocol/go-sdk
If you want to write MCP servers or clients in Go, there is almost no way around the official Go SDK for the Model Context Protocol. Well, "almost", because the README explicitly acknowledges its predecessors and idea sources mcp-go, mcp-golang, and go-mcp, which continue to be useful projects in their own right.
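To show the shape of the SDK, here is a rough sketch of a stdio server with a single tool, modeled on the README. The SDK is still pre-v1 and its API has been evolving, so check the repository for the current signatures:

```go
package main

import (
	"context"
	"log"

	"github.com/modelcontextprotocol/go-sdk/mcp"
)

// Input describes the tool's arguments; the SDK derives a JSON schema from it.
type Input struct {
	Name string `json:"name" jsonschema:"the name of the person to greet"`
}

// Output is the tool's structured result.
type Output struct {
	Greeting string `json:"greeting"`
}

// SayHi is the tool handler: typed input in, typed output out.
func SayHi(ctx context.Context, req *mcp.CallToolRequest, in Input) (*mcp.CallToolResult, Output, error) {
	return nil, Output{Greeting: "Hi " + in.Name}, nil
}

func main() {
	server := mcp.NewServer(&mcp.Implementation{Name: "greeter", Version: "v1.0.0"}, nil)
	mcp.AddTool(server, &mcp.Tool{Name: "greet", Description: "say hi to someone"}, SayHi)

	// Serve over stdin/stdout until the client disconnects.
	if err := server.Run(context.Background(), &mcp.StdioTransport{}); err != nil {
		log.Fatal(err)
	}
}
```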
modelcontextprotocol/registry
The MCP Registry is the official, community-driven MCP server repository, built in Go with PostgreSQL. Think "App store for MCP servers."
mark3labs/mcphost
From the maker of mcp-go (mentioned in the go-sdk entry above), MCPHost is a CLI application that connects LLMs (Claude, Ollama, OpenAI) to external tools via the Model Context Protocol.
Machine Learning Frameworks
Now we're entering the fields of ML research and applications. I don't know how many of you still know Gorgonia, a machine-learning library along the lines of Theano and TensorFlow and one of the earliest (if not the earliest) ML libraries for Go. Unfortunately, development stalled in 2023. But that's not the end of ML in Go; au contraire, mon ami! With GoMLX and Hugot, two ML libraries have set out to move Go deeper into the ML space.
GoMLX
GoMLX positions itself as "PyTorch/Jax/TensorFlow for Go", which is pretty similar to Gorgonia's positioning. For extra fun (or profit), GoMLX even runs in the browser or on embedded devices.
Hugot
Hugot is a library for integrating Hugging Face transformers into Go applications. Hugot can download models in ONNX format from Hugging Face and integrate them into a Go application for running inference, optionally with a pure Go backend for a minimal runtime size and for avoiding C compilation hassles. (I almost typed "C complication". A Freudian typo?)
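As a rough sketch of what working with Hugot looks like, here is a sentiment-classification pipeline loosely based on the README. Treat the session constructor and the model path as placeholders: the available session types depend on which backend (pure Go, ONNX Runtime, XLA) you build with:

```go
package main

import (
	"fmt"
	"log"

	"github.com/knights-analytics/hugot"
)

func main() {
	// The pure Go backend; other backends have their own session constructors
	// and are selected via build tags.
	session, err := hugot.NewGoSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Destroy()

	// Placeholder path to an ONNX model previously fetched from Hugging Face.
	config := hugot.TextClassificationConfig{
		ModelPath: "./models/distilbert-base-uncased-finetuned-sst-2-english",
		Name:      "sentiment",
	}
	pipeline, err := hugot.NewPipeline(session, config)
	if err != nil {
		log.Fatal(err)
	}

	// Run inference on a batch of inputs.
	results, err := pipeline.RunPipeline([]string{"Go and AI are a great team!"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(results)
}
```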
In an AI application architecture, Hugot sits one level above GoMLX, making the two libraries a perfect duo for ML in Go. See also the following article:
GoMLX and Hugot - Knights Analytics
Forget the "language divide" (Python for ML research, and a compiled language for deployment). With GoMLX and Hugot, a single language—Go—can cover the whole pipeline.
Conclusion
While the list is anything but complete, it should cover enough AI topics to make you curious to dig deeper, if you weren't already. If it does, I'll happily declare this mission completed!
Completely unrelated to Go
How Rob Pike got spammed with an AI slop “act of kindness”
AI can be useful, but it is also abused. Here is the story of how a non-profit org's well-meant "thank you" offensive made some people, including Rob Pike, quite irate.
Prompt caching: 10x cheaper LLM tokens, but how? | ngrok blog
If you want to dig into using LLMs with Go, this article is a must. The title undersells it: the article does eventually get to prompt caching, but first it walks you through all aspects of LLMs, in the clear and pragmatic style I love Sam Rose's articles for. (Sam who?)
