Little Helper • The Applied Go Weekly Newsletter 2026-05-03

Your weekly source of Go news, tips, and projects

Little Helper
Hi ,
Black-and-white drawings can have a special charm, but drawing the world as black or white loses all the shades in between. The ongoing discussions about LLMs strike me as a battle of painters, the ones equipped with deep black paint, the others with shining white. "LLMs kill our jobs!" versus "We vibe-code everything!" Time to point a Spotlight at the nuances.
But first, have some closer looks at Swiss hash tables, lock-free event loops, and non-invasive heap profiling.
Enjoy!
–Christoph
Featured articles
Swissing a table
A Swiss table is similar to other tables (evidence in the article). A Swiss hash table, though, is different. But what exactly makes it different, and what advantages does it offer over simpler hash algorithms? Philip Earl investigates.
How Event Loops Work: Building One in Go from Scratch
Goroutines? Who needs goroutines?
Here's a single-threaded TCP server doing non-blocking I/O, similar to how Redis manages connections. According to the author, this is "just a fancy way of saying I avoided the standard library to make it harder for myself."
Zero-config Go heap profiling | Coroot Blog
How to add continuous heap profiling:
- [ ] Code changes
- [ ] pprof endpoints
- [ ] annotations
- [x] none of the above
Podcast corner
083: Lisette, inspired by Rust, compiles to Go with Iván Ovejero
If there were a language that had niceties like pattern matching in the type system, and that compiled to Go, would you switch?
Lisette is such a language, and its inventor Iván Ovejero joins Dominic St-Pierre in this episode to talk about his language, Go, and programming languages in general.
OpenAPI 3.1.0 support in kin-openapi, and a CRITical look at agentic coding
Quobix of Princess Beef Heavy Industries, a.k.a. Tomasz Tomcyk, is this episode's guest. He, Jonathan, and Shay discuss CRIT.md, a tool (written in Go, in case you're wondering) for interactively reviewing a coding agent's work. Plus much more; hit play to find out.
Spotlight: Little Helper: How AI can make coding more fun (instead of eliminating it)
The software developer community appears to be split into two camps, divided by an insurmountable rift. One camp fully embraces large language models and turns from hand-coding to orchestrating LLM agents. The other camp rejects LLMs (and perhaps other AI, too), because they take away the joy that comes with exploring codebases, designing data structures, and implementing algorithms.
I'm not here to take sides. Rather, I want to change the perspective and focus on tasks that LLMs can do for developers without generating code.
The image I'll be painting below isn't far from "Little Helper," a tiny humanoid robot with an incandescent light bulb for a head that assisted Gyro Gearloose, an ingenious inventor from the Donald Duck universe. Little Helper doesn't take the act of inventing away from Gyro, but it frequently helps Gyro out of a jam.
LLMs can become your Little Helpers.
A practical example
Let me start with a recent session I ran for checking some code for security issues. Here's the prompt, simple and straightforward:
run govulncheck --help and gosec --help and run
a thorough security analysis on this repo.
Write your findings and recommendations to
security-check-2026-04-05.md
To which Claude responded:
Done. Here's a summary of what the analysis found:
govulncheck — clean: no known CVEs in any reachable symbols.
gosec — 70 issues, 3 HIGH:
┌─────┬─────────────────────────────────────────────┬──────────────────┬────────────────────┐
│ # │ Finding │ File │ Real Risk │
├─────┼─────────────────────────────────────────────┼──────────────────┼────────────────────┤
│ 1 │ Path traversal (G703) via symlink in │ generate.go:359 │ Low but fixable │
│ │ filepath.WalkDir callback │ │ │
├─────┼─────────────────────────────────────────────┼──────────────────┼────────────────────┤
│ 2 │ Symlink TOCTOU (G122) in same Walk callback │ generate.go:355 │ Low (trusted path) │
├─────┼─────────────────────────────────────────────┼──────────────────┼────────────────────┤
│ 3 │ Goroutine uses context.Background (G118) │ web/update.go:27 │ Intentional — │
│ │ while request context available │ │ needs #nosec │
└─────┴─────────────────────────────────────────────┴──────────────────┴────────────────────┘
Noteworthy MEDIUM issues:
- Session cookie missing Secure: true (web/auth.go:91) — trivial one-line fix
- No ReadHeaderTimeout on http.Server — Slowloris mitigation, one-line fix
- 31 unhandled tx.Rollback() errors — cosmetic, use _ = tx.Rollback()
staticcheck — no security issues; 46 code quality items (unused
code, capitalized error strings).
Top 3 actionable fixes in priority order:
1. Add Secure: true to session cookie in news/web/auth.go:91
2. Add ReadHeaderTimeout: 5 * time.Second to http.Server in news/web/server.go
3. Add path-escape validation after filepath.Rel() in news/generate.go:350
✻ Churned for 4m 58s
So that's not much more info than I would have gotten from running those tools myself. I could have put more emphasis on "do your own vulnerability checks" in the prompt, but I got curious about the third gosec issue. "Goroutine uses context" isn't exactly unusual, so I asked Claude to explain this one.
Referring to G118 — Goroutine uses context.Background while
request context is available in news/web/update.go:27:
go func() {
updateCtx, updateCancel := context.WithTimeout(context.Background(), s.cfg.UpdateTimeout)
defer updateCancel()
_ = s.ops.RunUpdate(updateCtx, jobRunID)
}()
The HTTP handler receives a request, kicks off the update
pipeline in a background goroutine, then immediately
redirects the browser to a status page. The goroutine
outlives the HTTP request by design.
If the code used the request's context instead:
updateCtx, updateCancel := context.WithTimeout(r.Context(), s.cfg.UpdateTimeout)
…then the update would be cancelled the moment the HTTP
response is sent (which happens almost immediately,
before the redirect even reaches the browser). The
entire Reddit fetch + enrichment + SQLite write pipeline
would be aborted seconds after it started.
Using context.Background() as the parent means the
update runs to completion (or until s.cfg.UpdateTimeout
fires) regardless of whether the client is still
connected. That is exactly the intended behaviour — the
user kicks off the job, gets redirected to a progress
page, and the job runs independently in the background.
gosec flags this as a policy warning because detaching
from the request context is a common source of bugs
(leaked goroutines, missing cancellation), so it asks
you to make the choice explicit. In this case the choice
is correct; the fix is just to add a //nolint:contextcheck
or #nosec G118 comment with a short explanation so the
intent is documented and future readers (and tools)
don't re-flag it.
Claude correctly inferred that my code works as designed and suggested excluding the code from further gosec runs (but only for that particular issue, G118).
This was just Claude 4.6 and some superficial prompting. Newer models like Opus 4.7 or GPT 5.5 are said to be fierce security-hole detectors; you may have heard of Claude Mythos finding 271 zero-day security holes in Firefox.
Reviews like these are such a great use case for LLMs because LLMs excel at reading and interpreting information, while they aren't designed to produce genuinely new output. So instead of having them produce code, you can get a lot of value out of them by asking them to review code.
And that's the first item on my list of what LLMs can do for developers:
What LLMs can do for developers
Review your code
LLMs are relentless reviewers: never tired, never too bored to do another review. Have them inspect your code for:
- vulnerabilities (see above)
- performance smells
- logical bugs (that the compiler can't catch)
- subtle edge cases that might surface three months after production rollout
- opportunities for refactoring
In fact, anything you can ask a teammate about your code, you can ask an LLM. I'm not saying that the quality of answers is always comparable, but there will be times when your teammates are hard to reach, and a quick LLM answer is way better than an answer days later, or no answer at all.
Explore unfamiliar codebases
What's more daunting than having to become familiar with an unknown codebase? An LLM can dissect the architecture, walk you through the code, explain how the parts work together, and guide you to the parts relevant to your task. My favorite approach to this is Simon Willison's linear walkthrough.
Besides a full walkthrough, you can also focus on a specific package, function, or line of code and pester the LLM with your questions. No question is too dumb to ask!
Be your debugging rubber duck
Do you know those types of bugs that seem to have no logical explanation? The ones where you've exhausted all hypotheses about possible root causes?
The classic way to escape this pit is a rubber duck. Yes, a physical rubber duck sitting on your desk. When you start talking to it about the bug, you can sort your thoughts more clearly than by thinking silently. As a side effect, talking to a rubber duck may look silly, but it probably looks less silly than talking to yourself.
An LLM is your next-level rubber duck. Unlike a real rubber duck, an LLM can actually respond and help you sort your stream of thoughts and your myriad hypotheses about why that damned bug won't stop playing whack-a-mole with you.
Write boring boilerplate and scaffolding
Ok, I promised to write about how LLMs can help you without taking the coding part away from you, but every coding task includes parts that can bore anyone to death: writing boilerplate code, over and over again. Sometimes you can find a smart way to factor out boilerplate code, but if you're in a hurry, you might want to skip the R&D part and just have an LLM produce the inevitable boilerplate code for you.
Or if the boilerplate follows a deterministic pattern with a few parameters that can vary, why not ask the LLM to write a tool that generates this kind of boilerplate?
I do this quite often: Ask an LLM to write a tool for a task, rather than repeatedly asking the LLM to do the task. This approach saves tokens and ensures deterministic output on each run.
Generate function and package documentation
Writing documentation is not everyone's cup of tea. Summarizing and describing text (including code) is a core competency of LLMs (as opposed to logical reasoning, a trait LLMs can only simulate). So: lazy dev plus chatty LLM equals a match made in heaven.
Write unit test stubs and extend tests
Even if you don't want to hand over control of your testing, an LLM can at least set up test stubs that you only have to complete. Likewise, an LLM can suggest additional tests to cover edge cases you might have overlooked.
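A typical scaffold an LLM can produce for you is a table-driven case list like the one below (the function under test and the cases are invented for illustration; in a real project this would live in a `_test.go` file):

```go
package main

import "fmt"

// Clamp is a stand-in function under test.
func Clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

// checkClamp runs the kind of table-driven cases an LLM can stub out;
// you fill in the cases, the LLM writes the scaffolding.
func checkClamp() error {
	cases := []struct {
		name      string
		v, lo, hi int
		want      int
	}{
		{"below range", -5, 0, 10, 0},
		{"in range", 5, 0, 10, 5},
		{"above range", 15, 0, 10, 10},
		// TODO: edge cases to add -- equal bounds, lo > hi.
	}
	for _, c := range cases {
		if got := Clamp(c.v, c.lo, c.hi); got != c.want {
			return fmt.Errorf("%s: got %d, want %d", c.name, got, c.want)
		}
	}
	return nil
}

func main() {
	if err := checkClamp(); err != nil {
		panic(err)
	}
	fmt.Println("all cases pass")
}
```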
Deal with environment configuration
Whether it's a Dockerfile, a Docker Compose file, or a Kubernetes configuration, LLMs can not only generate them based on your project structure and tool stack but also help troubleshoot a broken configuration.
Whether a tool is good or bad depends on the usage
AI slop is real, and LLM-generated code may accumulate tech debt in the long run. Yet a knife can't hurt you if you know how to use it properly and safely. LLMs can be used in many good and even more bad ways, but it's never a good idea to go to extremes: neither vibe-coding (in the original sense: letting an LLM write code and never looking at it) nor total rejection of LLM assistance is a good response to the new reality LLMs have created.
Instead, use LLMs as your Little Helpers: handy when you need them, always in service of your craft but never in place of it.
More articles, videos, talks
HAL by any other name
Chatting with an LLM by typing and reading text is so yesteryear. Zarl runs the full loop: speech-to-text -> LLM processing -> tool calls -> LLM output -> text-to-speech -> avatar animation; rinse, repeat. And everything lives inside a single Go binary. Repo here.
It's almost like HAL 9000 from Kubrick's "2001: A Space Odyssey," with one significant exception: good ol' HAL lacked an avatar. Technology wasn't ready for that in 2001!
Who's on call? How Claude helped us calculate this 2,500x faster | Blog | incident.io
The folks at Incident.io wanted to optimize a slow scheduling algorithm. Now, the point about scheduling is that it's darn complex, and "optimizing" isn't just switching to a better sort algorithm and peephole-optimizing some nested loops. After a couple of tedious steps, the team invited a chap called Claude, who seems to have a knack for cracking linear programming problems.
OpenTelemetry-Native Logging in Go with the Slog Bridge · Dash0
Go's slog package can produce structured log output but it doesn't present itself as a source to OpenTelemetry (which is totally expected, as packages in the standard library aren't concerned with bridging to specific 3rd-party systems). That's what bridges like otelslog are for. This article gets you up and running with using otelslog in your projects.
Hoisting wire plumbing out of your Go handlers
If handler code contains 80% transport-specific boilerplate, it's time to stow those 80% away behind a generic adapter.
Projects
Libraries
KeibiSoft/KeibiDrop: KeibiDrop is a tool that allows any two computers to transfer files and folders via a secure virtual mountpoint
Think "Syncthing but with FUSE-mounted folders" or "Wetransfer but with end-to-end encryption" or "Localsend but works over the internet."
benedoc-inc/pdfer: A pure Go library for PDF processing with comprehensive XFA (XML Forms Architecture) support
PDF libraries are typically built to either compose PDF documents or extract content from them. This one was made for filling out forms in a PDF, but it can also merge, split, and reorder pages, convert between PDF/A formats, diff two PDFs, and more.
overthinker1127/cgo-gen: a Rust CLI that generates Go cgo wrappers from C/C++ headers
I couldn't find out why it's written in Rust (except probably a personal preference by the author), but if you need to generate Go bindings for C/C++ libraries, this is the tool for you.
jeremiah-masters/dlht: High-performance, lock-free concurrent hash table in Go, based on DLHT, with cooperative resizing and cache-efficient buckets.
Yes, it's fast. But that's not all: This hash table implementation is formally verified using Quint, and the author also used the verification language to build and verify an extension to the DLHT specification that adds a delete operation.
Tools and applications
janosmiko/lfk: ⚡ LFK is a lightning-fast, keyboard-focused, yazi-inspired terminal user interface for navigating and managing Kubernetes clusters. Built for speed and efficiency, it brings a three-column Miller columns layout with an owner-based resource hierarchy to your terminal.
"yazi-inspired?" Well, if you don't know yazi (I didn't, but I brew-installed it in an instant), it's a blazingly fast file manager TUI app written in a language known for blazingly fast fastness. LFK is only lightning-fast 😭 but comes with an array of features that some k8s admins might yearn for.
itchyny/go2bf: Compile Go to Brainfuck!
This had to happen eventually: A transpiler from Go to Brainfuck, a language consisting of only eight commands (and this fact alone implies that Brainfuck source code is totally unreadable). Impress your friends with working Brainfuck apps!
(I just hope Spamhaus doesn't block this issue because of inappropriate language.)
xdagiz/xytz: A Beautiful YouTube Downloader/Player TUI
A few more of these TUI apps, and we won't need a GUI anymore. The only thing xytz doesn't do is play videos in the terminal. (But it can start mpv.)
Declarative git repo sync/migration tool and self hosted code search engine
Don't put all your eggs into one basket, and don't put all your valuable repos on a single provider's servers. miroir (French for "mirror"—yes, I know, you guessed it already) replicates repositories between git hosting services. Note that it's one-way replication: one repo is the source of truth, and its replicas reflect every change, even to meta information (public/private visibility, archived status, and descriptions). Heed the author's cautionary advice.
conorarmstrong/zx_go: ZX Spectrum emulator written in Go
Well, I don't need this one. I still have a real ZX Spectrum around! (But, TBH, I haven't switched it on for maybe three decades. If I do, and some aged electrolytic capacitor explodes, I might come back here.)
Reading the README, however, makes me think I could get much more out of this emulator than my old ZX Spectrum hardware, joy-wise.
karlpip/akustik: Multiroom audio system for playing files and streaming content. - Codeberg.org
Stream music from your local music library and remote services (Tidal at the moment) to multiple speakers simultaneously (but not synchronously).
ani03sha/kv-fabric: Distributed KV store with tunable consistency, MVCC, replication lag measurement, and the phantom durability trap reproduced and documented.
Which consistency model does this K/V store implement? All of them. kv-fabric users can run benchmarks to determine the optimal consistency model for their specific situation.
GitHub - Balaji01-4D/pgxcli: Postgres CLI with autocompletion and syntax highlighting · GitHub
pgxcli is inspired by pgcli but focuses on speed and simple setup.
jamiefdhurst/journal: A simple blog available as an API, written in Go with accompanying tests and documentation. Can be used as a useful starting point for Go web projects.
Why don't you blog already? Maybe because many blog systems are overkill. Meet journal, a low-barrier entry to blogging.
Sambigeara/pollen: Distributed WASM runtime. Workloads place themselves over a zero-trust mesh. One static binary.
WebAssembly has escaped the Web since standalone WASM runtimes entered the scene. Pollen takes the idea of a unified runtime further by creating a distributed "blob of generic compute."
soub4i/gh-relay: Share a private GitHub repo with anyone, no collaborator invite, no paid seat, no cleanup.
Needless to say you should trust the persons you're sharing your private repo with.
[snyssfx/go-prettify-mode.el: Emacs minor mode for hiding if err != nil in Go programs, like in Goland - Codeberg.org](https://codeberg.org/snyssfx/go-prettify-mode.el)
While proper error handling is crucial for robust code, many devs prefer to see only the happy path. Goland users have enjoyed a collapsed view of error branches for quite a while; now the optimist's tool has arrived in Emacs.
Completely unrelated to Go
AI now writes 97% of my code. Here's what I learned
After having subscribed to Go on March 1st, I'm at a similar percentage of AI-generated code as Swizec. Maybe not the same total amount of code, though. And I'm definitely not generating code for a high-stakes project.
But now I feel more confident with using simple prompts, and less guilty about not having accumulated a sophisticated set of skills and memories and whatnot.
But perhaps the LLM is good without all that stuff because I let it code in a straightforward, low-abstraction language with lots of sample code available.
If it's in JSON, it's not really a configuration file
This article made me think about changing my old start package (my first-ever published Go project, BTW) to add support for the CUE language.
Thoughts on WebAssembly as a stack machine
WebAssembly is an interesting target for many kinds of Go projects. Eli Bendersky got curious about the degree of "stack machine-ness" of the WebAssembly VM. Spoiler: it's not a pure stack machine.
