The Applied Go Weekly Newsletter

August 17, 2025

Commit Messages That Write Themselves

Your weekly source of Go news, tips, and projects

Hi,

The final 🌴 Summer Break 🌴 issue is in your inbox! With this issue, I'm concluding the AI series and looking forward to sending regular issues again!

However, I'm on the edge of starting a new chapter in my professional life, and I want to ensure that the newsletter continues to serve you well. To see where the newsletter stands today, I would kindly ask you to take a quick survey (3 minutes) before August 24 and share your opinions, wishes, and expectations with me.

The Smallest Useful AI App in Go: Commit Messages That Write Themselves

This is the fourth Spotlight in the Go And AI miniseries.

So far, we built Go code to call an LLM via plain HTTP, created an MCP server in Go, and had a look at various AI projects (libraries & tools) written in Go. Now let's create something practical; something that every developer actually needs: a git commit message generator in about 110 lines of Go.

Setup

go mod init git-cmt 
go get github.com/tmc/langchaingo/llms
go get github.com/tmc/langchaingo/llms/anthropic

This initializes a new Go module and installs our two dependencies: LangChainGo for vendor-agnostic LLM integration, and its anthropic subpackage. LangChainGo lets you switch between OpenAI, Anthropic, Ollama, and other providers with a single line change.

This app reads your git diff and generates commit messages that adhere to the Conventional Commits specification (the one that suggests feat/fix/docs prefixes) using an LLM. No more staring at staged changes wondering how to describe them. And no more endless repetitions of "Fix stuff" or "Change stuff" messages in Git histories.

The Core Logic

The whole operation consists of just three steps:

  1. Read staged changes
  2. Let an LLM turn them into a commit message
  3. Let the user review and perform or abort the commit

First, we read staged changes by calling git diff:

func getStagedChanges() (string, error) {
    // git diff --cached -b gets staged changes, ignoring whitespace
    cmd := exec.Command("git", "diff", "--cached", "-b")
    output, err := cmd.Output()
    if err != nil {
        return "", fmt.Errorf("failed to get git diff: %w", err)
    }

    diff := string(output)
    if diff == "" {
        return "", fmt.Errorf("no staged changes found")
    }

    if len(diff) > 3072 {
        diff = diff[:3072] + "\n... (truncated)"
    }

    return diff, nil
}

The output, a diff of all staged changes, is enough to let the LLM conclude what happened at a technical level; it wouldn't be able to reason about the higher ideas or goals behind the changes, due to a lack of context. But in many situations, this level of commit messaging is enough, and when more context is required, the user can add it while reviewing the commit message.

To save tokens (and money), the diff output is capped at 3k characters. This might distort the generated commit message for larger changes, but I assume you keep your commits small and crisp.

Next, the code shall send the diff to an LLM with a prompt that requests a structured, JSON-formatted response:

type Commit struct {
    Type    string `json:"type"`    // feat, fix, docs, etc.
    Scope   string `json:"scope"`   // optional component
    Message string `json:"message"` // the actual description
}

func generateMessage(changes string) (Commit, error) {
    // Easily swap providers here by using another subpackage
    llm, err := anthropic.New(
        anthropic.WithModel("claude-3-5-haiku-latest"),
        anthropic.WithToken(os.Getenv("ANTHROPIC_API_KEY")),
    )
    if err != nil {
        return Commit{}, fmt.Errorf("failed to create LLM client: %w", err)
    }

    prompt := fmt.Sprintf(`You are a git commit message generator.
    Analyze changes and output JSON with:
    - type: feat|fix|docs|style|refactor|test|chore
    - scope: affected component (optional)
    - message: clear description (50 chars max)

    Changes:
    %s

    Return ONLY valid JSON, no other text.`, changes)

    resp, err := llms.GenerateFromSinglePrompt(
        context.Background(),
        llm,
        prompt,
    )
    if err != nil {
        return Commit{}, fmt.Errorf("LLM request failed: %w", err)
    }

    var commit Commit
    if err := json.Unmarshal([]byte(resp), &commit); err != nil {
        return Commit{}, fmt.Errorf("failed to parse JSON response: %w (raw response: %q)", err, resp)
    }

    return commit, nil
}

The Commit struct defines our expected output format, making the AI response type-safe. (Some APIs, such as OpenAI's Responses API, even provide a parameter to force JSON-only output.)

LangChainGo's unified API allows you to swap anthropic.New() with openai.New() or ollama.New() (same for anthropic.With...()) without changing any other code.

Side note: Setting the model's temperature to 0 ensures consistent, focused outputs. That's great for testing; and frankly, for commit messages, we don't need creative writing.

(Fun fact: while experimenting with a few LLMs, I discovered that GPT-OSS requires a temperature of 1; any other temperature setting triggers an error message.)

Finally, we format and display the conventional commit. (I didn't bother factoring this part out into a separate function, so here it is along with the wiring-up of the execution flow in main()):

func main() {
    changes, err := getStagedChanges()
    if err != nil {
        log.Fatalf("Failed to get staged changes: %v", err)
    }

    log.Printf("Staged diff found; generating message for changes")

    commit, err := generateMessage(changes)
    if err != nil {
        log.Fatalf("Failed to generate commit message: %v", err)
    }

    log.Printf("Parsed commit: %+v", commit)

    output := commit.Type
    if commit.Scope != "" {
        output += "(" + commit.Scope + ")"
    }
    output += ": " + commit.Message

    cmd := exec.Command("git", "commit", "-e", "-m", output)
    cmd.Stdin = os.Stdin
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    err = cmd.Run()
    if err != nil {
        log.Fatalf("Failed committing the changes: %s", err)
    }
}

The main function orchestrates everything: get changes, generate message, format and commit. It builds the conventional commit header (type(scope): message) and passes it to git commit -e, so the message opens pre-filled in your editor for review. The scope is optional; if the LLM doesn't detect a clear component, the parentheses are omitted.

Now you have a new Git tool!

The name of the executable, git-cmt, was chosen intentionally. Git automatically recognizes any executable on your PATH named git-<something> as the subcommand git <something>, so if you can train your muscle memory to type

git cmt

instead of git commit, you now have a new Git command for interactive commits with a pre-filled commit message.

You can get the full code from github.com/appliedgocode/git-cmt. I kept it as concise as possible, so feel free to add more functionality, such as reading the API key from a secrets manager, or allowing the tool to continue if the LLM call errors out; it could then simply call git commit -e without a message preset.

Series Wrap-Up

This issue concludes the Go & AI miniseries (but that doesn't mean I won't write about AI topics in the future).

While artificial general intelligence (AGI)—let alone artificial superintelligence (ASI)—isn't on the horizon and the evolution of current language model architectures seems to be stagnating, we shouldn't ignore or belittle what LLMs can do for us right now. This and the three preceding newsletter issues have demonstrated how easily LLMs can be integrated into algorithmic workflows to build a new kind of application and tooling, and to make LLMs actually useful in our daily work.

Like other general-purpose technologies, the impact of AI is materialized not when methods and capabilities improve, but when those improvements are translated into applications and are diffused through productive sectors of the economy.

Arvind Narayanan and Sayash Kapoor: AI as Normal Technology

If you want to learn more about using AI with Go (or vice versa), Matt Boyle released a course a few days ago that takes you deeper into the frontier where hard-coded logic meets neural intuition.

The best part: With the coupon code APPLIEDGO40, you get 40% off the course price and help me a little with keeping this Newsletter (and the blog) up and running. Get more information about the course here (affiliate link)

Tame the artificial neurons!

Happy coding! ʕ◔ϖ◔ʔ

Questions or feedback? Drop me a line. I'd love to hear from you.

Best from Munich, Christoph

Not a subscriber yet?

If you're reading this newsletter issue online, or if someone forwarded it to you, subscribe for regular updates: you'll get every new issue earlier than the online version, and more reliably than an occasional forward.

Find the subscription form at the end of this page.

How I can help

If you're looking for more useful content around Go, here are some ways I can help you become a better Gopher (or a Gopher at all):

On AppliedGo.net, I blog about Go projects, algorithms and data structures in Go, and other fun stuff.

Or visit the AppliedGo.com blog and learn about language specifics, Go updates, and programming-related stuff. 

My AppliedGo YouTube channel hosts quick tip and crash course videos that help you get more productive and creative with Go.

Enroll in my Go course for developers, which stands out for its intensive use of animated graphics to explain abstract concepts intuitively. Numerous short, concise lectures let you schedule your learning flow as you like.

Check it out.


Christoph Berger IT Products and Services
Dachauer Straße 29
Bergkirchen
Germany
