Why We're Building Nanocoder: A Local-First Coding Agent for the Long Haul

The terminal-based AI coding assistant space is moving fast. Tools like Claude Code, Gemini CLI, and various open-source alternatives have made "agentic coding" - where an LLM autonomously reads, writes, and executes code - accessible to developers everywhere.

But as the ecosystem matures, a pattern has emerged that concerns us: many popular tools are built on foundations that prioritise rapid growth over long-term stability. Exploit-based provider access, venture-backed pivots, and pseudo-open-source governance create real risks for developers who build their workflows around these tools.

Nanocoder is our answer. It's a local-first, open-source coding assistant CLI designed to be boring in all the right ways: stable, private, and built on legitimate protocols that won't disappear overnight.

We're not claiming Nanocoder is the most feature-complete option today (although we're adding features fast!). We're building it to be the most trustworthy one.


The Problem with "Move Fast and Break Things"

Let's talk about what happened with OpenCode.

OpenCode grew rapidly by enabling users to route their Claude Pro subscriptions through autonomous agent loops - usage patterns that would normally cost thousands in API fees. This worked through header spoofing that made requests appear to come from official clients.

On January 9, 2026, Anthropic implemented server-side checks that blocked this traffic. Accounts were reviewed. Workflows collapsed overnight.

You can debate whether Anthropic's response was heavy-handed. But the underlying point stands: if your productivity depends on an exploit, you're one patch away from starting over.

Shortly after the block, OpenCode pivoted to "OpenCode Black" - a $200/month premium tier. Users who'd chosen the tool for its open-source ethos found themselves facing a familiar choice: pay up or find something else.

This isn't a critique of OpenCode's team specifically. It's what happens when venture-backed projects need to show returns. The incentives push toward monetisation, and "free and open" becomes "free until we need revenue."


What We're Doing Differently

Nanocoder is governed by the Nano Collective - a group of developers building open-source AI tools without venture funding. This isn't ideological purity for its own sake. It's a practical decision about incentives.

MIT Licensed, genuinely. Our code is yours to fork, modify, and build on. No contributor license agreements that let us relicense later. No "open core" where the useful bits are proprietary.

Free, sustainably. We don't have investors expecting a 10x return. Nanocoder is free and will stay that way. We fund development through donations and consulting work, not by eventually converting users into subscribers.

Building toward independence. Nanocoder works with cloud providers today, but our goal is to make them optional. We're actively developing architectures that help smaller, local models perform better at coding tasks. The endgame isn't "use Claude through a nicer interface" - it's a world where you can run a capable coding agent entirely on your own hardware, free from subscription fees, API limits, and policy changes you didn't agree to.


Local-First by Design

If you're looking for a Claude Code alternative or terminal AI coding tool that respects your privacy, this matters: Nanocoder is designed to run locally.

Your code stays on your machine. Your prompts stay on your machine. If you're running a local model through Ollama or LM Studio, even the inference happens locally.
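
For example, with Ollama installed, you might pull a coding model and launch Nanocoder (the model name here is just an example - pick whatever fits your hardware):

ollama pull qwen2.5-coder:7b
nanocoder

From there, select Ollama with the /provider command.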

This isn't just about privacy (though that matters). It's about performance. A lightweight harness means more resources for your local LLM. We've built Nanocoder with the Ink framework specifically to keep CPU overhead minimal - your model gets the cycles it needs.

Compare this to reports of some terminal AI tools consuming 30-50% CPU while idle. When you're running a local model, that overhead translates directly into slower responses and degraded output quality.


What Nanocoder Does Well Today

We're at 935 stars and 64 releases. Here's what's working:

A beautiful, customisable CLI. Themes, custom commands, flexible configuration - Nanocoder looks good in your terminal and adapts to how you work.

Universal provider support. Any OpenAI-compatible API works out of the box. Ollama, LM Studio, llama.cpp, vLLM, LocalAI for local inference. OpenRouter, OpenAI, or any cloud provider with a compatible endpoint. Switch providers with /provider, and Nanocoder remembers your preferred model for each one.
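
As a sketch, a provider entry in agents.config.json might look something like this - the field names are our illustration, not the canonical schema, so check the repo docs for specifics:

{
  "providers": [
    {
      "name": "ollama",
      "baseUrl": "http://localhost:11434/v1",
      "models": ["qwen2.5-coder:7b"]
    }
  ]
}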

Three development modes. Toggle between Normal (review each tool call), Auto-accept (faster workflows), and Plan mode (AI suggests but doesn't execute - useful for exploration). Switch with Shift+Tab.

MCP integration. Full support for the Model Context Protocol with stdio, HTTP, and WebSocket transports. Connect to filesystem servers, GitHub, Brave Search, or any MCP-compatible tool. The /mcp command shows connected servers and available tools.
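
As with providers, a stdio MCP server might be declared along these lines - @modelcontextprotocol/server-filesystem is a real reference server, but the surrounding keys are illustrative rather than the exact schema:

{
  "mcpServers": [
    {
      "name": "filesystem",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./"]
    }
  ]
}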

Built-in tools that matter. File operations, bash command execution with !command syntax (output becomes context for the LLM), and @file fuzzy search to include file contents in messages. A /usage command shows context consumption visually.
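
A session mixing these might look something like the following (the prompt marker and wording are illustrative):

> !npm test
  [output captured and added to the conversation context]
> The second test fails - fix the off-by-one in @src/pagination.ts
> /usage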

Custom commands as Markdown. Define reusable prompts in .nanocoder/commands/ with YAML frontmatter for parameters, descriptions, and aliases. Template variables with {{parameter}} syntax. Organise by namespace through directories. Ships with /test, /review, /refactor:dry, and /refactor:solid out of the box.
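
As a sketch of the format, a file at .nanocoder/commands/review.md could look like this - the frontmatter keys follow the features described above, but treat the exact names as illustrative:

---
description: Review a file for bugs and style issues
aliases: [rev]
parameters:
  - file
---
Review {{file}} for correctness, unclear naming, and missing tests. Suggest concrete fixes.

Moving a file into a subdirectory, say commands/refactor/, would presumably surface it under the /refactor: namespace, matching the built-ins above.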

Project-aware configuration. agents.config.json can live at project level (for team sharing) or user level. Same for preferences - Nanocoder remembers your choices across sessions.

VS Code extension. Run with --vscode for live diff previews of file changes directly in your editor before approving.

Cross-platform. Works on Linux, macOS, and Windows. Install via npm, Homebrew, or Nix Flakes.

For full documentation: github.com/Nano-Collective/nanocoder


What We're Working On

We're actively developing on two fronts.

First, we're closing gaps with established coding agents - the features developers expect as table stakes: context management, tool reliability, edge-case handling.

Second, we're exploring architectures designed specifically to help smaller models perform better at coding tasks. This matters if you care about local-first development. A 7B-parameter model running on your laptop will never match a cloud-hosted frontier model in raw capability. But with the right scaffolding - better context management, smarter tool orchestration, more structured reasoning loops - you can close that gap significantly.
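
To make "scaffolding" concrete, here's a minimal TypeScript sketch of the kind of structured loop a harness can run around a small model. Every name in it is invented for illustration - this is not Nanocoder's internals:

// Illustrative only: a harness-side loop that constrains a small model
// to one structured step per turn instead of a free-form reply.
type ToolCall = { name: string; args: Record<string, unknown> };
type Step = { thought: string; call?: ToolCall; done?: boolean };

// Stubs standing in for a local model client and a tool runner.
async function askModel(context: string): Promise<Step> {
  return { thought: "nothing left to do", done: true }; // call your local LLM here
}
async function runTool(call: ToolCall): Promise<string> {
  return `ran ${call.name}`; // dispatch to file ops, bash, MCP tools, etc.
}

// Naive context compaction: keep only the most recent window.
function compact(ctx: string): string {
  return ctx.length > 8000 ? ctx.slice(-8000) : ctx;
}

async function agentLoop(task: string, maxSteps = 10): Promise<string> {
  let context = `Task: ${task}`;
  for (let i = 0; i < maxSteps; i++) {
    const step = await askModel(context);    // one structured step per turn
    if (step.done || !step.call) break;
    const result = await runTool(step.call); // execute exactly one tool call
    context = compact(`${context}\n${step.thought}\n${result}`);
  }
  return context;
}

The specifics don't matter; the point is that output constraints, single-step tool calls, and context compaction are harness decisions - and they're where a small model has the most to gain.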

That's the work that excites us: not just wrapping API calls in a nice interface, but rethinking how the harness itself can augment model performance. We're experimenting with different approaches and will share what we learn.


The Philosophy Behind the Tool

The agentic coding space will consolidate. Some tools will become the "walled gardens" backed by major players. Others will flame out when funding dries up or exploits get patched.

We're betting there's room for a third path: community-owned tools built on stable foundations, developed transparently, and designed to last.

Nanocoder isn't trying to be the flashiest option. We're trying to be the one that's still working reliably in two years, without having pivoted to a subscription model or gotten acquired.

If that resonates with you, we'd love your help - whether that's using Nanocoder, contributing code, or just telling us what's broken.


Get Started

npm install -g @nanocollective/nanocoder
nanocoder

Or via Homebrew:

brew tap nano-collective/nanocoder https://github.com/Nano-Collective/nanocoder
brew install nanocoder

Check out the GitHub repo, join the Discord, or just start using it and tell us what's broken.

Built by the community, for the community.

Want to join the discussion? Head over to GitHub to share your thoughts!
