What It Does
OpenCode is an open-source, MIT-licensed AI coding agent that operates primarily through a terminal user interface (TUI), with beta desktop apps for macOS/Windows/Linux and IDE extensions for VS Code, Cursor, JetBrains, Zed, Neovim, and Emacs. It connects to 75+ LLM providers (Anthropic, OpenAI, Google, local models via Ollama, etc.) through the Vercel AI SDK and the Models.dev registry, allowing developers to choose or switch providers without changing tools.
Built by Anomaly Innovations (the SST/Serverless Stack team), OpenCode launched in June 2025 and has grown rapidly to 120K+ GitHub stars. It features two built-in agents (“build” for full-access development, “plan” for read-only analysis), Language Server Protocol (LSP) integration for richer code context, multi-session support, and session sharing. The project uses a monorepo structure built with TypeScript, Bun/Node.js, and Turbo.
Key Features
- Multi-provider LLM support via Vercel AI SDK and Models.dev registry (Anthropic, OpenAI, Google, xAI, DeepSeek, Mistral, local Ollama models)
- Two built-in agents: “build” (full-access) and “plan” (read-only analysis)
- LSP integration for automatic language server loading (Rust, Swift, Terraform, TypeScript, etc.)
- Model Context Protocol (MCP) support for extending capabilities through external tools
- Multi-session support: run multiple agents simultaneously on the same project
- Session sharing via generated links for reference and debugging
- Client/server architecture enabling remote usage scenarios
- Authentication via GitHub Copilot or ChatGPT Plus/Pro credentials
- Desktop app (beta) for macOS, Windows, Linux
- IDE extensions for VS Code, Cursor, JetBrains, Zed, Neovim, Emacs via Agent Client Protocol (ACP)
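The multi-provider setup above is driven by a JSON config file. A minimal sketch, assuming the documented `opencode.json` shape (the schema URL, `provider` block, and the Ollama-via-OpenAI-compatible pattern follow the project docs but may change between releases):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-20250514",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "http://localhost:11434/v1" },
      "models": {
        "llama3.1": {}
      }
    }
  }
}
```

Switching providers is then a one-line change to `model` (e.g. `ollama/llama3.1`), which is what makes the bring-your-own-model pitch credible.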
Use Cases
- Provider-agnostic coding assistance: Teams that want to switch between LLM providers based on cost, performance, or task type without changing their tooling
- Air-gapped / regulated environments: Organizations in healthcare, defense, or fintech that need local-only LLM usage via Ollama (requires careful configuration to disable external calls)
- Multi-model workflows: Developers who want to use different models for different tasks (e.g., a cheaper model for planning, a premium model for complex code generation)
- Terminal-centric workflows: Developers who prefer TUI-based tools and want a richer alternative to simple CLI agents
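The multi-model workflow above can be expressed per agent. A hypothetical sketch, assuming agent-level `model` overrides in `opencode.json` (the `agent` key and the specific model IDs here are illustrative assumptions, not verified against the current schema):

```json
{
  "model": "anthropic/claude-sonnet-4-20250514",
  "agent": {
    "plan": {
      "model": "openai/gpt-4o-mini"
    }
  }
}
```

Here the read-only plan agent uses a cheaper model while the build agent falls back to the top-level default.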
Adoption Level Analysis
Small teams (<20 engineers): Fits well. Free and open-source with straightforward installation via Homebrew or npm. BYOM (bring your own model) means no additional vendor cost beyond LLM API usage. However, the TUI has been criticized as complex and buggy, which may frustrate less technical users. RAM consumption (1GB+ reported) is notable for a terminal app.
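For reference, the install paths mentioned above (commands as commonly documented for the project; the tap and package names may have changed, so verify against the README before use):

```shell
# Homebrew (macOS/Linux)
brew install sst/tap/opencode

# npm (any platform with Node.js installed)
npm install -g opencode-ai

# sanity check
opencode --version
```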
Medium orgs (20-200 engineers): Reasonable fit with caveats. Multi-session and session sharing features support team workflows. The OpenCode Zen pay-as-you-go gateway simplifies model access management. However, the rapid release cadence with frequent regressions, documented telemetry concerns, and security issues (potential RCE via provider-based configuration injection) require careful evaluation for production use. No enterprise governance features documented.
Enterprise (200+ engineers): Does not fit well today. No enterprise-grade access controls, audit logging, or compliance features documented. The privacy claims have been challenged by the community (undisclosed external API calls). The rapid, stability-sacrificing release cadence is a liability for enterprise environments. The security posture (permissive by default, web-based config pulling) is concerning for regulated environments. Teams needing enterprise governance should evaluate Warp Oz or wait for OpenCode to mature.
Alternatives
| Alternative | Key Difference | Prefer when… |
|---|---|---|
| Claude Code (Anthropic) | Tightly optimized for Claude models, proprietary | You are all-in on Anthropic and want the most polished Claude experience |
| Aider | Git-native with auto-commit, Python-based, most mature | Git workflow integration is critical and you want battle-tested stability |
| Codex (OpenAI) | Rust-based, 80MB RAM footprint, locked to OpenAI | You want minimal resource usage and are committed to OpenAI models |
| Goose (Block) | MCP-native, donated to AAIF for neutral governance | You want community-governed open source with MCP-first architecture |
| Cline | VS Code-native, multi-provider | You prefer IDE-first rather than terminal-first workflow |
Evidence & Sources
- InfoQ: OpenCode Coding Agent (Feb 2026) — independent coverage by Sergio De Simone
- Hacker News Discussion — candid community feedback including privacy concerns, bug reports, and comparisons
- DEV Community: OpenCode vs Claude Code vs Aider — independent comparison
- Morph LLM: We Tested 15 AI Coding Agents — independent benchmark (note: OpenCode is a harness, so performance depends on underlying model)
- Tembo: 2026 Guide to Coding CLI Tools — 15-tool comparison guide
- OpenCode GitHub Repository — source code, 120K+ stars
Notes & Caveats
- Privacy concerns are documented and substantive. Hacker News users discovered OpenCode sends prompts to external services for session title generation even when configured with local models. A fork (RolandCode) was created specifically to remove telemetry. The “privacy-first” marketing does not match the default behavior.
- High resource consumption. Multiple reports of 1GB+ RAM usage for a TUI application. Contrast with Codex (Rust) at 80MB.
- Rapid, destabilizing release cadence. The creator acknowledged shipping “prototype features that probably weren’t worth shipping.” Features are frequently added, removed, and broken. This is typical of a young project but makes it risky for production workflows.
- Security posture is permissive by default. The tool does not ask for permission before running commands. An open GitHub issue documents potential RCE through provider-based configuration injection. Web-based config pulling by default is a supply-chain risk.
- Repository migration history. The original opencode-ai/opencode repository was archived in September 2025 and moved to anomalyco/opencode. Users tracking the old repo may miss updates.
- Commercial tier quality concerns. The OpenCode Go subscription ($10/month) was criticized on Hacker News as using lower-quality models (GLM-5) that produced “gibberish” compared to using top-tier models directly.
- Windows antivirus flags. The binary gets flagged by Windows AV due to its shell execution capabilities, creating deployment friction on Windows.
- Star count vs. actual usage. While 120K+ stars is impressive, the project benefits from the SST community’s existing large audience. Independent usage metrics are not available.
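On the permissive-by-default security posture noted in the caveats above, the defaults can be tightened in config. A sketch assuming the documented `permission` block (the key names and the ask/allow/deny values follow the docs but may differ in current releases):

```json
{
  "permission": {
    "edit": "ask",
    "bash": "ask",
    "webfetch": "deny"
  }
}
```

This does not address web-based config pulling or the reported RCE vector, but it does restore a confirmation prompt before shell commands run.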