## Overview
gptme is an open-source, locally runnable AI agent CLI created in March 2023 by Erik Bjare. It wraps any major LLM (including fully local models via llama.cpp) in a terminal interface and gives it direct, unconstrained access to your shell, file system, browser, and desktop. With 4,200+ GitHub stars and active development through 2026, it is a credible, mature alternative to commercial agent CLIs like Claude Code or Codex CLI.
The core loop: the user prompts, the model selects tools, tools execute in the local environment, output feeds back to the model, and the model self-corrects until the task is done.
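The loop described above can be sketched as follows. This is an illustrative sketch, not gptme's actual internals: `query_model` and `run_tool` are hypothetical stand-ins for the LLM call and local tool execution.

```python
# Minimal sketch of the prompt -> tool -> output -> self-correct loop.
# All names here are hypothetical, not gptme's real API.
from dataclasses import dataclass, field

@dataclass
class Agent:
    history: list = field(default_factory=list)

    def query_model(self, history):
        # Stand-in for an LLM call: request one tool, then finish.
        if not any(m.startswith("tool:") for m in history):
            return "tool: shell: echo hello"
        return "done"

    def run_tool(self, request):
        # Stand-in for executing the requested tool locally.
        return f"output of ({request})"

    def run(self, prompt, max_steps=5):
        self.history.append(prompt)
        for _ in range(max_steps):
            reply = self.query_model(self.history)
            if reply == "done":
                break
            self.history.append(reply)                  # model's tool request
            self.history.append(self.run_tool(reply))   # tool output fed back
        return self.history
```

The key property is the feedback edge: tool output re-enters the model's context, which is what lets the model notice and correct its own failures.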
## Key Tools
| Tool | Description |
|---|---|
| `shell` | Execute shell commands in your terminal |
| `ipython` | Run Python code with installed libraries |
| `read`/`save`/`patch`/`morph` | Full file-system access, including incremental patching |
| `browser` | Playwright-based web search and navigation |
| `vision` | Process images and screenshots |
| `computer` | Full desktop GUI access (macOS computer use) |
| `tmux` | Long-lived commands in persistent terminal sessions |
| `subagent` | Spawn sub-agents for parallel or isolated tasks |
| `rag` | Retrieval-augmented generation over local files |
| `gh` | GitHub CLI integration |
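A tool set like the one in the table is naturally modeled as a registry mapping tool names to callables, with a single dispatch point. The sketch below illustrates that pattern with two toy tools; it is hypothetical and not how gptme implements its tools (gptme's real `ipython`, for instance, keeps a persistent session).

```python
# Hypothetical tool registry mapping names (as in the table) to callables.
import subprocess

TOOLS = {}

def register(name):
    """Decorator that adds a function to the tool registry."""
    def deco(fn):
        TOOLS[name] = fn
        return fn
    return deco

@register("shell")
def shell(cmd: str) -> str:
    # Run a shell command and capture its stdout.
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

@register("ipython")
def ipython(code: str) -> str:
    # Toy version: evaluate a single Python expression.
    return repr(eval(code))

def dispatch(name: str, arg: str) -> str:
    """Route a model-issued tool request to the matching implementation."""
    return TOOLS[name](arg)
```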
## LLM Provider Support
- Anthropic (Claude)
- OpenAI (GPT-4o, o1, o3)
- Google (Gemini)
- xAI (Grok)
- DeepSeek
- OpenRouter (100+ models)
- Local models via llama.cpp (no API key required)
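Provider-agnostic CLIs commonly select a backend with a `provider/model` string (e.g. `openai/gpt-4o`). The parser below is a hypothetical illustration of that routing idea, not gptme's actual code; the provider names are taken from the list above.

```python
# Hypothetical "provider/model" spec parser for provider-agnostic routing.
from typing import NamedTuple, Optional

class ModelSpec(NamedTuple):
    provider: str
    model: Optional[str]  # None means "use the provider's default model"

KNOWN_PROVIDERS = {
    "anthropic", "openai", "gemini", "xai",
    "deepseek", "openrouter", "local",
}

def parse_model(spec: str) -> ModelSpec:
    """Split 'provider/model' (or bare 'provider') into its parts."""
    provider, _, model = spec.partition("/")
    if provider not in KNOWN_PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return ModelSpec(provider, model or None)
```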
## Extensibility Model
- Plugins — Python packages registered via `gptme.toml`, adding tools, hooks, and commands
- Skills — Lightweight workflow bundles (Anthropic format) that auto-load when mentioned by name
- Lessons — Contextual guidance auto-injected into conversations by keyword/tool/pattern matching
- Hooks — Lifecycle callbacks at key events (before/after tool call, conversation start)
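Lifecycle hooks of the kind listed above are typically implemented as an event-to-callbacks registry. The sketch below shows that shape with hypothetical event names drawn from the bullet; it is not gptme's plugin API.

```python
# Hypothetical hook registry: callbacks keyed by lifecycle event name.
from collections import defaultdict

HOOKS = defaultdict(list)

def on(event):
    """Decorator registering a callback for a lifecycle event."""
    def deco(fn):
        HOOKS[event].append(fn)
        return fn
    return deco

def emit(event, **ctx):
    """Fire all callbacks for an event, collecting their results."""
    return [fn(**ctx) for fn in HOOKS[event]]

@on("before_tool_call")
def log_tool(tool, **_):
    # Example hook: observe which tool is about to run.
    return f"about to run {tool}"
```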
Community plugins in gptme-contrib include: multi-model consensus, image generation, LSP integration, and work-state persistence.
## Integrations
- MCP (Model Context Protocol): Dynamic discovery and loading of any MCP server as a tool source
- ACP (Agent Client Protocol): Drop-in coding agent for Zed and JetBrains IDEs
- Web UI: Modern self-hostable interface at chat.gptme.org via `gptme-webui`
- REST API: Built-in server exposing a REST API (`gptme-server`)
- gptme.vim: Vim plugin for inline AI assistance
## Autonomous Agent Scaffold
The gptme-agent-template enables persistent autonomous agents:
- Git-tracked “brain” (journal, tasks, knowledge base, lessons)
- Scheduled run loops via systemd/launchd
- GTD-style task queue with YAML metadata
- Meta-learning (lessons system captures behavioral patterns)
- Multi-agent coordination (file leases, message bus, work claiming)
- External integrations: GitHub, email, Discord, Twitter, RSS
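File-lease coordination, as mentioned above, can be built on atomic exclusive file creation: whichever agent creates the lease file first owns the task. The helper below is a sketch of that idea under that assumption, not the template's actual code.

```python
# Hypothetical file-lease helper for multi-agent work claiming.
# O_CREAT | O_EXCL makes creation atomic: exactly one claimant wins.
import os

def claim(lease_path: str, agent_id: str) -> bool:
    """Atomically claim a task; returns False if it is already claimed."""
    try:
        fd = os.open(lease_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    with os.fdopen(fd, "w") as f:
        f.write(agent_id)  # record who holds the lease
    return True

def release(lease_path: str) -> None:
    """Release a held lease so another agent can claim the task."""
    os.remove(lease_path)
```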
Reference agent “Bob” (TimeToBuildBob) has completed 1,700+ autonomous sessions and actively contributes to the gptme repo.
## Trade-offs
Strengths:
- Fully unconstrained: no sandboxing, works directly in your environment
- Provider-agnostic including local models — no cloud lock-in
- Active development with a high release cadence (100+ features per development cycle)
- Evaluation suite for model capability testing
- MCP/ACP integrations connect to the broader AI tooling ecosystem
Weaknesses:
- Unconstrained execution is a liability without wrapper policies in team contexts
- Still pre-1.0 (v0.31 dev as of April 2026), API not yet stable
- Python 3.10+ required; no native Windows support (WSL needed)
- Cloud service (gptme.ai) and desktop app (gptme-tauri) are still WIP
## Usage
```shell
# Install
pipx install gptme

# Interactive session
gptme

# Non-interactive autonomous mode
gptme -n 'run the test suite and fix any failing tests'

# With browser support
pipx install 'gptme[browser]'

# All extras
pipx install 'gptme[all]'
```