gptme: Your Personal AI Agent in the Terminal

Erik Bjare (ErikBjare) April 11, 2026

Summary

gptme is an open-source, locally runnable AI agent CLI that gives a language model direct access to your terminal, file system, browser, and desktop. Created in March 2023 — one of the first agent CLIs — it predates Claude Code, Codex CLI, and Cursor Agents. With 4,200+ GitHub stars and still in very active development (v0.31 in April 2026), it occupies a distinct niche: a self-hostable, unconstrained, extensible AI agent you fully control.

What It Does

gptme wraps any major LLM (Claude, GPT, Gemini, Grok, DeepSeek, local via llama.cpp) in a terminal interface and gives it a rich built-in toolset:

  • shell: Execute shell commands in your local environment
  • ipython: Run Python code with your installed libraries
  • read/save/patch/morph: Full file system read-write-edit access
  • browser: Playwright-based web search and navigation
  • vision: Process and analyze images and screenshots
  • computer: Full desktop GUI access (macOS computer use)
  • tmux: Long-lived commands in persistent terminal sessions
  • subagent: Spawn sub-agents for parallel or isolated tasks
  • rag: Retrieval-augmented generation over local files
  • gh: GitHub CLI integration

The key differentiator is that output from every tool is fed back to the model, enabling self-correction loops without human intervention.
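
The loop described above can be sketched in a few lines of Python. This is a minimal illustration of the pattern, not gptme's actual implementation; the `model` interface (`next_action`, `final_answer`) and the single shell tool are assumptions made for the example:

```python
import subprocess

def run_shell(command: str) -> str:
    """Run a shell command and capture stdout+stderr, as an agent tool would."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

def agent_loop(model, task: str, max_steps: int = 5) -> str:
    """Feed each tool's output back to the model until it stops issuing commands."""
    context = [task]
    for _ in range(max_steps):
        action = model.next_action("\n".join(context))
        if action is None:  # model decided it is done
            break
        output = run_shell(action)
        # The crucial step: tool output goes back into the context,
        # so the model can see errors and self-correct on the next turn.
        context.append(f"$ {action}\n{output}")
    return model.final_answer()
```

The real tool generalizes this to many tools and a streaming conversation, but the feedback edge from tool output back into model context is the same.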

Architecture and Extensibility

gptme has a layered extensibility model:

  1. Plugins (Python packages): custom tools, hooks, commands via gptme.toml
  2. Skills (Anthropic-format bundles): lightweight workflow packages that auto-load when mentioned
  3. Lessons: contextual guidance auto-injected into conversations based on keywords and patterns
  4. Hooks: lifecycle callbacks (before/after tool calls, conversation start)

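The hook layer (item 4) can be illustrated with a generic lifecycle-callback registry. This is a sketch of the pattern, not gptme's plugin API; the `HookRegistry` class and event names are invented for the example:

```python
from collections import defaultdict
from typing import Callable

class HookRegistry:
    """Map lifecycle events (e.g. 'before_tool', 'after_tool') to callbacks."""
    def __init__(self):
        self._hooks: dict[str, list[Callable]] = defaultdict(list)

    def on(self, event: str, callback: Callable) -> None:
        """Register a callback for a lifecycle event."""
        self._hooks[event].append(callback)

    def fire(self, event: str, **kwargs) -> None:
        """Invoke every callback registered for the event."""
        for callback in self._hooks[event]:
            callback(**kwargs)

# A plugin might register a callback that logs every tool invocation:
hooks = HookRegistry()
log: list[str] = []
hooks.on("before_tool", lambda tool, **kw: log.append(f"running {tool}"))
hooks.fire("before_tool", tool="shell")
```

The appeal of this design is that plugins can observe and react to the agent's lifecycle without the core needing to know about them.
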
Community plugins in gptme-contrib cover multi-model consensus, image generation, LSP integration, and state persistence.

MCP (Model Context Protocol) is supported: any MCP server can be dynamically discovered and loaded as a tool source. ACP (Agent Client Protocol) makes gptme usable as a drop-in coding agent from Zed and JetBrains IDEs.

Autonomous Agent Capabilities

The gptme-agent-template scaffold enables persistent autonomous agents with:

  • Git-tracked “brain” (journal, tasks, knowledge base, lessons)
  • Scheduled run loops via systemd/launchd
  • GTD-style task queue with YAML metadata
  • Meta-learning via the lessons system
  • Multi-agent coordination (file leases, message bus, work claiming)
  • External integrations: GitHub, email, Discord, Twitter, RSS

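A task in such a queue might look like the following YAML; the field names here are illustrative assumptions, not the template's actual schema:

```yaml
# tasks/fix-ci-flake.yaml (illustrative; field names are assumptions)
id: fix-ci-flake
state: next          # GTD-style: inbox / next / waiting / done
priority: high
created: 2026-04-02
depends_on: []
notes: |
  Browser tests fail intermittently on CI.
  Reproduce locally, then pin the Playwright version.
```

Because the queue lives in Git alongside the journal and knowledge base, every state change the agent makes is versioned and reviewable.
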
The reference agent “Bob” has completed 1,700+ autonomous sessions and actively contributes to the gptme repo itself — opening PRs, fixing CI, and posting on Twitter.

LLM Provider Support

  • Anthropic (Claude)
  • OpenAI (GPT-4o, o1, o3)
  • Google (Gemini)
  • xAI (Grok)
  • DeepSeek
  • OpenRouter (100+ models)
  • Local via llama.cpp (no API key required)

Recent Development (2025–2026)

  • v0.31.0 (Dec 2025): Background jobs, form tool, cost tracking, content-addressable storage
  • v0.30.0 (Nov 2025): Plugin system, context compression, subagent planner mode
  • v0.29.0 (Oct 2025): Lessons system, MCP discovery & dynamic loading, token awareness
  • v0.28.0 (Aug 2025): MCP support, morph tool for fast edits, auto-commit, redesigned server API
  • v0.27.0 (Mar 2025): Pre-commit integration, macOS computer use, Claude 3.7 Sonnet, DeepSeek R1

Development pace is high: the April 2026 dev builds show 100+ feature commits since the last stable release.

Critical Assessment

Strengths:

  • Genuinely unconstrained: no sandboxing, no guardrails by default — your environment, your risk management
  • Provider-agnostic: works with any LLM including fully local ones, avoiding cloud lock-in
  • Mature extensibility: plugins, skills, lessons, hooks cover virtually any customization
  • Active community; the contributor bot (“Bob”) dogfoods the tool, which is a credibility signal
  • A built-in evaluation suite for testing model capabilities is an unusual and valuable addition
  • MCP and ACP integrations connect it to the broader AI tooling ecosystem

Weaknesses / Watch-outs:

  • The -y (auto-approve) and -n (fully autonomous) modes require real trust in the LLM — destructive shell commands can execute without confirmation
  • “Unconstrained” is a feature for power users but a liability in team/enterprise contexts without wrapper policies
  • Still pre-1.0 (v0.31 dev builds); API stability is not guaranteed
  • Python 3.10+ requirement; no native Windows support (WSL required)
  • Cloud service (gptme.ai) and desktop app (gptme-tauri) are still WIP

Positioning: gptme sits squarely between a personal coding assistant (Claude Code, Cursor) and a full agent framework (LangChain, CrewAI). It is more opinionated and ready-to-use than a framework, but more hackable and self-hostable than commercial alternatives. For a Technical Director evaluating “local-first AI automation tooling for individual developers or small teams,” it is a credible Trial candidate — especially for teams already comfortable with CLI-centric workflows.

Recommendation

Radar position: Trial — The tool is mature enough for serious use, the autonomous agent capabilities are genuinely novel, and the provider-agnostic design avoids lock-in. The pre-1.0 status and lack of enterprise guardrails keep it from Adopt. Engineers who want to experiment with local or self-hosted AI agents without framework overhead should evaluate it.