Codel

AI / ML · open-source · AGPL-3.0

At a Glance

Open-source autonomous AI coding agent (2024) that runs inside Docker with a web UI, executing tasks via a terminal, browser automation, and a built-in file editor, with full execution history persisted in PostgreSQL.

Type: open-source
Pricing: open-source
License: AGPL-3.0
Adoption fit: small
Top alternatives:

What It Does

Codel is a self-hosted autonomous AI coding agent that runs entirely inside Docker. Users submit tasks through a browser-based web UI; the agent then autonomously plans and executes steps using three built-in tools: a terminal for running shell commands, a browser (powered by go-rod) for web lookups, and a file editor for viewing and modifying code. All execution history and command outputs are stored in a PostgreSQL database for persistent review. The backend is written in Go; the frontend in TypeScript.

The project launched in March 2024 and briefly attracted attention as one of the first Docker-native autonomous agent implementations with a polished UI. Development stalled at v0.2.2 (April 2024) and has not kept pace with the fast-moving autonomous coding agent landscape.

Key Features

  • Autonomous task-execution loop combining terminal, browser, and editor tools, with no human-in-the-loop checkpoints
  • Docker-based sandbox isolates agent actions from the host (via nested container creation)
  • go-rod browser automation for real-time web information retrieval during task execution
  • Built-in file editor displays modified files in the web UI as the agent works
  • PostgreSQL-backed persistence stores full command history and outputs across sessions
  • OpenAI support (default: gpt-4-0125-preview) with configurable model and endpoint
  • Ollama integration for local/self-hosted model usage via OLLAMA_MODEL and OLLAMA_SERVER_URL
  • Single docker run deployment with environment variable configuration
  • AGPL-3.0 license requires that modified versions, including those offered as a network service, be open-sourced
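
The single-command deployment pattern can be sketched as follows. Only OLLAMA_MODEL and OLLAMA_SERVER_URL are variable names confirmed by the feature list above; the image name, port, and OPEN_AI_KEY variable are illustrative assumptions, not verified against the Codel README.

```shell
# Illustrative deployment sketch (image tag, port, and OPEN_AI_KEY
# are assumptions; OLLAMA_* names come from the feature list above).
docker run -d \
  -p 3000:3000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e OPEN_AI_KEY=sk-... \
  -e OLLAMA_MODEL=llama2 \
  -e OLLAMA_SERVER_URL=http://host.docker.internal:11434 \
  ghcr.io/semanser/codel:latest
```

The docker.sock mount is required so the agent can create its nested sandbox containers; see Notes & Caveats for why this is a security concern.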

Use Cases

  • Local experimentation with the autonomous agent-in-Docker pattern on personal development tasks
  • Reference implementation for studying the architecture of Docker-native coding agents (terminal + browser + editor triad)
  • Privacy-sensitive or air-gapped environments where a self-hosted LLM via Ollama is required and task complexity is modest

Adoption Level Analysis

Small teams (<20 engineers): Possible for individual experimentation. Setup is a single Docker command. However, stalled development, no benchmark data, and the Docker socket security issue make it a poor choice even for small teams with any production intent. Better alternatives (OpenHands, OpenCode) are more actively maintained.

Medium orgs (20-200 engineers): Does not fit. No multi-user support, no API, no integrations with issue trackers or CI/CD. The project is effectively unmaintained.

Enterprise (200+ engineers): Does not fit. AGPL-3.0 licensing alone is a blocker for many enterprise legal teams, and the project lacks any enterprise-oriented features (RBAC, audit logging, SSO, team management).

Alternatives

  • OpenHands: Actively maintained, published benchmarks (77.6% SWE-bench), cloud + Kubernetes support, model-agnostic. Prefer when you want a production-grade Docker-native agent with community backing.
  • OpenCode: MIT-licensed, TUI + desktop, lighter footprint, active development. Prefer when you want a simpler self-hosted agent without Docker orchestration overhead.
  • Goose (Block): MCP-native, AAIF governance, strong community. Prefer when you want MCP ecosystem integration and a community-governed agent.
  • Codex (OpenAI): Managed SaaS, OpenAI-only, fire-and-forget async model. Prefer when you want a managed autonomous agent without infrastructure overhead.
  • E2B: Purpose-built Firecracker microVM sandbox, API-first. Prefer when you need a secure, programmatic sandbox for AI-generated code execution.

Evidence & Sources

Notes & Caveats

  • Stalled development: Last release v0.2.2 was April 2024. The project has not been updated to support newer model APIs (GPT-4o, Claude, Gemini) or modern agent patterns. This is a significant gap given how fast the space evolved in 2024-2026.
  • Docker socket security: The required --volume /var/run/docker.sock:/var/run/docker.sock mount grants the agent container effective root access to the host. This is a well-known Docker security anti-pattern. Purpose-built agent sandboxes (E2B, Microsandbox) avoid this via Firecracker or gVisor-based isolation.
  • AGPL-3.0 licensing: Any software that incorporates Codel’s code or runs it as a networked service must release all modifications under AGPL-3.0. This is a practical blocker for commercial use cases.
  • No benchmarks published: Unlike all major 2025-2026 autonomous coding agents, Codel has no published SWE-bench, HumanEval, or equivalent evaluation. Performance on complex tasks is unverifiable.
  • Local model quality: Ollama support was designed for llama2-era models. Performance on autonomous coding tasks with llama2-class models is known to be poor industry-wide. The path is architecturally available but not practically useful for complex work.
  • Historical value: Codel is a useful reference for understanding the early Docker-native autonomous agent architecture that OpenHands and others later built upon and refined.
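
The Docker socket caveat above can be demonstrated concretely. Any container holding the host socket can instruct the host daemon to start a new container with the host filesystem mounted, which is equivalent to root on the host. This is a generic illustration of the anti-pattern, not a command taken from Codel:

```shell
# Demonstration only: a container with the host's Docker socket can
# drive the HOST daemon and read arbitrary host files.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli \
  docker run --rm -v /:/host alpine ls /host/etc
```

Purpose-built sandboxes avoid this by isolating the agent behind a microVM (Firecracker) or a userspace kernel (gVisor) instead of sharing the host daemon.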

Related