Untether — Telegram Bridge for AI Coding Agents
Little Bear Apps · April 11, 2026 · open-source-project · medium credibility
Source: github.com/littlebearapps/untether | Author: Little Bear Apps | Published: 2026-02-07 | Category: open-source-project | Credibility: medium
Executive Summary
- Untether is an MIT-licensed Python daemon that bridges six CLI coding agents (Claude Code, Codex, OpenCode, Pi, Gemini CLI, Amp) to a personal Telegram bot. It runs on your local machine or server and lets you send tasks by voice or text from your phone, stream live tool-call and file-change progress back to Telegram, and approve or deny agent actions via inline keyboard buttons — without opening a terminal.
- The project was created in February 2026 by Little Bear Apps and is a fork of banteg’s `takopi` (originally a Codex-only Telegram bridge). It has grown to v0.35.0 in under two months, suggesting active development velocity. The PyPI package installs via `uv tool install untether` and requires Python 3.12+. At review time it has 31 GitHub stars, which is modest but consistent with a niche developer tool.
- Untether’s value proposition is unambiguously developer-focused: it solves the “chained to a terminal” problem for developers who run long-running coding agent sessions and want to monitor and steer them from a phone. It is not an AI platform, not an agent framework, and not a hosted service; it is a local bridge daemon with Telegram as the transport layer.
Critical Analysis
Claim: Works with 6 coding agents — Claude Code, Codex, OpenCode, Pi, Gemini CLI, Amp
- Evidence quality: open-source code (directly verifiable)
- Assessment: Confirmed. The source tree contains dedicated runner modules at `src/untether/runners/` for each of the six agents: `claude.py`, `codex.py`, `opencode.py`, `pi.py`, `gemini.py`, `amp.py`. Each is registered as a `project.entry-points."untether.engine_backends"` plugin in `pyproject.toml`, making the engine system properly extensible. The feature compatibility matrix in the README is granular and honest; for example, interactive approvals, plan mode, and diff preview are listed as Claude Code-only, not falsely claimed for all engines.
- Counter-argument: Integration depth varies substantially by engine. Claude Code integration is clearly the most mature (interactive permissions, plan mode, ask mode, diff preview, progressive cooldown, subscription usage tracking). Amp has the least feature parity: no interactive permissions, no cost tracking, no cross-environment resume. Users primarily running Codex, OpenCode, Pi, or Gemini CLI will get basic streaming and session resume but not the richer approval workflows.
Claim: “Stream progress live” — watch tool calls and file changes in real time
- Evidence quality: open-source code + screenshots
- Assessment: The architecture supports this. `src/untether/progress.py` and `src/untether/presenter.py` handle event-to-Telegram rendering. Telegram’s Bot API supports inline message editing, which Untether uses to stream live updates into a single evolving message rather than flooding the chat with separate messages. This is the correct UX choice for long-running agent sessions. The `runner_bridge.py` / `runner.py` / `backends.py` stack handles process management and stdout/stderr capture.
- Counter-argument: Telegram’s Bot API has a rate limit on message edit frequency (roughly 1 edit/second per message). For high-throughput agents generating rapid tool calls, updates may be throttled or batched in ways that reduce real-time granularity. This is a platform constraint, not a code deficiency, but it means “real-time” has a ceiling.
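The edit rate limit is typically handled client-side by coalescing rapid updates into at most one edit per interval, keeping only the latest snapshot. This is a generic sketch of that pattern, not Untether’s actual implementation (class and parameter names are hypothetical):

```python
import time

class ThrottledEditor:
    """Coalesce rapid progress updates into at most one message edit per
    interval; Telegram throttles edits at roughly 1/second per message."""

    def __init__(self, send_edit, interval=1.0, clock=time.monotonic):
        self.send_edit = send_edit   # callable(text), e.g. wraps editMessageText
        self.interval = interval
        self.clock = clock
        self._last_sent = float("-inf")
        self._pending = None

    def update(self, text):
        """Send immediately if the interval has elapsed, else buffer."""
        now = self.clock()
        if now - self._last_sent >= self.interval:
            self.send_edit(text)
            self._last_sent = now
            self._pending = None
        else:
            self._pending = text     # keep only the newest snapshot

    def flush(self):
        """Send any buffered snapshot, e.g. when the agent run finishes."""
        if self._pending is not None:
            self.send_edit(self._pending)
            self._pending = None
            self._last_sent = self.clock()
```

Dropping intermediate snapshots (rather than queueing them all) is what makes the single-evolving-message UX survive high-throughput tool-call bursts.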
Claim: “Approve plan transitions and answer clarifying questions with inline option buttons”
- Evidence quality: open-source code (verifiable in claude.py runner)
- Assessment: This feature is Claude Code-specific and depends on Claude Code’s structured output (plan mode transitions, ask-user events). The mechanism is sound: the runner captures Claude Code’s permission request events and posts inline keyboard buttons to Telegram; the user’s button tap is forwarded back as a response. The “progressive cooldown” detail — where “Pause & Outline Plan” triggers increasing auto-approve delays — is a thoughtful safety mechanism to prevent runaway agent behavior.
- Counter-argument: This requires Claude Code’s permission model to emit structured events that Untether can intercept. If Claude Code’s internal event format changes (it has no published stable API), this feature could silently break. The project acknowledges it “forks” takopi; tight coupling to undocumented agent internals is an ongoing maintenance liability.
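The review only states that the cooldown delays increase with each auto-approval; the base delay, growth factor, and cap below are illustrative assumptions, not values from the codebase:

```python
from itertools import islice

def cooldown_delays(base=5.0, factor=2.0, cap=300.0):
    """Yield progressively longer auto-approve delays (seconds).

    Each successive auto-approval waits longer, giving the user a
    growing window to intervene before runaway agent behavior.
    """
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor

print(list(islice(cooldown_delays(), 5)))  # → [5.0, 10.0, 20.0, 40.0, 80.0]
```

The cap matters: without it, a long session would eventually stall on multi-hour waits rather than degrading gracefully.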
Claim: Voice notes transcribed via configurable Whisper-compatible endpoint
- Evidence quality: documentation + code (src/untether contains voice handling)
- Assessment: The feature is real and architecturally sound. Untether accepts Telegram voice messages and sends them to a Whisper-compatible transcription endpoint (configurable, not hard-coded to OpenAI). This allows self-hosted Whisper (via faster-whisper, whisper.cpp, or local API servers) or third-party APIs. Using the OpenAI SDK’s audio transcription interface (`openai>=2.15.0` is listed as a dependency) gives flexibility.
- Counter-argument: Voice transcription quality is entirely dependent on the configured endpoint. The dependency on the `openai` SDK for voice (even when not using OpenAI for coding) may surprise users who run fully offline setups. The latency from voice note upload → transcription API round-trip → agent task start could be 3-8 seconds on typical connections.
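A configurable Whisper-compatible endpoint usually means pointing the OpenAI SDK at a different `base_url`. This sketch assumes a hypothetical config shape (the keys are not untether’s actual schema) and defers the SDK import so the pure config logic stands alone:

```python
def resolve_transcription(cfg: dict) -> tuple[str, str]:
    """Pick (base_url, model) from parsed config, defaulting to the
    OpenAI-compatible shape. Keys here are hypothetical examples."""
    t = cfg.get("transcription", {})
    return (t.get("base_url", "https://api.openai.com/v1"),
            t.get("model", "whisper-1"))

def transcribe_voice_note(audio_path: str, cfg: dict) -> str:
    """Send a downloaded voice note to any Whisper-compatible endpoint."""
    base_url, model = resolve_transcription(cfg)
    from openai import OpenAI  # deferred: endpoint need not be OpenAI's
    client = OpenAI(base_url=base_url,
                    api_key=cfg.get("transcription", {}).get("api_key", "local"))
    with open(audio_path, "rb") as audio:
        return client.audio.transcriptions.create(model=model, file=audio).text
```

Pointing `base_url` at, say, a local faster-whisper server keeps voice handling fully self-hosted while reusing the same client code path.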
Claim: Cost and usage tracking, per-run and daily budgets
- Evidence quality: open-source code (`src/untether/cost_tracker.py`)
- Assessment: A `cost_tracker.py` module exists and is dedicated to this feature. Budget enforcement (`max_cost_per_run`, `max_cost_per_day`) and `/usage` command support are documented. For Claude Code, this uses subscription usage APIs. For other engines, the README honestly marks this as “token count only — no USD cost reporting” (footnote ³ in the compatibility matrix), which is an accurate qualification.
- Counter-argument: Cost tracking for AI coding agents is inherently imprecise; token counting misses cached tokens, prompt engineering overhead, and model-specific discounting. Users relying on Untether’s budget controls for financial guardrails should treat them as advisory, not authoritative.
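The shape of per-run plus per-day budget enforcement can be sketched as follows. This mirrors the `max_cost_per_run` / `max_cost_per_day` settings the review cites, but the class and method names are hypothetical, not Untether’s API:

```python
class BudgetGuard:
    """Advisory budget enforcement: per-run and per-day USD caps.

    Advisory only -- token-derived cost estimates miss cached tokens
    and model-specific discounting, as noted in the review.
    """

    def __init__(self, max_cost_per_run: float, max_cost_per_day: float):
        self.max_cost_per_run = max_cost_per_run
        self.max_cost_per_day = max_cost_per_day
        self.run_total = 0.0
        self.day_total = 0.0

    def start_run(self):
        """Reset the per-run counter; the daily counter keeps accruing."""
        self.run_total = 0.0

    def record(self, cost_usd: float) -> bool:
        """Add a cost sample; return False once either budget is exceeded."""
        self.run_total += cost_usd
        self.day_total += cost_usd
        return (self.run_total <= self.max_cost_per_run
                and self.day_total <= self.max_cost_per_day)
```

The key design point is that the two counters reset on different schedules, so a single cheap run can still be refused late in the day once the daily cap is spent.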
Architecture and Security Assessment
- Attack surface: Untether runs a process with filesystem access to your repos and executes CLI agents with shell command capabilities. The Telegram bot token in `~/.untether/untether.toml` is the sole authentication factor; anyone with the bot token can send commands to your agents. The README explicitly warns never to commit `untether.toml`.
- Telegram as auth: This is a pragmatic but meaningful security boundary. Telegram encrypts bot traffic in transit (Bot API messages are not end-to-end encrypted), and the bot only responds to the configured `chat_id`. This is not enterprise-grade auth (no MFA, no audit log, no RBAC), but it is reasonable for a personal developer tool.
- Process isolation: None. Untether runs agents with full user permissions on the host machine. There is no sandboxing layer between the agent and the filesystem. This is consistent with how Claude Code and other CLI agents work locally, but it means a compromised Telegram account = compromised development machine.
- Dependency hygiene: `bandit` (SAST) and `pip-audit` (dependency CVE scanning) are in the dev dependency group and run in CI, which is a positive signal for a project of this maturity.
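The single-factor `chat_id` boundary described above amounts to an allowlist check on every incoming update. A minimal sketch (not Untether’s code; the dict shape follows Telegram’s Bot API update JSON):

```python
def is_authorized(update: dict, allowed_chat_id: int) -> bool:
    """Accept only messages from the one configured chat.

    Everything else -- including strangers who discover the bot's
    handle -- is silently dropped. Note this does nothing if the bot
    token itself leaks, since the token holder can read the chat_id
    from incoming updates.
    """
    chat = update.get("message", {}).get("chat", {})
    return chat.get("id") == allowed_chat_id
```

The docstring’s caveat is the crux of the review’s security assessment: the check gates *senders*, while the bot token remains the real root of trust.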
Credibility Assessment
- Author background: Little Bear Apps is a small indie development studio. The project is a fork of `takopi` by banteg (an Ethereum developer known for Yearn Finance tooling), which provides some technical lineage. The maintainers have released v0.35.0 in ~2 months, indicating genuine active development.
- Publication bias: The primary source is a GitHub README, which is project marketing, not independent analysis. The feature compatibility matrix is notably honest (clearly marking gaps), which raises credibility above typical project marketing.
- Ecosystem fit: The project nests cleanly into the emerging “AI coding agent remote control” pattern. It serves a real workflow gap not addressed by any of the six supported agents themselves (none of them offer mobile remote control natively).
- Verdict: medium — The project is real, functional, and actively maintained with honest documentation. Low star count (31) reflects niche fit, not lack of quality. The main risks are tight coupling to undocumented agent internals and single-person-team bus factor.