
RTK (Rust Token Killer)


At a Glance

A single Rust binary CLI proxy that transparently intercepts AI coding agent shell commands and compresses their output before it reaches the LLM context window, reporting 60-90% token reduction across 100+ supported development commands.

Type: open-source
Pricing: open-source
License: MIT
Adoption fit: small, medium

What It Does

RTK is a Rust binary that sits between an AI coding agent and the terminal, intercepting shell command output and applying intelligent compression before the output is returned to the LLM context window. It installs as a single binary with no runtime dependencies and integrates via a PreToolUse hook that transparently rewrites shell commands — so git status becomes rtk git status without the developer or the agent changing their workflow.
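The hook wiring is roughly what installation generates in Claude Code's settings file. The sketch below assumes Claude Code's documented PreToolUse hook schema; the exact command RTK registers (shown here as a hypothetical rtk hook) may differ:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "rtk hook" }
        ]
      }
    ]
  }
}
```

Because the matcher targets the Bash tool, every shell command in every conversation passes through the hook, which is what makes the interception transparent.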

The core insight is that shell command output is a significant and underappreciated source of context bloat in agentic coding sessions. A single cargo test run with 200 test cases can generate 25,000 tokens of raw output. A 30-line git log listing commit hashes, emails, and timestamps consumes far more context than the information the agent actually needs. RTK’s four compression strategies — smart filtering (removes noise, comments, boilerplate), grouping (aggregates files by directory, errors by type), truncation (preserves relevant context while cutting redundancy), and deduplication (collapses repeated log lines with counts) — reduce this waste systematically.
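The deduplication strategy, collapsing repeated log lines into a single line with a count, can be sketched in a few lines of Rust. This is an illustration of the idea only; the dedup_lines function and its "(xN)" output format are assumptions, not RTK's actual implementation:

```rust
/// Collapse consecutive duplicate lines into one line with a repeat count.
/// Illustrative sketch of the deduplication strategy, not RTK's code.
fn dedup_lines(input: &str) -> String {
    let mut out: Vec<String> = Vec::new();
    let mut prev: Option<(&str, usize)> = None;
    for line in input.lines() {
        match prev {
            // Same line as before: just bump the counter.
            Some((p, n)) if p == line => prev = Some((p, n + 1)),
            // New line: flush the previous run, annotated with its count.
            Some((p, n)) => {
                out.push(if n > 1 { format!("{p} (x{n})") } else { p.to_string() });
                prev = Some((line, 1));
            }
            None => prev = Some((line, 1)),
        }
    }
    // Flush the final run.
    if let Some((p, n)) = prev {
        out.push(if n > 1 { format!("{p} (x{n})") } else { p.to_string() });
    }
    out.join("\n")
}
```

A repeated warning emitted 50 times by a test run collapses to one line plus a count, which preserves the signal (the warning occurred, many times) at a fraction of the tokens.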

Key Features

  • PreToolUse hook integration: For Claude Code, installs a global hook that rewrites Bash commands transparently across all conversations and subagents — zero workflow change required
  • 10 AI coding environments supported: Claude Code, GitHub Copilot (VS Code), Cursor, Gemini CLI, Codex/OpenAI, Windsurf, Cline/Roo Code, OpenCode, OpenClaw, and Mistral Vibe (planned)
  • 100+ supported commands: File operations (ls, find, grep, diff), Git (status, log, diff, add, commit, push, pull), testing (cargo, pytest, vitest, rspec, go test, playwright), cloud (AWS CLI, EC2, Lambda, S3, DynamoDB), containers (Docker, Kubernetes), linters (ESLint, TypeScript, ruff, golangci-lint, rubocop)
  • Sub-10ms overhead: Compiled Rust binary with no network I/O or LLM calls — processing happens locally before output is returned
  • Built-in analytics: rtk gain displays cumulative token savings with daily breakdowns; rtk discover identifies highest-savings commands; rtk session provides per-session statistics with JSON export
  • YAML-based configuration: Per-command compression behavior is configurable; “Tee” mode recovers the full output when compressed output causes agent errors
  • Non-interactive CI/CD mode: --auto-patch flag for use in automated pipelines
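Per-command tuning lives in a YAML file. The keys below are hypothetical (RTK's actual schema is not documented here) but illustrate the shape such a config takes:

```yaml
# Illustrative only: key names are assumptions, not RTK's documented schema.
commands:
  "git log":
    strategy: truncation
    max_lines: 20
  "cargo test":
    strategy: grouping   # aggregate failures by type
    tee: true            # keep full output recoverable if the agent errors
```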

Use Cases

  • Shell-heavy Claude Code sessions: Developers running frequent git, test, and build commands via the Bash tool who want to reduce context accumulation without changing workflow — RTK’s hook handles interception transparently
  • Cost-constrained teams on usage-based LLM pricing: Teams paying per-token who run multiple long agentic sessions per day; RTK’s savings compound via the “re-read tax” (a compressed command output gets re-read on every subsequent turn at reduced token cost)
  • Projects with verbose tooling: Monorepos with large test suites, AWS CLI-heavy infrastructure workflows, or Docker/Kubernetes deployments generating dense structured output
  • Multi-agent orchestration: Subagents spawned by a parent agent inherit the hook if installed globally (rtk init -g), achieving consistent compression across the full agent tree without per-subagent configuration

Adoption Level Analysis

Small teams (<20 engineers): Fits well. Installation is a single command (brew install rtk or cargo install) and the hook installs in under 30 seconds on Unix. No infrastructure dependency. Token savings are immediately visible via rtk gain. The zero-friction integration model suits individual developers and small teams who want cost reduction without process overhead.

Medium orgs (20–200 engineers): Fits with caveats. RTK works at the individual developer machine level — there is no centralized deployment model. Each developer installs independently. The MIT license and zero-infrastructure model make org-wide adoption operationally simple, but token savings are not pooled or centrally reported. For teams already using an LLM gateway (LiteLLM, Portkey), RTK provides complementary compression that the gateway cannot: per-command output filtering rather than model-level cost optimization.

Enterprise (200+ engineers): Does not fit as a primary token cost control mechanism. Enterprise token cost governance belongs at the LLM gateway layer with centralized budget enforcement, audit logs, and model routing policies. RTK provides no centralized policy, no access control, and no enterprise support. It may be useful as a supplementary developer tool, but should not be positioned as a cost governance solution at this scale.

Alternatives

  • Caveman: Output style constraint (system prompt); addresses model response verbosity rather than shell output. Prefer when your token cost is driven by AI response length rather than tool call output.
  • LiteLLM gateway: Model routing and budget enforcement at the gateway layer; addresses cost per token rather than tokens per command. Prefer when you need org-wide cost governance, model switching, or centralized audit.
  • LLMLingua (Microsoft): Algorithmic semantic compression of input prompts with formal accuracy guarantees. Prefer when you need input prompt compression with reproducible accuracy benchmarks.
  • Serena MCP: Code navigation tools that eliminate unnecessary file reads by the agent. Prefer when your context bloat comes from the agent reading whole files it doesn’t need.
  • Native tool optimization: Prefer Claude Code’s Read/Grep/Glob over Bash-wrapped equivalents. Prefer when sessions are already dominated by native tool use rather than shell commands.

Notes & Caveats

  • Native tool bypass is the primary limitation. Claude Code’s built-in Read, Grep, Glob, Edit, and Write tools do not pass through RTK’s Bash hook. These tools are often responsible for a substantial fraction of context accumulation in code-review and refactoring sessions. RTK’s savings are structurally bounded to Bash tool calls. For sessions that rely heavily on native tools, total context reduction may be significantly below the headline 60-90%.
  • All benchmark figures are author-generated. The percentage savings claims are calibrated to “medium-sized TypeScript/Rust projects” — no independent reproduction methodology, no variance statistics, no controlled test against a baseline. Multiple community members corroborate the directional result, but the specific numbers should be treated as illustrative estimates rather than reproducible benchmarks.
  • Windows support is degraded. On Windows, the hook cannot execute transparently — the tool falls back to CLAUDE.md instruction injection, which adds per-turn overhead and depends on the agent following instructions rather than deterministic interception. Unix (macOS, Linux) users get full transparent hook support.
  • No disclosed maintainer identity or organizational backing. The rtk-ai GitHub org has no affiliated organization, no named individual maintainers, and no disclosed funding. With 215 open issues and 228 open PRs against 632 total commits, maintenance pressure is visible. This is a risk factor for long-term reliability.
  • Silent failure mode. If the RTK binary becomes unavailable (PATH issue, OS upgrade, conflicting package), Bash commands fall through to uncompressed output without notification. There is no documented fallback alerting mechanism.
  • Tee mode and hook complexity. The “Tee: Full Output Recovery” mechanism — which recovers full output when RTK-compressed output caused agent failures — adds configuration complexity. Hook files modify shell initialization scripts and AI tool configuration, creating potential interference with existing configurations.
  • Compression correctness is unverified. RTK filters and truncates command output deterministically, but there is no documented test suite validating that filtered output preserves the information the AI agent needs. If a test failure detail is elided as “noise,” the agent may make an incorrect diagnosis. Users should audit rtk discover output to identify high-impact commands and verify compression behavior before relying on it for production workflows.
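The silent-failure caveat above can be mitigated with a user-side check, for instance in a shell profile or CI step. RTK documents no such alerting mechanism; this is a sketch of a workaround:

```shell
# Sketch: detect the silent-failure mode where the rtk binary has dropped
# off PATH (e.g. after an OS upgrade). RTK itself will not warn you.
if command -v rtk >/dev/null 2>&1; then
  echo "rtk active"
else
  echo "rtk missing: shell output will reach the agent uncompressed"
fi
```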
