What It Does
each::labs is a pre-seed AI infrastructure startup that provides two main products: (1) an LLM router that aggregates 300+ AI models behind a single OpenAI-compatible API endpoint, and (2) klaw.sh, a kubectl-style CLI for AI agent fleet orchestration. The LLM router works by swapping the OpenAI SDK base URL to api.eachlabs.ai/v1 — existing code works with a one-line change. The router handles provider selection, auth profile rotation, and fallback chains automatically. klaw.sh is the company’s open-infrastructure play designed to drive adoption of the commercial router.
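The one-line change described above is just pointing an OpenAI-compatible client at `api.eachlabs.ai/v1` (with the official OpenAI Python SDK, `OpenAI(base_url="https://api.eachlabs.ai/v1")`). A dependency-free sketch of the equivalent raw request, assuming an illustrative model name and an assumed `EACHLABS_API_KEY` environment variable:

```python
import json
import os
import urllib.request

# The product page documents swapping the OpenAI SDK base URL to
# api.eachlabs.ai/v1; the same endpoint can be reached with the stdlib.
BASE_URL = "https://api.eachlabs.ai/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style POST /chat/completions request for the router."""
    payload = {
        "model": model,  # illustrative name, not a confirmed catalog entry
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            # EACHLABS_API_KEY is an assumed variable name for illustration.
            "Authorization": f"Bearer {os.environ.get('EACHLABS_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("gpt-4o-mini", "Hello")
# Send with urllib.request.urlopen(req) once a real key is configured.
```

Provider selection and fallback then happen server-side; the client never names a provider, only a model.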
The company was originally focused on unified access to generative media models (image, video, audio) and has expanded into LLM routing and agent orchestration infrastructure.
Key Features
- LLM Router: Single API endpoint for 300+ models across Anthropic, OpenAI, Google, Azure, and open-source providers
- OpenAI SDK compatibility: Drop-in replacement requiring only a base URL change
- Pay-per-request pricing: No monthly fees, transparent per-request billing
- Automatic provider selection: Router selects optimal provider per request
- klaw.sh: Source-available agent orchestration CLI written in Go
- Generative media platform: Unified access to image, video, and audio generation models
Use Cases
- Developers wanting multi-model access without managing multiple API keys: The router simplifies switching between models and providers for experimentation or cost optimization.
- klaw.sh users seeking a default LLM backend: The router is the path-of-least-resistance model provider for klaw.sh agent deployments.
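Because every model sits behind one endpoint and one key, switching models for experimentation or cost optimization reduces to iterating over model names rather than migrating providers. A generic client-side sketch of that idea (the model names and the stubbed call are hypothetical, and the router's own fallback chains already do this server-side):

```python
# Try a cheaper model first and fall back on failure; with a unified
# endpoint this is a loop over model names, not a provider migration.
CANDIDATES = ["cheap-model", "mid-model", "frontier-model"]  # illustrative

def first_success(call, models):
    """Return (model, result) for the first model whose call succeeds."""
    last_err = None
    for model in models:
        try:
            return model, call(model)
        except Exception as err:  # e.g. rate limit or model unavailable
            last_err = err
    raise RuntimeError("all models failed") from last_err

# Stub standing in for a real chat-completion call:
def fake_call(model):
    if model == "cheap-model":
        raise TimeoutError("simulated outage")
    return f"response from {model}"

model, result = first_success(fake_call, CANDIDATES)
```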
Adoption Level Analysis
Small teams (<20 engineers): Reasonable fit. Pay-per-request pricing with no monthly minimums is accessible. The OpenAI SDK compatibility lowers the integration barrier. However, routing your LLM traffic through a pre-seed startup’s infrastructure adds latency and introduces a dependency on a company that may not exist in 12 months.
Medium orgs (20-200 engineers): Risky. Medium organizations typically need SLAs, uptime guarantees, and vendor stability assurances that a 9-person pre-seed startup cannot provide. The router adds a network hop and a single point of failure. Most medium orgs would prefer direct provider integrations or a more established router like OpenRouter.
Enterprise (200+ engineers): Does not fit. No SOC 2, no enterprise SLA, no data processing agreements published. Enterprises route LLM traffic through their own API gateways or use established providers directly.
Alternatives
| Alternative | Key Difference | Prefer when… |
|---|---|---|
| OpenRouter | Established LLM routing service with broader model catalog and community trust | You need a production-grade multi-model router with more established operational history |
| Direct provider APIs | No intermediary, lowest latency, direct SLA from Anthropic/OpenAI/Google | You have a primary model provider and do not need frequent model switching |
| Azure OpenAI Service | Enterprise-grade, SOC 2, HIPAA, with model deployment in your own Azure tenant | You need enterprise compliance, data residency, and contractual SLAs |
| LiteLLM | Open-source Python proxy for 100+ LLMs with load balancing and fallback | You want self-hosted routing with full control and no vendor dependency |
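For comparison, the self-hosted LiteLLM alternative is driven by a YAML model list. A minimal sketch following LiteLLM's documented config format (model names and key references are illustrative):

```yaml
model_list:
  - model_name: gpt-4o              # alias clients request
    litellm_params:
      model: openai/gpt-4o          # provider-prefixed upstream model
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
```

Running `litellm --config config.yaml` exposes an OpenAI-compatible endpoint locally, trading each::labs' managed routing for full control and no vendor dependency.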
Evidence & Sources
- Eachlabs LLM Router Product Page
- Eachlabs Pre-Seed Funding Announcement
- ENA Venture Capital Portfolio Announcement
- klaw.sh GitHub Repository
Notes & Caveats
- Pre-seed stage, 9 employees: This is an extremely early-stage company. The funding amount is undisclosed, led by Right Side Capital (a spray-and-pray micro-VC fund) with ENA VC and Treeo VC. This funding profile suggests a small round (<$2M likely). Building infrastructure dependency on this company carries significant continuity risk.
- No published SLA or uptime history: No status page, no uptime commitments, no SLA documentation found.
- Data routing concerns: All LLM requests routed through api.eachlabs.ai pass through each::labs’ infrastructure. No published data handling policy, encryption-at-rest details, or SOC 2 attestation were found. For sensitive workloads, this is unacceptable.
- “300+ models” is inflated: The count aggregates every model variant across every provider. This is standard marketing inflation for model aggregator services.
- Pivot risk: The company started as a generative media platform and expanded into LLM routing and agent orchestration. This breadth from a 9-person team suggests the company is still searching for product-market fit.
- klaw.sh as growth lever: klaw.sh appears designed to funnel users toward the each::labs router. While direct provider integrations exist, the default setup experience promotes the router.