Portkey AI

★ New
assess
Category: AI / ML · Type: vendor · License: MIT (gateway), Commercial (platform) · Pricing: freemium

What It Does

Portkey is an enterprise-grade AI gateway and control plane for managing LLM access in production. It provides a unified API for routing requests to 250+ LLM providers with built-in failover, load balancing, semantic caching, guardrails, observability, and cost management. The platform is positioned as the production-focused alternative to LiteLLM, emphasizing throughput, reliability, and enterprise governance.
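The unified-API idea can be sketched as follows: the request body stays OpenAI-compatible and only gateway headers select the upstream provider. This is a minimal illustration, not executed against the live API; the endpoint URL and `x-portkey-*` header names are assumptions based on common gateway conventions, so check Portkey's docs for the exact schema.

```python
# Hypothetical sketch: one OpenAI-shaped payload, routed to different
# providers purely by changing gateway headers. URL and header names
# are illustrative assumptions, not confirmed from Portkey docs.

GATEWAY_URL = "https://api.portkey.ai/v1/chat/completions"  # assumed endpoint

def build_gateway_request(provider: str, provider_key: str, messages: list) -> dict:
    """Return the HTTP request a client would send through the gateway."""
    return {
        "url": GATEWAY_URL,
        "headers": {
            "x-portkey-api-key": "PORTKEY_API_KEY",     # gateway auth (assumed header)
            "x-portkey-provider": provider,             # upstream selection (assumed)
            "Authorization": f"Bearer {provider_key}",  # upstream provider credential
        },
        "json": {"model": "gpt-4o", "messages": messages},
    }

msgs = [{"role": "user", "content": "Hello"}]
openai_req = build_gateway_request("openai", "OPENAI_KEY", msgs)
anthropic_req = build_gateway_request("anthropic", "ANTHROPIC_KEY", msgs)
# Identical body, different provider header: that is the portability story.
```

The point of the sketch is that switching providers touches no application code paths, only routing metadata, which is what makes gateway-level failover and cost routing possible.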

The company raised $15M Series A in February 2026 (led by Elevation Capital with Lightspeed participation), bringing total funding to $18M. In March 2026, Portkey open-sourced its full gateway under MIT license, including governance, observability, authentication, and cost controls. The platform reports processing 1T+ tokens and 120M+ AI requests daily across 24,000+ organizations.

Key Features

  • High-performance Go-based gateway: Compiled Go binary with vendor-reported single-digit-microsecond routing overhead, claimed to significantly outperform Python-based alternatives under high concurrency.
  • 250+ LLM provider support: OpenAI-compatible unified API with broad provider coverage.
  • Semantic caching: Caches semantically similar requests to reduce redundant LLM calls and cost.
  • Guardrails engine: Content moderation, PII detection, prompt injection filtering, and custom rule enforcement.
  • Enterprise governance: Role-based access control, audit trails, budget enforcement, and compliance controls.
  • MCP Gateway (new): Governs AI agent access to enterprise tools and systems via Model Context Protocol, with permissions, identity, and budget guardrails.
  • Observability: Built-in request logging, latency tracking, token usage analytics, and cost dashboards.
  • Open-source gateway (MIT): Full gateway including governance, observability, auth, and cost controls released as open source in March 2026.
  • Prompt management: Versioned prompt templates with A/B testing and rollback.
  • Failover and load balancing: Automatic provider switching on errors with configurable routing strategies.
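The failover and load-balancing features above are typically driven by a declarative routing config. The sketch below shows the general shape such configs take (expressed as Python dicts); the field names (`strategy`, `mode`, `targets`, `weight`) follow common gateway conventions and are illustrative, so consult the vendor docs for Portkey's exact schema.

```python
# Illustrative routing configs. Field names are assumptions modeled on
# common gateway config shapes, not a verified Portkey schema.

# Fallback: try targets in order, moving to the next on error.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "openai", "override_params": {"model": "gpt-4o"}},
        {"provider": "anthropic", "override_params": {"model": "claude-sonnet"}},
    ],
}

# Load balancing: split traffic across providers by weight.
loadbalance_config = {
    "strategy": {"mode": "loadbalance"},
    "targets": [
        {"provider": "openai", "weight": 0.7},
        {"provider": "anthropic", "weight": 0.3},
    ],
}
```

Keeping this logic in config rather than application code is what lets platform teams change routing policy (e.g. during a provider outage) without redeploying every consuming service.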

Use Cases

  • Enterprise LLM governance at scale: Fortune 500 organizations managing LLM access across hundreds of teams with compliance, audit, and budget requirements.
  • High-throughput AI applications: Production systems requiring sustained 1,000+ RPS with consistent low latency.
  • Agentic AI governance: Managing AI agent access to tools via MCP with identity, permissions, and budget controls.
  • Multi-provider cost optimization: Automatically routing requests to the cheapest provider with semantic caching to reduce redundant calls.

Adoption Level Analysis

Small teams (<20 engineers): Moderate fit. The free tier provides basic gateway functionality, but the full value proposition (governance, audit trails, team-level controls) is overkill for small teams. LiteLLM SDK or OpenRouter SaaS may be simpler to start with.

Medium orgs (20-200 engineers): Strong fit. The managed platform eliminates the operational overhead of self-hosting a gateway while providing the governance features that platform teams need. Starting at $49/month, the cost is reasonable for organizations managing meaningful LLM spend.

Enterprise (200+ engineers): Primary target market. The Go-based performance, enterprise governance features, SOC2-oriented architecture, and $18M funding provide the operational maturity and vendor stability that enterprises require. The open-source gateway option allows self-hosting for data-sensitive deployments.

Alternatives

| Alternative | Key Difference | Prefer when… |
| --- | --- | --- |
| LiteLLM | Python-based, larger community, more flexible SDK | You need maximum community support, Python ecosystem integration, or a lightweight SDK |
| OpenRouter | Fully managed SaaS, zero infrastructure, 5% markup | You want zero operational overhead and can tolerate third-party data routing |
| Vercel AI Gateway | Integrated with Vercel ecosystem | You are in the Vercel ecosystem and want native integration |
| Direct provider APIs | No intermediary | You use a single provider and want maximum simplicity |

Evidence & Sources

Notes & Caveats

  • Self-reported scale metrics. The “1T+ tokens daily” and “24,000+ organizations” claims are vendor-reported; no independent verification was found. The claimed $180M+ in annualized AI spend under management is a notable metric but likewise unverifiable.
  • Recently open-sourced. The gateway was fully open-sourced in March 2026. The community around the open-source version is nascent compared to LiteLLM’s 41k+ star, 1,300+ contributor ecosystem. Long-term community health is unproven.
  • Vendor lock-in risk on platform features. While the gateway is open-source, the full platform (prompt management, advanced analytics, team management) requires the commercial tier. Organizations that rely on platform features face switching costs.
  • $18M funding is moderate. Well-funded enough to be credible but not so large as to guarantee long-term survival. Portkey will need to demonstrate revenue growth to raise further rounds.
  • MCP Gateway is new and unproven. The agentic AI governance angle is forward-looking but lacks production case studies or independent validation.
  • Competitor content warning. Much of the “LiteLLM problems” content found online (particularly on DEV Community) appears to be authored by Portkey-adjacent accounts promoting Portkey as the alternative. The technical claims about Python GIL limitations and PostgreSQL bottlenecks are real, but the framing is competitive marketing, not independent analysis.