# Tekai

> Curated technology intelligence for engineering leaders: tools, frameworks, and patterns rated on a technology radar with evidence-based reviews.

## Key pages

- [Technology Catalog](https://tekai.dev/catalog): browse and filter reviewed tools, frameworks, and patterns.
- [Technology Radar](https://tekai.dev/radar): interactive Adopt / Trial / Assess / Hold map.
- [References](https://tekai.dev/references): critical reviews of external articles and vendor claims.
- [About](https://tekai.dev/about): methodology, radar ring definitions, and editorial process.

## Adopt

Technologies we recommend using today. Proven at scale with strong team familiarity.

- [Agent Skills Specification](https://tekai.dev/catalog/agent-skills-specification): An open standard for packaging reusable procedural knowledge as markdown files that AI coding agents can discover, load, and use across 30+ tools.
- [Aider](https://tekai.dev/catalog/aider): Open-source terminal AI coding agent that uses a tree-sitter repo map and multi-mode diff engine to pair-program with LLMs across 100+ languages, with first-class git integration and support for virtually every LLM provider.
- [Anthropic](https://tekai.dev/catalog/anthropic): AI safety company behind the Claude model family — including Claude Opus, Sonnet, Haiku, and the restricted Claude Mythos Preview — with $380B valuation, $14B ARR, and Constitutional AI as its core alignment technique.
- [CAP Theorem](https://tekai.dev/catalog/cap-theorem): Proven theorem: a distributed data store can guarantee at most two of three properties — Consistency, Availability, Partition Tolerance. Since partition tolerance is always required in practice, the true design trade-off is C vs. A during network partitions.
- [Conway's Law](https://tekai.dev/catalog/conways-law): Empirically supported organizational principle stating that software systems inevitably mirror the communication structure of the teams that build them; the inverse maneuver (restructuring teams to achieve a target architecture) is widely used in microservices and platform engineering.
- [Google DeepMind](https://tekai.dev/catalog/google-deepmind): Google's combined AI research and products division behind the Gemini model family, with Gemini 3.1 Pro ranking #1 on 12 of 18 tracked benchmarks in 2026 and 1M-token context windows available via Gemini API and Google Cloud Vertex AI.
- [Hugging Face Transformers](https://tekai.dev/catalog/huggingface-transformers): The de facto standard Python library for accessing, fine-tuning, and deploying transformer-based models across NLP, vision, audio, and multimodal tasks, with unified APIs for 500,000+ pretrained models on Hugging Face Hub.
- [Mechanical Sympathy](https://tekai.dev/catalog/mechanical-sympathy): A software design philosophy, coined by Martin Thompson from motorsport, that aligns program behavior with underlying hardware constraints — CPU cache hierarchy, memory access patterns, and concurrency primitives — to achieve lower latency and higher throughput without additional hardware.
- [OpenAI](https://tekai.dev/catalog/openai): Frontier AI lab behind GPT-5, o3, DALL-E, Sora, and Whisper, operating ChatGPT (the world's leading AI consumer product) alongside an enterprise API platform with $20B+ annual revenue and an $852B valuation.
- [Retrieval-Augmented Generation (RAG)](https://tekai.dev/catalog/retrieval-augmented-generation): An LLM inference pattern that injects relevant documents retrieved from an external corpus into the model's context at query time, grounding responses in up-to-date or domain-specific information without retraining.
- [Software Engineering Principles (Collection)](https://tekai.dev/catalog/software-engineering-principles): The canonical collection of named software engineering laws, heuristics, and principles — from Brooks's Law and Conway's Law to YAGNI, DRY, Hyrum's Law, and the Testing Pyramid — that form the shared vocabulary of software practitioners for reasoning about complexity, quality, and team dynamics.
- [TanStack Query](https://tekai.dev/catalog/tanstack-query): Async server-state management and data-fetching library for React (and other frameworks) with automatic caching, background refresh, and optimistic updates; ~12–16M weekly npm downloads.
- [TanStack Table](https://tekai.dev/catalog/tanstack-table): Headless, framework-agnostic table and data-grid library providing sorting, filtering, pagination, and virtualization logic without any UI — you own the markup and styles.
- [Technical Debt](https://tekai.dev/catalog/technical-debt): Ward Cunningham's 1992 financial metaphor for the cost accumulated when expedient code shortcuts trade short-term delivery speed for long-term maintenance burden; the concept has expanded into a multi-dimensional framework covering code, design, architecture, test, and documentation debt.
- [Tree-sitter](https://tekai.dev/catalog/tree-sitter): Incremental parser generator and parsing library that builds concrete syntax trees for source files and updates them efficiently on edit, supporting 100+ programming languages and used by Neovim, GitHub, and AI coding tools.
- [vLLM](https://tekai.dev/catalog/vllm): High-throughput open-source LLM inference and serving engine using PagedAttention for memory-efficient KV cache management, achieving 2–24x throughput improvements over naive serving approaches.

## Trial

Technologies worth pursuing. Understand how they will work in your context; contain the risk to one or two projects.
- [Agent Harness Pattern](https://tekai.dev/catalog/agent-harness-pattern): Architectural pattern where all non-model code surrounding an LLM (planning, tools, sub-agents, context management) is packaged as a reusable harness.
- [AGENTS.md](https://tekai.dev/catalog/agents-md): An open cross-platform specification for a repository-root Markdown file that provides AI coding agents with project context, build steps, conventions, and task instructions — stewarded by the Linux Foundation under the Agentic AI Foundation.
- [Apple MLX](https://tekai.dev/catalog/apple-mlx): Apple's open-source array framework for machine learning on Apple Silicon, providing unified CPU/GPU memory semantics, NumPy-compatible APIs, and multi-language support (Python, Swift, C, C++) for on-device training and inference.
- [Augment Code](https://tekai.dev/catalog/augment-code): AI coding agent platform for professional software teams, built around a proprietary Context Engine that semantically indexes entire codebases to power IDE agents, code review, and CLI tooling.
- [Backstage](https://tekai.dev/catalog/backstage): Open-source CNCF framework by Spotify for building internal developer portals with a software catalog, service templates, TechDocs, and an extensive plugin ecosystem.
- [Cedar Policy Language](https://tekai.dev/catalog/cedar-policy-language): A declarative authorization policy language by Amazon that expresses fine-grained access control as human-readable permit/forbid statements with formal verification.
- [Chaos Engineering](https://tekai.dev/catalog/chaos-engineering): Discipline of deliberately injecting failures and faults into production or staging systems to expose hidden weaknesses before they cause unplanned outages, originated by Netflix's Chaos Monkey in 2011.
- [ChromaDB](https://tekai.dev/catalog/chromadb): Open-source AI-native vector database designed for prototyping and RAG applications, with a 2025 Rust-core rewrite adding hybrid search and a managed cloud offering; widely used but not designed for 50M+ vector production workloads.
- [Claude Code](https://tekai.dev/catalog/anthropic-claude-code): Anthropic's terminal-based AI coding agent with file access, command execution, layered memory, and MCP integration.
- [Clerk](https://tekai.dev/catalog/clerk): Managed authentication platform with drop-in React/Next.js UI components for sign-in, user management, and multi-tenant organizations.
- [Cline](https://tekai.dev/catalog/cline): Open-source VS Code extension providing an autonomous AI coding agent with a Plan/Act workflow, multi-provider LLM support, browser automation, and MCP integration.
- [Cloudflare AI Gateway](https://tekai.dev/catalog/cloudflare-ai-gateway): Managed LLM proxy on Cloudflare's edge network providing unified observability, caching, rate limiting, and multi-provider routing with a generous free tier and zero infrastructure overhead.
- [Codex CLI](https://tekai.dev/catalog/codex-cli): OpenAI's open-source terminal AI coding agent with OS-level sandboxing, subagent delegation, and AGENTS.md support.
- [DeepEval](https://tekai.dev/catalog/deepeval): Open-source Apache-2.0 LLM evaluation framework by Confident AI with 50+ metrics spanning RAG, agents, multi-turn conversations, safety, and multimodal evaluation; pytest-native for CI/CD deployment gates.
- [DORA Metrics](https://tekai.dev/catalog/dora-metrics): Four evidence-based software delivery performance metrics — Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Time to Restore Service — derived from the DevOps Research and Assessment program's multi-year surveys of 33,000+ practitioners.
- [E2B](https://tekai.dev/catalog/e2b): Managed cloud platform providing ephemeral Firecracker microVM sandboxes for AI agent code execution with sub-200ms cold starts.
- [Emdash](https://tekai.dev/catalog/emdash): An open-source Agentic Development Environment (ADE) that runs multiple coding agents concurrently in isolated Git worktrees, with ticket integration, diff review, and PR management across 23+ AI agent providers.
- [Git Worktrees](https://tekai.dev/catalog/git-worktrees): A built-in Git feature (since v2.5) that allows multiple working directories to be checked out from a single repository simultaneously, enabling parallel branch development and conflict-free multi-agent AI coding workflows.
- [GitOps](https://tekai.dev/catalog/gitops): Operational pattern using Git repositories as the single source of truth for declarative infrastructure and application state, with automated reconciliation loops that continuously enforce desired state.
- [Goose](https://tekai.dev/catalog/block-goose): An open-source, MCP-native on-machine AI agent by Block that autonomously executes multi-step development workflows with any LLM provider.
- [gptme](https://tekai.dev/catalog/gptme): Personal AI agent CLI that gives LLMs direct access to the terminal, file system, browser, and desktop. Provider-agnostic (Claude, GPT, Gemini, local llama.cpp) with a rich built-in toolset, plugin extensibility, MCP support, and a persistent autonomous agent scaffold.
- [Graphite](https://tekai.dev/catalog/graphite): Developer productivity platform for stacked pull requests on GitHub, with a CLI, merge queue, and AI-assisted code review.
- [Harness](https://tekai.dev/catalog/harness): Commercial DevOps platform-of-platforms with 14+ modules covering CI/CD, GitOps, chaos engineering, feature flags, cloud cost management, and security testing; $5.5B valuation, Series E funded.
- [Haystack (deepset)](https://tekai.dev/catalog/haystack-deepset): Open-source Python AI orchestration framework by deepset for building production-ready LLM applications, RAG pipelines, and agent workflows with modular pipeline architecture; 24k+ GitHub stars with enterprise customers including Airbus, Netflix, and NVIDIA.
- [Impeccable](https://tekai.dev/catalog/impeccable): Open-source Agent Skills package providing 20 design commands, 7 reference domains, and anti-patterns to improve AI-generated frontend UI quality across Claude Code, Cursor, Gemini CLI, and 7 other coding agents.
- [Inspect AI](https://tekai.dev/catalog/inspect-ai): An open-source LLM evaluation framework by the UK AI Safety Institute with 100+ pre-built evals for safety, coding, reasoning, and agent assessment.
- [LangChain](https://tekai.dev/catalog/langchain): Open-source framework and commercial platform for building LLM-powered applications and stateful agent workflows.
- [Langfuse](https://tekai.dev/catalog/langfuse): Open-source LLM engineering platform (MIT-licensed, 21k+ GitHub stars) covering observability traces, evaluation, prompt management, and datasets; self-hostable in minutes; acquired by ClickHouse in January 2026.
- [LangGraph](https://tekai.dev/catalog/langgraph): A graph-based runtime for building stateful, multi-step AI agent workflows with persistence, checkpointing, and human-in-the-loop capabilities.
- [LibreChat](https://tekai.dev/catalog/librechat): A self-hosted AI chat platform providing a unified ChatGPT-like interface for multiple LLM providers with MCP integration, agents, and code execution.
- [LiveCodeBench](https://tekai.dev/catalog/livecodebench): Contamination-resistant LLM coding benchmark that continuously collects new competitive programming problems from LeetCode, AtCoder, and Codeforces, with versions tracking model performance over time.
- [LlamaIndex](https://tekai.dev/catalog/llamaindex): Open-source MIT-licensed data framework for building RAG and document agent applications on top of LLMs, with 38k+ GitHub stars, built-in evaluation utilities, and a commercial cloud platform; $19M Series A in March 2025.
- [LLM Gateway Pattern](https://tekai.dev/catalog/llm-gateway-pattern): Proxy layer between applications and LLM providers that centralizes auth, cost tracking, rate limiting, failover, and observability.
- [LLM Wiki Pattern](https://tekai.dev/catalog/llm-wiki-pattern): A knowledge management pattern where an LLM agent incrementally compiles raw source documents into a persistent, interlinked markdown wiki rather than retrieving raw documents at query time.
- [Milvus](https://tekai.dev/catalog/milvus): Apache-2.0 distributed vector database for billion-scale similarity search, built for cloud-native Kubernetes deployment with GPU acceleration, multiple index types (HNSW, DiskANN, IVF), and sparse+dense hybrid search; the leading open-source vector database at 44k+ GitHub stars.
- [mlx-lm](https://tekai.dev/catalog/mlx-lm): Apple Silicon LLM inference, fine-tuning, and quantization package built on MLX, supporting thousands of Hugging Face Hub models with LoRA/QLoRA, 4-bit quantization, and an OpenAI-compatible server for local Mac deployment.
- [Model Context Protocol (MCP)](https://tekai.dev/catalog/model-context-protocol): An open standard by Anthropic that defines how AI assistants connect to external tools, data sources, and services via a JSON-RPC protocol.
- [Neovate Code](https://tekai.dev/catalog/neovate-code): Open-source CLI coding agent from Ant Group with a Vite-style plugin architecture, 30+ LLM providers, MCP integration, sub-agent orchestration, and headless mode.
- [Obsidian](https://tekai.dev/catalog/obsidian): A local-first markdown note-taking and personal knowledge management application that stores all notes as plain text files with bi-directional linking, a graph view, and an extensive plugin ecosystem.
- [Ollama](https://tekai.dev/catalog/ollama): An open-source local LLM inference engine that simplifies downloading, running, and managing large language models on personal hardware with a single command.
- [Open WebUI](https://tekai.dev/catalog/open-webui): A self-hosted, provider-agnostic web interface for LLMs with built-in RAG, MCP support, RBAC, and Ollama integration for local model inference.
- [OpenHands](https://tekai.dev/catalog/openhands): An open-source platform for autonomous AI coding agents with Docker-sandboxed execution, multi-model support, and a Python SDK for agent orchestration.
- [OpenLLMetry](https://tekai.dev/catalog/openllmetry): Open-source OpenTelemetry-based instrumentation library for LLM applications, providing standardized traces, metrics, and logs across 16+ LLM providers, 7 vector databases, and 10 AI frameworks.
- [Progressive Delivery](https://tekai.dev/catalog/progressive-delivery): Deployment pattern that gradually shifts traffic to new software versions using canary releases, blue-green switches, or feature flags, enabling measurable risk reduction with automated rollback on detected degradation.
- [RAGAS](https://tekai.dev/catalog/ragas): Open-source Apache-2.0 evaluation framework for RAG pipelines and LLM applications by ExplodingGradients (YC W24), providing reference-free metrics including Faithfulness, Answer Relevancy, Context Precision, and Context Recall.
- [Ralph Loop Pattern](https://tekai.dev/catalog/ralph-loop-pattern): Autonomous agent pattern that runs an AI coding agent in a repeating bash loop with fresh context per iteration, driven by a structured task list.
- [RTK (Rust Token Killer)](https://tekai.dev/catalog/rtk): A single Rust binary CLI proxy that transparently intercepts AI coding agent shell commands and compresses their output before it reaches the LLM context window, reporting 60–90% token reduction across 100+ supported development commands.
- [Spec-Driven Development](https://tekai.dev/catalog/spec-driven-development): Development pattern where structured specification documents are written before code and serve as the primary input for AI coding agents.
- [Stacked Diffs](https://tekai.dev/catalog/stacked-diffs): Code review workflow where large changes are split into a chain of small, dependent PRs that are reviewed and landed sequentially.
- [Supabase](https://tekai.dev/catalog/supabase-platform): Open-source Firebase alternative providing managed PostgreSQL, authentication, storage, and serverless Edge Functions as a Backend-as-a-Service; 4M+ developers, $70M ARR, $5B valuation (October 2025).
- [Superpowers](https://tekai.dev/catalog/superpowers): MIT-licensed cross-platform Agent Skills framework by Jesse Vincent (Prime Radiant) that enforces a seven-phase software development methodology — brainstorm, plan, TDD, subagent dispatch, code review, merge — across Claude Code, Codex, Cursor, Gemini CLI, and 6+ other coding agents.
- [TanStack Form](https://tekai.dev/catalog/tanstack-form): Headless, type-safe form state management library for React, Vue, Angular, Solid, Svelte, and Lit — providing validation, async state, and granular re-render control without prescribing UI.
- [TanStack Router](https://tekai.dev/catalog/tanstack-router): Fully type-safe client-side router for React (and Solid) with first-class search-parameter handling, nested layouts, and built-in data loading; positioned as a type-safe alternative to React Router.
- [Tauri](https://tekai.dev/catalog/tauri): Open-source Rust-based framework for building cross-platform desktop and mobile applications using web frontends, producing binaries that are 96% smaller and use 50% less RAM than Electron equivalents; production-ready with Tauri 2.x supporting Windows, macOS, Linux, iOS, and Android.
- [Trigger.dev](https://tekai.dev/catalog/trigger-dev): Open-source Apache 2.0 TypeScript background jobs and AI workflow platform with durable execution, no-timeout container-based runs, and a managed cloud offering; 14.6k+ GitHub stars, $20.3M raised.
- [Warp](https://tekai.dev/catalog/warp): AI-native terminal and cloud agent platform used by 700k+ developers, combining a GPU-accelerated modern terminal with cloud-hosted autonomous coding agents (Oz) and enterprise-grade SSO and zero-data-retention controls.
- [Windsurf](https://tekai.dev/catalog/windsurf): AI-native IDE (formerly Codeium) featuring the Cascade agentic AI engine with deep codebase understanding, multi-file edits, and terminal execution; acquired by Cognition AI for ~$250M in December 2025.
- [Zilliz Cloud](https://tekai.dev/catalog/zilliz-cloud): Fully managed vector database service built on Milvus, operated by Zilliz with enterprise-grade SLA (99.95%), SOC 2 Type II, HIPAA-readiness, and a proprietary Cardinal search engine delivering performance improvements beyond open-source Milvus.