<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel><title>Tekai - Adopt Ring</title><description>Catalog entries in the Adopt ring of the Tekai technology radar.</description><link>https://tekai.dev/</link><language>en</language><item><title>CAP Theorem</title><link>https://tekai.dev/catalog/cap-theorem/</link><guid isPermaLink="true">https://tekai.dev/catalog/cap-theorem/</guid><description>Proven theorem: a distributed data store can guarantee only two of three properties — Consistency, Availability, Partition Tolerance. Since partition tolerance is always required in practice, the true design trade-off is C vs. A during network partitions.</description><pubDate>Wed, 22 Apr 2026 00:00:00 GMT</pubDate><category>distributed-systems</category><category>consistency</category><category>availability</category><category>architecture</category><category>databases</category><category>software-engineering-principles</category></item><item><title>Conway&apos;s Law</title><link>https://tekai.dev/catalog/conways-law/</link><guid isPermaLink="true">https://tekai.dev/catalog/conways-law/</guid><description>Empirically supported organizational principle stating that software systems inevitably mirror the communication structure of the teams that build them; the inverse maneuver (restructuring teams to achieve a target architecture) is widely used in microservices and platform engineering.</description><pubDate>Wed, 22 Apr 2026 00:00:00 GMT</pubDate><category>organizational-design</category><category>architecture</category><category>team-topology</category><category>microservices</category><category>software-engineering-principles</category><category>distributed-systems</category></item><item><title>Software Engineering Principles (Collection)</title><link>https://tekai.dev/catalog/software-engineering-principles/</link><guid isPermaLink="true">https://tekai.dev/catalog/software-engineering-principles/</guid><description>The canonical collection of named software 
engineering laws, heuristics, and principles — from Brooks&apos;s Law and Conway&apos;s Law to YAGNI, DRY, Hyrum&apos;s Law, and the Testing Pyramid — that form the shared vocabulary of software practitioners for reasoning about complexity, quality, and team dynamics.</description><pubDate>Wed, 22 Apr 2026 00:00:00 GMT</pubDate><category>software-engineering</category><category>methodology</category><category>architecture</category><category>teams</category><category>quality</category><category>design</category><category>principles</category><category>distributed-systems</category><category>engineering-management</category></item>
<item><title>Technical Debt</title><link>https://tekai.dev/catalog/technical-debt/</link><guid isPermaLink="true">https://tekai.dev/catalog/technical-debt/</guid><description>Ward Cunningham&apos;s 1992 financial metaphor for the cost accumulated when expedient code shortcuts trade short-term delivery speed for long-term maintenance burden; the concept has expanded into a multi-dimensional framework covering code, design, architecture, test, and documentation debt.</description><pubDate>Wed, 22 Apr 2026 00:00:00 GMT</pubDate><category>software-quality</category><category>refactoring</category><category>architecture</category><category>engineering-management</category><category>software-engineering-principles</category><category>maintainability</category></item>
<item><title>Hugging Face Transformers</title><link>https://tekai.dev/catalog/huggingface-transformers/</link><guid isPermaLink="true">https://tekai.dev/catalog/huggingface-transformers/</guid><description>The de facto standard Python library for accessing, fine-tuning, and deploying transformer-based models across NLP, vision, audio, and multimodal tasks, with unified APIs for 500,000+ pretrained models on Hugging Face Hub.</description><pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate><category>transformers</category><category>nlp</category><category>llm</category><category>fine-tuning</category><category>pytorch</category><category>huggingface</category><category>pretrained-models</category><category>machine-learning</category><category>model-hub</category></item>
<item><title>TanStack Query</title><link>https://tekai.dev/catalog/tanstack-query/</link><guid isPermaLink="true">https://tekai.dev/catalog/tanstack-query/</guid><description>Async server-state management and data-fetching library for React (and other frameworks) with automatic caching, background refresh, and optimistic updates; ~12–16M weekly npm downloads.</description><pubDate>Wed, 15 Apr 2026 00:00:00 GMT</pubDate><category>react</category><category>data-fetching</category><category>server-state</category><category>caching</category><category>typescript</category><category>async-state</category><category>react-hooks</category></item>
<item><title>TanStack Table</title><link>https://tekai.dev/catalog/tanstack-table/</link><guid isPermaLink="true">https://tekai.dev/catalog/tanstack-table/</guid><description>Headless, framework-agnostic table and data-grid library providing sorting, filtering, pagination, and virtualization logic without any UI — you own the markup and styles.</description><pubDate>Wed, 15 Apr 2026 00:00:00 GMT</pubDate><category>react</category><category>table</category><category>data-grid</category><category>headless-ui</category><category>typescript</category><category>virtualization</category><category>sorting</category><category>filtering</category></item>
<item><title>Aider</title><link>https://tekai.dev/catalog/aider/</link><guid isPermaLink="true">https://tekai.dev/catalog/aider/</guid><description>Open-source terminal AI coding agent that uses a tree-sitter repo map and multi-mode diff engine to pair-program with LLMs across 100+ languages, with first-class git integration and support for virtually every LLM provider.</description><pubDate>Sat, 11 Apr 2026 00:00:00 GMT</pubDate><category>ai-coding-agent</category><category>cli</category><category>git</category><category>pair-programming</category><category>multi-model</category><category>repomap</category><category>tree-sitter</category><category>diff</category><category>python</category><category>litellm</category><category>terminal</category></item>
<item><title>Agent Skills Specification</title><link>https://tekai.dev/catalog/agent-skills-specification/</link><guid isPermaLink="true">https://tekai.dev/catalog/agent-skills-specification/</guid><description>An open standard for packaging reusable procedural knowledge as markdown files that AI coding agents can discover, load, and use across 30+ tools.</description><pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate><category>ai-agents</category><category>open-standard</category><category>coding-agents</category><category>developer-tools</category><category>anthropic</category><category>specification</category><category>context-engineering</category></item>
<item><title>Anthropic</title><link>https://tekai.dev/catalog/anthropic/</link><guid isPermaLink="true">https://tekai.dev/catalog/anthropic/</guid><description>AI safety company behind the Claude model family — including Claude Opus, Sonnet, Haiku, and the restricted Claude Mythos Preview — with $380B valuation, $14B ARR, and Constitutional AI as its core alignment technique.</description><pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate><category>llm</category><category>ai-safety</category><category>frontier-models</category><category>claude</category><category>constitutional-ai</category><category>api</category></item>
<item><title>Google DeepMind</title><link>https://tekai.dev/catalog/google-deepmind/</link><guid isPermaLink="true">https://tekai.dev/catalog/google-deepmind/</guid><description>Google&apos;s combined AI research and products division behind the Gemini model family, with Gemini 3.1 Pro ranking #1 on 12 of 18 tracked benchmarks in 2026 and 1M-token 
context windows available via Gemini API and Google Cloud Vertex AI.</description><pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate><category>llm</category><category>frontier-models</category><category>gemini</category><category>multimodal</category><category>api</category><category>google-cloud</category><category>long-context</category><category>reasoning-models</category></item><item><title>Mechanical Sympathy</title><link>https://tekai.dev/catalog/mechanical-sympathy/</link><guid isPermaLink="true">https://tekai.dev/catalog/mechanical-sympathy/</guid><description>A software design philosophy, coined by Martin Thompson from motorsport, that aligns program behavior with underlying hardware constraints — CPU cache hierarchy, memory access patterns, and concurrency primitives — to achieve lower latency and higher throughput without additional hardware.</description><pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate><category>performance</category><category>low-latency</category><category>cpu-cache</category><category>hardware-aware</category><category>concurrency</category><category>false-sharing</category><category>memory-access</category><category>high-throughput</category></item><item><title>OpenAI</title><link>https://tekai.dev/catalog/openai/</link><guid isPermaLink="true">https://tekai.dev/catalog/openai/</guid><description>Frontier AI lab behind GPT-5, o3, DALL-E, Sora, and Whisper, operating ChatGPT (the world&apos;s leading AI consumer product) alongside an enterprise API platform with $20B+ annual revenue and an $852B valuation.</description><pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate><category>llm</category><category>frontier-models</category><category>gpt</category><category>multimodal</category><category>api</category><category>chatgpt</category><category>reasoning-models</category></item><item><title>Tree-sitter</title><link>https://tekai.dev/catalog/tree-sitter/</link><guid 
isPermaLink="true">https://tekai.dev/catalog/tree-sitter/</guid><description>Incremental parser generator and parsing library that builds concrete syntax trees for source files and updates them efficiently on edit, supporting 100+ programming languages and used by Neovim, GitHub, and AI coding tools.</description><pubDate>Tue, 07 Apr 2026 00:00:00 GMT</pubDate><category>parsing</category><category>ast</category><category>incremental-parsing</category><category>code-analysis</category><category>syntax-highlighting</category><category>static-analysis</category><category>wasm</category></item><item><title>Retrieval-Augmented Generation (RAG)</title><link>https://tekai.dev/catalog/retrieval-augmented-generation/</link><guid isPermaLink="true">https://tekai.dev/catalog/retrieval-augmented-generation/</guid><description>An LLM inference pattern that injects relevant documents retrieved from an external corpus into the model&apos;s context at query time, grounding responses in up-to-date or domain-specific information without retraining.</description><pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate><category>rag</category><category>llm</category><category>knowledge-retrieval</category><category>vector-search</category><category>embeddings</category><category>knowledge-base</category><category>grounding</category></item><item><title>vLLM</title><link>https://tekai.dev/catalog/vllm/</link><guid isPermaLink="true">https://tekai.dev/catalog/vllm/</guid><description>High-throughput open-source LLM inference and serving engine using PagedAttention for memory-efficient KV cache management, achieving 2–24x throughput improvements over naive serving approaches.</description><pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate><category>llm</category><category>inference</category><category>serving</category><category>pagedattention</category><category>gpu</category><category>python</category><category>open-source</category></item></channel></rss>