Agentic Engine Optimization (AEO)
Source: addyosmani.com | Author: Addy Osmani | Published: 2026-04-11 | Category: opinion | Credibility: medium
Executive Summary
- Osmani defines AEO as a structured six-layer practice for making technical documentation consumable by AI coding agents, drawing a direct analogy to how SEO optimized for search crawlers.
- The article identifies key behavioral differences between agent and human HTTP access patterns — agents make 1–2 GET requests, skip analytics events, and are hard-constrained by context window limits (100K–200K tokens in practice).
- Concrete token targets are proposed (<15K for quick starts, <25K for API reference, <20K for conceptual guides), and a toolchain of emerging standards is recommended: llms.txt, skill.md, AGENTS.md, agent-permissions.json, and a “Copy for AI” UI button.
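The token budgets above can be sanity-checked without a tokenizer using the common rough heuristic of about four characters per token for English prose. A minimal sketch; the budget numbers are the article's, while the characters-per-token ratio is an assumption a real audit would replace with the target model's own tokenizer:

```python
# Rough token-budget audit for documentation pages.
# Assumes ~4 characters per token for English text (a coarse heuristic);
# a real audit would use the target model's actual tokenizer.

BUDGETS = {  # token targets proposed in the article
    "quick_start": 15_000,
    "api_reference": 25_000,
    "conceptual_guide": 20_000,
}

CHARS_PER_TOKEN = 4  # assumption: coarse average for English prose

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def audit(text: str, doc_type: str) -> tuple[int, bool]:
    """Return (estimated tokens, within budget?) for a doc of the given type."""
    tokens = estimate_tokens(text)
    return tokens, tokens <= BUDGETS[doc_type]

if __name__ == "__main__":
    page = "word " * 20_000  # ~100K characters of filler prose
    tokens, ok = audit(page, "quick_start")
    print(f"~{tokens} tokens, within budget: {ok}")
```

By this estimate, the Cisco page's 193,217 tokens would correspond to roughly 770K characters of text, far over any of the proposed budgets.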
Critical Analysis
Claim: “Agents compress multi-page human navigation into one or two HTTP requests”
- Evidence quality: anecdotal
- Assessment: Osmani references research identifying AI agent HTTP fingerprints (User-Agent strings for Claude Code, Cline, Cursor, Windsurf) as evidence that agents behave differently. This identification of fingerprints is verifiable and non-trivial. However, the behavioral claim — that agents universally make only 1–2 requests — lacks a cited controlled study. Different agents (browser-automation agents vs. API-calling agents) behave very differently.
- Counter-argument: Agentic behavior is not monolithic. Browser-use agents (like those using Playwright or Computer Use) do traverse pages like humans. Code agents using MCP servers may never make HTTP requests at all, querying documentation via structured tool calls instead. The “1–2 requests” characterization overgeneralizes.
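The fingerprinting claim is at least mechanically checkable: the tools named in the article announce themselves via User-Agent strings, so a log audit can estimate what share of documentation traffic is agent-driven. A minimal sketch; the marker substrings below are illustrative stand-ins, and a real deployment should verify the exact tokens each tool currently sends:

```python
# Minimal sketch: flag likely AI-agent requests in an access log by
# User-Agent substring. The marker strings are illustrative stand-ins
# for the fingerprints the article mentions (Claude Code, Cline,
# Cursor, Windsurf); verify the real tokens before relying on this.

AGENT_MARKERS = ("claude-code", "cline", "cursor", "windsurf")

def is_agent_request(user_agent: str) -> bool:
    """True if the User-Agent matches a known agent marker."""
    ua = user_agent.lower()
    return any(marker in ua for marker in AGENT_MARKERS)

def agent_share(user_agents: list[str]) -> float:
    """Fraction of requests that look agent-driven."""
    if not user_agents:
        return 0.0
    hits = sum(is_agent_request(ua) for ua in user_agents)
    return hits / len(user_agents)

if __name__ == "__main__":
    log = [
        "Mozilla/5.0 (Macintosh; Intel Mac OS X) AppleWebKit/537.36",
        "claude-code/1.0",
        "Cursor/0.42 (agent)",
    ]
    print(f"agent share: {agent_share(log):.2f}")
```

Note the limitation the counter-argument raises: browser-automation agents typically send ordinary browser User-Agents, so substring matching undercounts them.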
Claim: “Token count is now a first-class documentation metric”
- Evidence quality: anecdotal
- Assessment: The Cisco example (REST API Quick Start at 193,217 tokens exceeding agent context windows) is a concrete, checkable data point that makes the argument tangible. The token targets proposed (15K/25K/20K) are reasonable heuristics, though they appear to be Osmani’s own estimates rather than derived from empirical research. The framing is correct in principle: as agents become primary documentation consumers, documentation teams must think in tokens.
- Counter-argument: Context windows are growing rapidly. Gemini 3 offers 1M-token context; Claude’s effective window is 200K+. Token budget pressure may diminish over 18–24 months. The “documentation debt” problem Osmani identifies is real today but the urgency could be front-running a problem that models will outgrow. Additionally, retrieval-augmented approaches (MCP servers, RAG pipelines) let agents query only relevant documentation chunks — making total token count less critical than chunk quality.
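The retrieval point can be made concrete: if documentation is chunked at heading boundaries, an agent needs only the best-matching chunk, so corpus size matters less than chunk quality. A toy sketch in which keyword overlap stands in for the embedding similarity a real RAG pipeline would use:

```python
# Toy retrieval sketch: split a markdown document into per-heading
# chunks and return only the chunk most relevant to a query. Keyword
# overlap is a deliberately crude stand-in for embedding similarity.

import re

def chunk_by_heading(markdown: str) -> list[str]:
    """Split a markdown document at '## ' headings."""
    parts = re.split(r"(?m)^(?=## )", markdown)
    return [p.strip() for p in parts if p.strip()]

def best_chunk(chunks: list[str], query: str) -> str:
    """Pick the chunk sharing the most words with the query."""
    q_words = set(query.lower().split())

    def score(chunk: str) -> int:
        return len(q_words & set(chunk.lower().split()))

    return max(chunks, key=score)

if __name__ == "__main__":
    doc = (
        "## Authentication\nUse an API key in the Authorization header.\n"
        "## Pagination\nPass a cursor parameter to fetch the next page.\n"
    )
    chunks = chunk_by_heading(doc)
    print(best_chunk(chunks, "how do I paginate with a cursor"))
```

Under this model, a 193K-token reference is no problem as long as each chunk is self-contained and well-titled, which shifts the editorial burden from total length to chunk design.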
Claim: “llms.txt is an emerging standard agents will read”
- Evidence quality: vendor-sponsored
- Assessment: The article presents llms.txt as an actionable recommendation without adequately surfacing the credibility gap. As of early 2026, no major LLM provider has publicly committed to reading llms.txt files. Google explicitly stated it does not support it. Over 844K sites have implemented the file, but this reflects SEO-tool-driven anxiety rather than proven impact. The Mintlify and Anthropic implementations are real, but these are tool-vendors who benefit from the standard’s adoption.
- Counter-argument: Google’s Gary Illyes compared llms.txt to the abandoned keywords meta tag. MCP server endpoints (providing structured, queryable documentation) likely deliver more reliable agent access than a static markdown file at domain root. A team investing in an MCP server for their API documentation gets guaranteed agent integration; llms.txt is a speculative bet on LLM providers choosing to read it.
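For readers weighing the bet, the proposed llms.txt format (per the llmstxt.org proposal) is a markdown file served at the domain root: an H1 title, a blockquote summary, then sections of annotated links. The project name and URLs below are placeholders:

```markdown
# ExampleAPI

> ExampleAPI is a REST API for managing widgets. This file lists the
> docs most useful to an LLM, in a compact, link-first format.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): auth and first request
- [API reference](https://example.com/docs/api.md): all endpoints and schemas

## Optional

- [Changelog](https://example.com/docs/changelog.md)
```

The cost of adoption is low (one static file), which partly explains the 844K implementations despite the absence of committed consumers.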
Claim: “AGENTS.md is an adopted standard (Cisco DevNet example cited)”
- Evidence quality: case-study
- Assessment: This is the strongest empirical claim in the article. AGENTS.md is now under Linux Foundation governance (Agentic AI Foundation), supported by Anthropic, OpenAI, Google, and AWS. The GitHub blog published an analysis of over 2,500 repositories using AGENTS.md. The Cisco DevNet adoption is a real, verifiable case study. This recommendation carries genuine weight.
- Counter-argument: AGENTS.md and CLAUDE.md serve overlapping purposes — teams maintaining both adds friction. The “one file, every agent” promise depends on agent tool vendors continuing to support the spec, which is not guaranteed for all 30+ listed tools. Proprietary agents may drift from the open standard.
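AGENTS.md is deliberately free-form: plain markdown instructions that an agent reads before working in a repository, typically covering setup, test commands, and conventions. A minimal illustrative example; the commands and rules below are placeholders, not part of any spec:

```markdown
# AGENTS.md

## Setup
- `npm install` to install dependencies

## Testing
- `npm test` runs the full suite; run it before proposing changes

## Conventions
- TypeScript strict mode; no default exports
- Commit messages follow Conventional Commits
```

Because the format is just markdown prose, the friction the counter-argument notes is real: keeping AGENTS.md and CLAUDE.md in sync is a manual chore unless one file simply points at the other.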
Claim: “robots.txt misconfiguration silently blocks agent access”
- Evidence quality: anecdotal
- Assessment: Technically accurate — most AI web crawlers do respect robots.txt. The advice to audit for unintended agent blocks is sound. However, the article frames this as a novel discovery when it is well-documented crawler behavior. The more nuanced issue — that different agents use different User-Agent strings, making blanket rules unreliable — is not addressed.
- Counter-argument: Many documentation sites intentionally block AI crawlers for content scraping concerns (training data). The advice to open robots.txt to agents conflates agent-based documentation consumption with AI training scraping — two use cases with different business implications that documentation owners may want to handle separately.
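The separation the counter-argument calls for is expressible in robots.txt today, because training crawlers and user-initiated agent fetches generally send different User-Agent tokens. A sketch using tokens the major vendors have published (GPTBot and ChatGPT-User from OpenAI, ClaudeBot and Claude-User from Anthropic); verify the current names before relying on them:

```
# Block bulk training crawlers
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Allow user-initiated agent fetches
User-agent: ChatGPT-User
Allow: /

User-agent: Claude-User
Allow: /
```

This keeps the two business decisions independent: declining to feed training pipelines while still serving documentation to a developer's agent.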
Credibility Assessment
- Author background: Addy Osmani is a well-known engineering leader, formerly on the Chrome team at Google and currently Director at Google Cloud AI, focused on Gemini, Vertex AI, and ADK. He has strong credibility in developer experience and web performance. He has no apparent financial stake in the tools he recommends (llms.txt, AGENTS.md are open standards). His track record includes foundational web performance work (Lighthouse co-creator). His transition to Google Cloud AI means this article has some alignment with Google’s interests but does not appear to be Google-sponsored content.
- Publication bias: Personal blog (addyosmani.com). Independent, though Osmani works for Google. No advertisements, no sponsored content disclosure required. The recommendations include non-Google tools (Anthropic’s Claude Code patterns, non-Google standards). Moderate independence.
- Verdict: medium — The article is from a credible practitioner with real insight, but several headline recommendations (especially llms.txt) lack independent validation. The framing is prescriptive without quantified evidence for most claims. The article is best read as a well-informed opinion piece by a practitioner close to the problem, not as empirical research.
Entities Extracted
| Entity | Type | Catalog Entry |
|---|---|---|
| Agentic Engine Optimization (AEO) | pattern | link |
| llms.txt | pattern | link |
| AGENTS.md | pattern | link |
| Model Context Protocol (MCP) | open-source | link |
| Agent Skills Specification | open-source | link |