AEO Readiness Report: Are Websites Ready for AI Search?

AI search is no longer a novelty — it's a distribution channel. ChatGPT, Perplexity, Google AI Overviews, and Claude now field hundreds of millions of queries per day, synthesizing answers from web content instead of returning traditional link lists. The question is no longer whether AI will reshape organic traffic, but how ready your content is to be cited.

We used LLMSE's AEO analyzer to evaluate 25 major websites across industries ranging from news and tech to e-commerce, education, health, and government. The results were striking: not a single site scored above a C grade, and the average AEO score was just 42 out of 100.

The Dataset

We selected 25 globally recognized websites spanning seven industries:

| Sector | Sites Analyzed |
| --- | --- |
| News & Media | NYTimes, BBC, Forbes, ESPN |
| Technology | GitHub, Stack Overflow, Cloudflare, Microsoft, Apple |
| E-commerce | Amazon, Shopify, Nike, Booking.com, Zillow |
| Education & Research | Harvard, Nature, Coursera, Wikipedia |
| Health | Healthline, WebMD |
| Government | GOV.UK, IRS.gov |
| Social & Community | Reddit, IMDb |

Each site was evaluated against LLMSE's 10-metric AEO framework, which measures how easily AI systems can extract, attribute, and cite web content. Scores range from 0 to 100, with letter grades A through F.

Key Finding: Nobody Passes

The grade distribution tells the story:

| Grade | Count | Share |
| --- | --- | --- |
| A (85-100) | 0 | 0% |
| B (70-84) | 0 | 0% |
| C (55-69) | 3 | 12% |
| D (40-54) | 12 | 48% |
| F (0-39) | 10 | 40% |
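The grade bands above can be expressed as a simple lookup. A minimal sketch (band edges taken from the table; the function name is ours, not part of LLMSE's tooling):

```python
def score_to_grade(score: int) -> str:
    """Map a 0-100 AEO score to a letter grade using the report's bands."""
    if score >= 85:
        return "A"
    if score >= 70:
        return "B"
    if score >= 55:
        return "C"
    if score >= 40:
        return "D"
    return "F"

# Scores from the report: Harvard (59) -> C, the median (44) -> D, Amazon (22) -> F
```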

Zero sites earned an A or B. The highest score in our sample was 59 (Harvard.edu), barely clearing the C threshold. The lowest was 22 (Amazon and Booking.com). The median score was 44 — solidly in D territory.

For context, the same sites generally perform well on traditional SEO metrics. This gap suggests that optimizing for AI search is fundamentally different from optimizing for Google's link-based results.

Scores by Industry

Sector averages reveal where different industries stand:

| Sector | Average Score | Range |
| --- | --- | --- |
| Education & Research | 49.5 | 31-59 |
| Marketing & SEO | 49.0 | n/a |
| Government | 46.5 | 45-48 |
| News & Media | 46.2 | 35-56 |
| Technology | 45.8 | 34-53 |
| Health | 39.5 | 39-40 |
| E-commerce | 33.0 | 22-50 |
| Social & Community | 31.0 | 31-31 |

Education and research sites lead the pack — likely because academic content naturally includes citations, structured arguments, and clear definitions. E-commerce and social platforms trail significantly, with content optimized for browsing rather than extraction.

The Top Three Performers

Harvard.edu (59, Grade C) — Benefits from structured academic content, strong entity definitions, and authoritative sourcing. Still lacks FAQ schema, freshness signals, and direct answer formatting.

Nature.com (57, Grade C) — Scientific publishing with strong citation patterns and statistics. Schema markup and content freshness are weak points.

Forbes.com (56, Grade C) — Article-heavy content with some structured data and statistics. Loses points on schema completeness, entity clarity, and answer-first formatting.

The Bottom Three Performers

Amazon.com (22, Grade F) — Product-centric pages with minimal extractable text, no schema completeness, and virtually no Q&A patterns. The content is built for shoppers, not AI systems.

Booking.com (22, Grade F) — Similar to Amazon — search-and-filter interfaces with little AI-extractable content on the homepage.

Wikipedia.org (31, Grade F) — Surprising given Wikipedia's dominance in AI citations. The low score reflects the homepage analysis, which is a portal page with minimal substantive content. Individual articles would likely score much higher.

Where Websites Fail: The 10 AEO Metrics

Our framework evaluates ten dimensions of AI readiness. Here's how the 25 sites performed on each:

| Metric | Avg Score | Max Possible | Achievement |
| --- | --- | --- | --- |
| FAQ Schema | 10.8 | 12 | 90% |
| HowTo Schema | 7.0 | 8 | 88% |
| Topical Authority | 4.0 | 6 | 67% |
| Direct Snippets | 5.0 | 11 | 45% |
| Entity Clarity | 3.6 | 9 | 40% |
| Statistics & Data | 3.9 | 10 | 39% |
| Source Citations | 4.5 | 13 | 35% |
| Content Freshness | 1.4 | 6 | 24% |
| Schema Completeness | 1.3 | 10 | 13% |
| Answer Format | 1.4 | 15 | 10% |

The pattern is clear. Sites do well on passive metrics: FAQ and HowTo schema score high largely because sites receive neutral credit when that content type isn't expected for the page. But the active metrics that require deliberate optimization, such as answer formatting, schema completeness, freshness signals, and source citations, show massive underinvestment.

Answer Format: The Biggest Gap

Answer format detection scored just 10% of its potential. This metric measures whether content uses Q&A patterns, question-based headings, and extractable answer blocks that AI systems can pull into responses. Only 7 of 25 sites had any meaningful Q&A structure.

This is the single highest-impact improvement most sites can make. According to Princeton's GEO research, content with answer-first structure receives 3.4x more AI citations.
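As an illustration, an answer-first block pairs a question-style heading with a short, self-contained answer that an AI engine can lift verbatim. The content below is hypothetical:

```html
<h2>What is Answer Engine Optimization (AEO)?</h2>
<!-- Direct answer first: short, self-contained, quotable -->
<p>Answer Engine Optimization (AEO) is the practice of structuring web
content so that AI search engines can extract, attribute, and cite it.
It emphasizes direct answers, complete schema markup, visible dates,
and source citations rather than traditional keyword targeting.</p>
<!-- Expanded detail, examples, and caveats follow the answer block -->
```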

Schema Completeness: Nearly Absent

Schema completeness — the presence of Organization, Article, and Author structured data — averaged just 13%. The most common issues:

  • 84% of sites lacked Author schema (Person with credentials)
  • 76% lacked Article schema (with dates, author, publisher)
  • 68% lacked Organization schema (identity, sameAs links)

These schemas are how AI systems identify who wrote content, when it was published, and which organization stands behind it. Without them, AI engines have fewer signals to attribute content correctly.
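A single JSON-LD block can carry all three signals at once. The sketch below uses placeholder values (names, URLs, and dates are invented for illustration) and would sit inside a `<script type="application/ld+json">` tag:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article title",
  "datePublished": "2026-01-15",
  "dateModified": "2026-02-20",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Senior Analyst",
    "sameAs": ["https://example.com/about/jane-doe"]
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Corp",
    "url": "https://example.com",
    "sameAs": ["https://www.linkedin.com/company/example"]
  }
}
```

Nesting Person and Organization inside the Article ties authorship and publisher identity to the specific page, which is exactly the attribution chain the missing schemas leave broken.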

Content Freshness: A Blind Spot

Freshness signals — visible dates like "Last updated," copyright years, and "as of 2026" references — scored only 24%. A full 28% of sites had zero freshness signals in their visible content.

AI systems weigh content recency heavily. Perplexity's ranking algorithm uses a dedicated freshness layer, and Google AI Overviews favor recently updated pages. Yet most sites bury their dates in metadata rather than making them visible in the content.
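One lightweight pattern is to pair a visible date with matching schema metadata, so both readers and parsers see the same signal. Illustrative markup (the date shown is an example):

```html
<p class="last-updated">Last updated: February 24, 2026</p>
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "dateModified": "2026-02-24"
}
</script>
```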

The Most Common Issues

These are the AEO problems we found most frequently:

| Issue | Frequency |
| --- | --- |
| No glossary or definition blocks | 84% |
| Missing Author schema | 84% |
| No summary or extractable answer blocks | 76% |
| Missing Article schema | 76% |
| Few or no content freshness signals | 72% |
| Missing Organization schema | 68% |
| No extractable direct answers | 68% |
| No source citations to authoritative references | 60% |

AI Agent Readiness: Even Earlier Days

Beyond AEO scoring, we checked three emerging AI-readiness signals:

  • llms.txt (machine-readable site summary for AI): Only 3 of 25 sites (12%) had one — Shopify, Reddit, and Coursera
  • llms-full.txt: None detected
  • WebMCP Tool Contracts: None detected

The llms.txt specification, proposed in 2024, is still in early adoption. But its presence signals a forward-thinking AI strategy. Shopify and Coursera, both developer-oriented platforms, are predictably ahead of the curve.
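For reference, the spec calls for a markdown file at the site root: an H1 site name, a blockquote summary, and H2 sections of annotated links. A minimal hypothetical example:

```markdown
# Example Corp

> Example Corp publishes research and tooling for AI search optimization.

## Docs

- [AEO methodology](https://example.com/docs/aeo): How scores are computed
- [API reference](https://example.com/docs/api): Endpoints and parameters

## Optional

- [Blog](https://example.com/blog): Company news and analysis
```

The "Optional" section marks links an AI agent can skip when context is limited.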

What This Means for Your Website

If the world's largest websites are scoring D's and F's on AEO readiness, smaller sites likely face similar or worse gaps. The good news: because adoption is so low, early optimization creates a significant competitive advantage.

Five High-Impact Actions

1. Add answer-first content blocks. Start important sections with a 40-60 word direct answer before expanding into details. AI engines extract these short blocks as citation candidates.

2. Implement the schema trifecta: Organization + Article + Author. These three schema types together give AI systems the identity, attribution, and authority signals they need. Pages using three or more schema types are roughly 13% more likely to be cited.

3. Add visible freshness signals. Include "Last updated: [date]" text on pages, reference current years in content, and keep copyright notices current. Don't rely on metadata alone — make dates visible in the body text.

4. Cite authoritative sources. Link to .edu, .gov, and established research when supporting claims. Princeton research shows that citing sources can boost AI visibility by 30-40%.

5. Include statistics and data points. Pages with specific numbers, percentages, and data-backed claims see 30-40% higher visibility in AI-generated answers.

How to Check Your Own AEO Score

LLMSE offers a free AEO analysis tool that evaluates any URL against the same framework used in this report. You'll get a detailed breakdown of all 10 metrics, specific issues to fix, and prioritized recommendations.

You can also use our comprehensive audit to check AEO alongside SEO, E-E-A-T, readability, accessibility, and brand safety — all in one scan.

Methodology

This report analyzed homepage content for each website as of February 24, 2026. AEO scores were generated using LLMSE's AEO analyzer (v1.5.17), which evaluates 10 metrics across answer format detection, schema markup, content structure, citation signals, data usage, freshness indicators, and topical authority.

Limitations: We analyzed homepages only. Individual article or product pages may score differently. Sites that returned HTTP errors (Tesla, Medium, Reuters, TripAdvisor) were excluded from the dataset.

The full AEO analysis methodology, including metric weights and scoring criteria, is documented on our AEO tool page.


This analysis was conducted using LLMSE, which has classified over 1.4 million websites across SEO, E-E-A-T, WCAG accessibility, readability, and GARM brand safety dimensions. All data reflects the database as of February 2026. To analyze your own site, visit llmse.ai/classify.