Methodology

22 checks across 7 categories

Powered by AFDocs — every run fetches docs directly, applies the WriteChoice validation workflow, and deterministically scores the agent experience across discovery, delivery, structure, freshness, and access.

Sample: 10 links · Pacing: 100 ms · Timeout: 15 s · Mode: Deterministic
06

llms.txt and Discovery

Verifies whether agents can discover your documentation index, follow the links it exposes, and find llms.txt from the pages you publish.
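A minimal sketch of what this kind of discovery check can look like, assuming a site that serves llms.txt at its root and lists pages as markdown links (the function names and the 15-second timeout here are illustrative, not the product's actual implementation):

```python
import re
import urllib.request

# Markdown links of the form [title](url)
LINK_RE = re.compile(r"\[([^\]]*)\]\((\S+?)\)")

def parse_llms_links(body):
    """Extract (title, url) pairs from the llms.txt markdown body."""
    return LINK_RE.findall(body)

def check_llms_txt(base_url, timeout=15):
    """Fetch /llms.txt from the site root; return its links, or None
    if the index is missing or unreachable."""
    url = base_url.rstrip("/") + "/llms.txt"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status != 200:
                return None
            return parse_llms_links(resp.read().decode("utf-8", "replace"))
    except OSError:
        return None
```

Each discovered link can then be fetched on the same pacing schedule to confirm it resolves.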

02

Markdown Delivery

Checks whether the site offers clean markdown through .md URLs or content negotiation instead of forcing agents through bloated HTML.
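One way to probe for a markdown variant, as a sketch: try an explicit .md URL first, then content negotiation via the Accept header. The helper names and the accepted media types are assumptions for illustration:

```python
import urllib.request

MARKDOWN_TYPES = ("text/markdown", "text/x-markdown")

def is_markdown_response(status, content_type):
    """Decide whether a response counts as clean markdown delivery."""
    return status == 200 and content_type.split(";")[0].strip() in MARKDOWN_TYPES

def find_markdown_variant(url, timeout=15):
    """Return the first candidate URL that serves markdown, or None."""
    candidates = (
        (url.rstrip("/") + ".md", {}),                # explicit .md URL
        (url, {"Accept": "text/markdown"}),           # content negotiation
    )
    for candidate, headers in candidates:
        req = urllib.request.Request(candidate, headers=headers)
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                if is_markdown_response(resp.status, resp.headers.get("Content-Type", "")):
                    return candidate
        except OSError:
            continue
    return None
```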

04

Page Size

Measures whether pages fit within agent context windows and whether the useful content starts early enough to avoid truncation.
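A rough version of this measurement, assuming a simple characters-per-token heuristic rather than a real tokenizer (the 4-chars-per-token ratio and the 32k budget are illustrative defaults):

```python
import re

def page_size_report(markdown, budget_tokens=32000, chars_per_token=4):
    """Estimate token count from character count and locate where the
    first heading, i.e. the useful content, begins."""
    est_tokens = len(markdown) // chars_per_token
    first = re.search(r"^#{1,6} ", markdown, flags=re.M)
    return {
        "estimated_tokens": est_tokens,
        "fits_budget": est_tokens <= budget_tokens,
        "content_offset": first.start() if first else len(markdown),
    }
```

A large content_offset means an agent burns context on boilerplate before reaching anything useful, and is at risk of truncating the part that matters.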

03

Content Structure

Evaluates tabs, headings, and code fences to ensure the content remains parseable after an agent converts or serializes the page.
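The code-fence part of such a check can be sketched as a simple parity test (this is an assumed, minimal version; it ignores fence nesting inside indented blocks):

```python
def fences_balanced(markdown):
    """Every opening ``` fence needs a closing one; an unbalanced
    fence swallows the rest of the page once an agent re-serializes it."""
    fence_lines = [ln for ln in markdown.splitlines()
                   if ln.strip().startswith("```")]
    return len(fence_lines) % 2 == 0
```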

02

URL Stability

Confirms that documentation URLs resolve cleanly, return correct status codes, and avoid redirect behavior that can confuse crawlers and agents.
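One way to classify redirect behavior, sketched over a recorded chain of (status, url) hops; the category names and the "permanent redirects only" rule are assumptions for illustration:

```python
def classify_redirects(chain):
    """Grade a redirect chain of (status, url) hops ending at the
    final response. A single permanent redirect (301/308) is tolerable;
    temporary redirects and long chains tend to confuse crawlers."""
    if chain[-1][0] != 200:
        return "broken"
    hops = [status for status, _ in chain[:-1]]
    if not hops:
        return "stable"
    if len(hops) > 1 or hops[0] not in (301, 308):
        return "fragile"
    return "redirected"
```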

03

Observability

Compares freshness and parity signals so agent-facing content stays accurate across llms.txt, markdown output, and cached HTML responses.
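A parity comparison like this can be sketched as hashing a whitespace-normalized form of each surface, so the markdown output and the text extracted from cached HTML can be checked for drift (the normalization rule here is an assumption; real checks may be more lenient):

```python
import hashlib

def parity_digest(text):
    """Collapse whitespace and case, then hash, so two renderings of
    the same content produce the same digest."""
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def in_parity(markdown_text, html_text):
    """True when both surfaces carry the same normalized content."""
    return parity_digest(markdown_text) == parity_digest(html_text)
```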

02

Authentication

Tests whether agents can access the docs without hitting auth walls, or whether alternate public paths exist when the main site is gated.
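Detecting an auth wall can be sketched from the final response: either an explicit auth status code, or a redirect that landed on a login-style path (the path fragments below are illustrative guesses, not an exhaustive list):

```python
def hits_auth_wall(status, final_url):
    """True when a docs fetch ends at a gated page: an auth status
    code, or a final URL that looks like a login screen."""
    if status in (401, 403):
        return True
    login_markers = ("/login", "/signin", "/sso/")
    return any(marker in final_url.lower() for marker in login_markers)
```

When this fires, the audit can fall back to probing alternate public paths (for example, the markdown variants found by the delivery check) before marking the category as failed.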