Here's something we didn't expect to find: when we ran agent-bench across dozens of websites, the ones with the best accessibility practices consistently scored higher on agent-readiness. The overlap isn't a coincidence — it's structural.
Screen readers and AI agents face the same fundamental challenge: they can't see a webpage. They can't look at a button and know it's a button. They can't glance at a layout and understand the hierarchy. They need the page to describe itself — in structured, machine-readable ways.
Web accessibility (a11y) solves this for assistive technology. Semantic HTML, ARIA labels, structured headings, alt text, form labels — all of these exist to make content understandable without visual rendering. AI agents need exactly the same thing.
The pattern is clear. Every row in that table is a case where solving for accessibility simultaneously solves for agent-readiness. The underlying need is identical: make the content and functionality self-describing.
When agent-bench scores a website, the structure check evaluates semantic HTML, heading hierarchy, ARIA landmarks, and metadata. Sites that invest in accessibility tend to nail these automatically.
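To make that concrete, here's a minimal sketch of the kind of structure check described above. It's illustrative, not agent-bench's actual implementation; the landmark set and the "no skipped heading levels" rule are assumptions for the example.

```python
# Illustrative structure check: heading hierarchy + semantic landmarks.
# (Not agent-bench's real internals -- a simplified stand-in.)
from html.parser import HTMLParser

SEMANTIC_LANDMARKS = {"nav", "main", "article", "aside", "header", "footer"}

class StructureChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.headings = []      # heading levels in document order
        self.landmarks = set()  # semantic landmark tags seen

    def handle_starttag(self, tag, attrs):
        if tag in SEMANTIC_LANDMARKS:
            self.landmarks.add(tag)
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.headings.append(int(tag[1]))

def check_structure(html):
    checker = StructureChecker()
    checker.feed(html)
    # A sane hierarchy: exactly one <h1>, and no heading ever jumps
    # down more than one level (h2 -> h4 is a skip).
    one_h1 = checker.headings.count(1) == 1
    no_skips = all(b - a <= 1 for a, b in zip(checker.headings, checker.headings[1:]))
    return {
        "one_h1": one_h1,
        "no_skipped_levels": no_skips,
        "landmarks": sorted(checker.landmarks),
    }

page = ("<main><h1>Docs</h1><nav><h2>Contents</h2></nav>"
        "<article><h2>Intro</h2><h3>Setup</h3></article></main>")
print(check_structure(page))
```

A site built to accessibility standards passes a check like this without trying, which is exactly the overlap the scores keep surfacing.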
Government sites are a good example. usa.gov, nasa.gov, and cdc.gov — all required to meet Section 508 accessibility standards — score well on structure. Their HTML is semantic, their headings are hierarchical, their forms are labeled. They weren't thinking about AI agents when they built this. They were complying with accessibility law. But the result is the same.
Contrast this with heavily JavaScript-rendered marketing sites. They might look beautiful, but strip away the visual rendering and you get an empty <div id="root"></div>. Bad for screen readers. Bad for agents. Bad for both, for the same reason.
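You can see the difference directly by extracting only the text the HTML itself carries. This toy extractor stands in for any non-visual consumer, screen reader or agent:

```python
# Sketch: what a non-visual consumer "sees" once rendering is stripped away.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        if data.strip():
            self.parts.append(data.strip())

def visible_text(html):
    extractor = TextExtractor()
    extractor.feed(html)
    return " ".join(extractor.parts)

spa_shell = '<body><div id="root"></div></body>'   # JS-rendered app, pre-hydration
semantic = '<body><main><h1>Pricing</h1><p>Pro plan: $12/mo.</p></main></body>'

print(repr(visible_text(spa_shell)))  # ''  -- nothing to read
print(repr(visible_text(semantic)))   # 'Pricing Pro plan: $12/mo.'
```

The empty string is the whole story: until JavaScript runs, the shell describes nothing.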
The overlap is real, but it's not complete. Accessibility best practices get you maybe 40% of the way to being agent-ready. The remaining 60% requires things that accessibility doesn't address:
- llms.txt — a machine-readable description of your site's purpose, content, and capabilities. Screen readers don't need this; agents do.
- X-RateLimit-Remaining headers, which let agents self-throttle.

Accessibility is a necessary foundation, not a sufficient one. But it's a really good foundation.
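As a sketch of the rate-limit point: an agent that honors these headers can pace itself instead of burning its budget. The X-RateLimit-* names follow a common convention rather than a standard, and throttle_delay is a hypothetical helper, not a real library call:

```python
# Hedged sketch of agent-side self-throttling. Header names follow the
# common (but non-standardized) X-RateLimit-* convention.
import time

def throttle_delay(headers, now=None):
    """Seconds an agent should wait before its next request."""
    now = time.time() if now is None else now
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    reset_at = float(headers.get("X-RateLimit-Reset", now))
    if remaining > 0:
        # Budget left: spread the remaining requests over the window.
        return max(0.0, (reset_at - now) / max(remaining, 1))
    # Budget exhausted: wait until the window resets.
    return max(0.0, reset_at - now)

# 10 requests left, window resets in 30s -> pace at one request per 3s
print(throttle_delay({"X-RateLimit-Remaining": "10", "X-RateLimit-Reset": "1030"}, now=1000.0))
```

Sites that send these headers give well-behaved agents everything they need to stay under the limit; sites that don't just get hammered and then block the traffic.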
Here's the pitch for anyone allocating engineering resources: every hour you spend on accessibility now pays double.
Today, accessibility improvements serve the ~15% of users who rely on assistive technology (plus everyone who benefits from better UX). Tomorrow — and increasingly today — those same improvements serve AI agents that are becoming a significant source of traffic and transactions.
Our robots.txt survey found that one in three popular websites already sees enough AI agent traffic to explicitly manage it. As agent traffic grows, the sites that are already machine-readable will capture that demand. The ones that aren't will lose it to competitors who are.
Things you can do today that improve both accessibility and agent-readiness:

1. Use semantic landmark elements (<nav>, <main>, <article>, <aside>) instead of generic <div>s.
2. Give headings a strict hierarchy: one <h1>, sections with <h2>, subsections with <h3>.
3. Add alt text to all meaningful images.
4. Pair every form input with a <label> element (not just placeholder text).
5. Add ARIA landmark roles where semantic elements won't fit (role="navigation", role="main", role="search").
Then, to go from "accessible" to "agent-native," add: an API, an llms.txt file, structured data markup, and agent-layer middleware for rate limiting, discovery, and error handling.
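For reference, llms.txt is an emerging convention rather than a formal standard: a markdown file served at /llms.txt with a title, a one-line summary, and sections of annotated links. A minimal file for a hypothetical site (names and URLs invented for illustration) might look like:

```text
# Example Store

> Online store selling widgets. Browse the catalog, check stock, and place orders via the API.

## Docs

- [API reference](https://example.com/docs/api.md): REST endpoints for catalog, cart, and checkout
- [Rate limits](https://example.com/docs/limits.md): request budgets and rate-limit headers

## Optional

- [Company history](https://example.com/about.md): background, not needed for transactions
```

It costs minutes to write and gives agents the orientation that a human gets from your homepage at a glance.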
The web accessibility movement spent decades arguing that building for assistive technology makes the web better for everyone. They were right — and more right than they knew. The same structural investments that serve people with disabilities now serve a new class of non-visual users: AI agents.
If you're already investing in accessibility, you're ahead of the curve on agent-readiness. If you're not, you now have twice the reason to start.
Want to know where your site stands? Run agent-bench and check your structure score. If it's low, your accessibility probably needs work too.