I ran 3,348 AI navigation tasks across 269 websites. Here's what actually determines whether AI agents find your content.
This post summarises key findings from the full research paper.
Your content exists. It ranks. But can AI actually get there?
You've spent years getting your pages to rank. Good. But here's the problem: AI agents don't use Google the way a human does.
I ran 3,348 navigation tasks across 269 websites to understand how AI agents actually find content. The results challenge pretty much everything I originally assumed about "AI-optimised" content.
Let's start with what works. AI agents successfully navigated to the target content 78.6% of the time across the entire dataset. That's actually pretty good, considering how my earlier ad-hoc attempts at getting Claude to find me shoes in the right size went.
But that headline number hides some nasty surprises.
This one surprised me.
First, some context: I tested a "search-first" approach where the AI searches Google and clicks the first result regardless of domain. So if you search "Stripe pricing" and some comparison blog ranks above stripe.com, the agent goes there instead. This mirrors how many real AI browsing tools actually work — they trust Google to surface the right page.
The results were bimodal. Search-first either works instantly or fails badly:
Search shortcuts work brilliantly (90%). Search + navigation is brutal (27%). Homepage navigation sits reliably in between.
When search shortcuts to the answer — and it often does, about 59% of the time — it's magic. The AI lands, finds what it needs, done. 90% success rate on those attempts.
But when search doesn't nail it? The agent is now navigating from whatever page Google served up, which might be a blog post, a competitor's site, or a lead-capture landing page. Success collapses to 27%.
Starting from the homepage — boring, reliable homepage navigation — has nearly 3x better odds than search-first when navigation is actually required.
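A quick back-of-the-envelope check makes the trade-off concrete. Using the rates quoted above (search shortcuts 59% of the time at 90% success, otherwise 27% success), the blended expected success of search-first works out to roughly 64%. The two branch rates come from this post; combining them as a simple weighted average is my own simplification, since it treats the branches as independent:

```python
# Blended success rate of the search-first strategy.
# The 59/41 split and the per-branch success rates are the figures
# quoted in this post; the weighted-average combination is a
# simplifying assumption, not part of the original analysis.
shortcut_rate = 0.59       # search lands directly on the answer
shortcut_success = 0.90    # success rate when it does
fallback_success = 0.27    # success rate when navigation is still required

blended = shortcut_rate * shortcut_success + (1 - shortcut_rate) * fallback_success
print(f"search-first blended success ≈ {blended:.0%}")  # ≈ 64%
```

Even with the 90% shortcut branch doing most of the work, the 27% fallback drags the average well below the dataset-wide 78.6%, which is why the bimodal split matters.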
Why does search fail when it fails?
Google optimises for engagement, not task completion. Your top-ranking page for "pricing" might be a blog post about pricing, not your actual pricing page. For a human, no big deal — they'll click around and figure it out. For an AI agent with limited patience and no backtracking? It's now starting from the wrong page entirely, with a 27% chance of recovery.
91% of successful navigation completes within two clicks. After that, success rates fall below 30%.
Success drops sharply after 2 clicks. Beyond that, failure becomes more likely than success.
This isn't surprising if you think about it. Each click has roughly a 70% chance of being correct. Compound that over three clicks and you're at ~34% cumulative success. My data matches this almost exactly.
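The compounding is easy to reproduce. This sketch assumes each click is an independent ~70% coin flip (the per-click rate is from the post; independence is my simplification):

```python
# Cumulative success if each click is independently ~70% likely to be
# correct. The 70% per-click rate is from the post; assuming clicks
# are independent is a simplification for illustration.
per_click = 0.70
for clicks in range(1, 5):
    print(f"{clicks} click(s): {per_click ** clicks:.0%}")
# 1 click: 70%, 2 clicks: 49%, 3 clicks: 34%, 4 clicks: 24%
```

Three clicks lands at ~34%, matching the observed drop below 30% after two clicks almost exactly.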
What this means for your site
If your pricing page is three clicks from the homepage, AI agents will struggle to find it. Same for any critical content buried in nested navigation. The fix is obvious: flatten your important content. Pricing, contact, key product categories — one click from anywhere is the goal.
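Auditing this is a breadth-first search over your internal link graph. A minimal sketch, assuming you've already extracted the links from a crawl into an adjacency map (the `site` structure below is hypothetical):

```python
from collections import deque

def click_depths(links: dict[str, list[str]], start: str = "/") -> dict[str, int]:
    """Minimum number of clicks from `start` to each reachable page (BFS)."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

# Hypothetical site structure: pricing is buried three clicks deep.
site = {
    "/": ["/products", "/about"],
    "/products": ["/products/widgets"],
    "/products/widgets": ["/pricing"],
}
for page, depth in click_depths(site).items():
    flag = "  <-- beyond the two-click cliff" if depth > 2 else ""
    print(f"{depth} click(s): {page}{flag}")
```

Anything flagged beyond two clicks is in the sub-30% zone from the compounding math above.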
Here's a weird one.
AI agents click links in the top 50 DOM positions 80% of the time. Yet success rates are essentially flat across all positions — around 68-74% whether the link is at position 10 or position 200.
AI clicks early links heavily (bars), but position doesn't predict success (line). The paradox: position bias is learned behaviour, not optimisation.
The AI has learned to scan top-to-bottom, like a human would. But unlike a human, it lacks the contextual reasoning to know when a lower-positioned link is actually the better choice.
Where do clicks land? 80% target navigation elements (header, footer, sidebars). The AI treats nav as the discovery mechanism and largely ignores body content links.
This makes sense if you think about it from a training perspective: if the current page isn't the answer, body text is about this page while navigation is designed to help you find other pages. The AI has figured out that nav elements are the escape route.
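You can measure your own nav-versus-body link split with the standard-library HTML parser. A minimal sketch, assuming nav links are those enclosed in `<nav>`, `<header>`, `<footer>`, or `<aside>` containers (my working definition, not the study's):

```python
from html.parser import HTMLParser

NAV_TAGS = {"nav", "header", "footer", "aside"}  # assumed nav containers

class LinkClassifier(HTMLParser):
    """Tag each <a href> as 'nav' or 'body' by its enclosing container."""
    def __init__(self):
        super().__init__()
        self.depth_in_nav = 0
        self.links: list[tuple[str, str]] = []

    def handle_starttag(self, tag, attrs):
        if tag in NAV_TAGS:
            self.depth_in_nav += 1
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                kind = "nav" if self.depth_in_nav else "body"
                self.links.append((kind, href))

    def handle_endtag(self, tag):
        if tag in NAV_TAGS and self.depth_in_nav:
            self.depth_in_nav -= 1

html = """
<header><a href="/pricing">Pricing</a></header>
<main><p>Read our <a href="/blog/pricing-guide">pricing guide</a>.</p></main>
<footer><a href="/contact">Contact</a></footer>
"""
parser = LinkClassifier()
parser.feed(html)
print(parser.links)
# [('nav', '/pricing'), ('body', '/blog/pricing-guide'), ('nav', '/contact')]
```

If your most important pages only appear as body links, the agent's 80% nav bias means it will mostly never see them.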
Bottom line
Where you place a link matters more than what you call it. Footer links are basically invisible to AI agents. If content matters, put it in the header navigation. And while in-copy links are great for SEO, AI agents largely ignore them.
95% of failed traces involve the agent revisiting a page it's already seen.
When AI navigation fails, it's almost always because the agent got stuck in a loop.
The AI doesn't run out of options — failed traces have a median of 139 links available on the first page. It just gets stuck circling.
The typical failure sequence: the agent clicks a promising link, doesn't find the answer, returns to a page it has already seen, and repeats the cycle until it runs out of steps.
One contributor: 65% of links on a typical page share a URL with at least one other link. The AI might think it's clicking something new ("Products" vs "Our Solutions") but both link to the same page.
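Checking your own pages for this duplicate-URL problem takes a few lines. A minimal sketch over an extracted list of hrefs (the example list is hypothetical):

```python
from collections import Counter

def duplicate_link_share(hrefs: list[str]) -> float:
    """Fraction of links whose URL also appears under at least one other link."""
    counts = Counter(hrefs)
    dupes = sum(n for n in counts.values() if n > 1)
    return dupes / len(hrefs) if hrefs else 0.0

# Hypothetical nav: "Products" and "Our Solutions" both point at /products.
hrefs = ["/products", "/products", "/pricing", "/contact"]
print(f"{duplicate_link_share(hrefs):.0%}")  # 50%
```

Every duplicate label is a chance for the agent to burn a step "discovering" a page it has already rejected.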
Industry matters too: there's a 16-percentage-point gap between the top- and bottom-performing verticals. When you normalise for task difficulty, the gap widens further.

What's going on:
Financial Services performs most consistently. Turns out those regulatory requirements that force plain language and logical structure? They work for machines too.
Enterprise/B2B collapses on hard tasks — from 78% on easy tasks to just 30% on hard ones. "Solutions-speak" navigation labels ("Platform", "Capabilities", "Transform Your Business") confuse AI agents that rely on semantic matching.
E-commerce is remarkably stable. Sites built for browsing and discovery? That investment pays off for AI navigation too. Clean hierarchical information architecture with clear labelling is LLM catnip.
Pharma/Healthcare struggles mid-complexity. Compliance infrastructure (disclaimers, interstitials, geographic restrictions) trips up the AI.
Key Results
The web has entered a third dimension.
Dimension One was content existing. You built a page; people found it through links and word of mouth.
Dimension Two was demand matching. Search engines connected content to queries. SEO emerged, tools like Ahrefs and SEMrush spent two decades optimising the match.
Dimension Three is accessibility. Content needs to exist, rank well, AND be reachable by AI agents. They navigate, click, render JavaScript, and decide where to go next.
A page can rank #1 and be completely invisible to AI agents. This research began before Claude's browsing, Gemini's browser integration, and viral open-source projects like OpenClaw became widely available. I was predicting a theoretical future that has now arrived.
This research was conducted using proprietary navigation trace data. If you want to test how AI agents navigate your site, that's literally what Wayfinder Compass does.
This post covers the highlights, but there's more: detailed methodology, complete data tables, failure pattern analysis, and all 11 visualisations.