The fundamental difference between crawlers and AI agents, and why it matters
Most websites are built and structured in a way that is optimised for access by a search engine crawler: a program that visits a site, finds every link, and follows each one systematically. It "crawls" all available pages and stores them in an index.
This means most pages on any given site will be available for discovery when someone searches for related topics. The work of SEO has been to ensure they are discoverable.
But this mental model doesn't quite match how AI agents work.
AI agents — tools like ChatGPT, Claude, Perplexity, and the emerging wave of autonomous agents — don't systematically crawl your site. For the most part, they don't navigate at all. They rarely invoke direct URL-access tools, and when they do, they move on if they don't find the answer on the first try.
When they do actually navigate (as they increasingly do with browser integrations like Claude in Chrome, Gemini, OpenAI Operator, and a growing range of browser-enabled AI tooling), they navigate with intent. They follow a goal, making decisions about where to go next based on what they see.
Think of it like this:
Crawlers are like census takers. They visit every door, record everything, and build a comprehensive map.
AI agents are like impatient users. They land on a page, scan for clues about where to go, make a decision, and move on. If they can't find what they need in a few steps, they give up.
Sites that work perfectly for Google might completely fail for AI agents. Here's why:
AI agents don't have infinite patience. If your pricing page is four clicks deep through nested menus, an AI is likely to fail the task before it gets there. The same content that Google can find and index may be effectively invisible to agents.
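As a rough illustration (not how Compass measures this), you can estimate how deeply a page is buried with a breadth-first search over your site's internal link graph. The page names and links below are hypothetical:

```python
from collections import deque

# Hypothetical site structure: each page lists the pages it links to.
site = {
    "home":       ["products", "about", "blog"],
    "products":   ["solutions"],
    "solutions":  ["enterprise"],
    "enterprise": ["pricing"],  # pricing ends up buried four clicks deep
    "about":      [],
    "blog":       [],
    "pricing":    [],
}

def click_depth(site, start, target):
    """Return the minimum number of clicks from start to target, or None."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        page, depth = queue.popleft()
        if page == target:
            return depth
        for nxt in site.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return None

print(click_depth(site, "home", "pricing"))  # 4 — likely beyond an agent's patience
```

An agent that gives up after two or three steps will never reach that pricing page, even though a crawler would index it without complaint.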
Crawlers follow links regardless of what they're called. AI agents make semantic decisions. A link that says "Solutions" might be perfectly clear to humans, but an AI looking for "pricing" might skip it entirely because the connection isn't obvious.
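Real agents use a language model to judge relevance, but even a crude lexical sketch shows why label wording matters — a goal and a label that share no words give the agent nothing to latch onto. Everything here (the scoring function, the goal, the labels) is a hypothetical illustration:

```python
def link_score(goal, label):
    """Crude word-overlap score between a goal and a link label - a
    stand-in for the semantic judgement an AI agent makes when deciding
    where to click next."""
    goal_words = set(goal.lower().split())
    label_words = set(label.lower().split())
    return len(goal_words & label_words) / max(len(goal_words), 1)

goal = "find pricing"
for label in ["Pricing", "Solutions", "Plans and Pricing"]:
    print(f"{label!r}: {link_score(goal, label)}")
# 'Solutions' scores 0.0 - the human-obvious connection is invisible here
```

The point isn't the scoring method; it's that "Solutions" carries no signal for a pricing-related goal, while "Plans and Pricing" does.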
Many AI agents see a simplified version of your page — similar to what you'd get with JavaScript disabled. Content that loads dynamically, tabs that require clicks to reveal, or menus that expand on hover may simply not exist from the agent's perspective.
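You can approximate this view yourself by parsing a page's raw HTML without executing any JavaScript. The sketch below uses Python's standard-library `html.parser` on a made-up page where the pricing table is injected at runtime:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text while skipping <script> bodies - roughly
    what a non-rendering agent sees in your page's raw HTML."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True
    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False
    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.chunks.append(data.strip())

# Hypothetical page: the price is injected by JavaScript at runtime.
html = """
<h1>Acme Widgets</h1>
<div id="pricing"></div>
<script>
  document.getElementById("pricing").innerHTML = "<p>Plans from $29/mo</p>";
</script>
"""

parser = TextExtractor()
parser.feed(html)
print(" ".join(parser.chunks))  # "Acme Widgets" - the price never appears
```

A human visitor with JavaScript enabled sees the price; an agent reading the static HTML sees an empty div.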
Crawlers index pages individually. AI agents try to understand the relationship between pages to decide if they're getting closer to their goal. If your site's structure doesn't make logical sense, the AI's decision-making breaks down.
AI agents currently use two main strategies to find information. Compass can test both.
The first strategy is direct navigation. This is how browser-based AI tools work — Claude in Chrome, Gemini, OpenAI Operator, and the growing range of browser-enabled AI assistants. The AI lands on your homepage and browses: clicking through menus, following links, and reading content to understand where things are.
What it tests: Your site's information architecture. Can an AI navigate from A to B using your menus and links?
Common failures: Buried content, confusing navigation, unclear labels, JavaScript-dependent menus.
The second strategy is search-then-verify. This is how most AI assistants work today, including ChatGPT with browsing, Perplexity, and Claude with web search. The AI searches the web first, picks the most promising result, then visits your site to verify that the content matches what it was looking for.
What it tests: Both your search visibility AND your page clarity. Can AI find you through search, and then confirm your page has the information it needs?
Common failures: Not appearing in relevant searches, landing pages that don't clearly answer the query, content that's there but hard to extract.
You can't control how AI agents work, but you can control how your site works for them.
Want to understand how Compass simulates AI navigation? See the Methodology guide.