The AEO Tool Landscape: What Exists, What's Missing, and How to Build Your Stack
A practical guide to AEO tools in 2026 — five categories, honest assessments, self-build options, and a recommended stack.
The AEO tool market is growing fast, but most of it is focused on one thing — tracking how often AI mentions your brand. That's a valid category, but it's not the whole picture. A complete AEO stack covers technical accessibility, content quality, prompt visibility, and the SEO foundation you already have. Here's what exists, what each category actually does, and how to put together a stack without overspending.
The Short Answer
You probably need fewer tools than you think. A solid AEO stack for most teams: Screaming Frog (general crawlability), Compass (AI-specific navigation testing), your existing SEO platform (Ahrefs or Semrush), and optionally a prompt visibility tracker — or build one yourself. The most important layer is technical accessibility. Everything else is secondary.
Five Categories of AEO Tools
The AEO tool landscape is more fragmented than it first appears. Most listicles treat all AEO tools as interchangeable, but they're not. We categorise them into five distinct types:
- Prompt visibility trackers — Track how often your brand appears in AI responses. The biggest and most funded category.
- SEO platforms with AI features — Your existing SEO tools, now adding AI metrics.
- On-site content optimisation for AI — Tools that score your content for AI-readiness.
- Technical accessibility testing — Tools that test whether AI agents can actually access and navigate your site.
- Self-build — Building your own tools with code and APIs.
Most of the market and most of the listicles focus on category 1. We'd argue category 4 is where you should start. If AI can't navigate to your content, visibility tracking tells you nothing about the actual problem.
Prompt Visibility Trackers
What they do: Track how often your brand appears when people ask AI specific questions. Monitor across ChatGPT, Perplexity, Gemini, Google AI Overviews. Some offer competitive benchmarking, sentiment analysis, and content recommendations.
Key tools: Profound (most well-known, has prompt tracking + content optimisation + workflow features), Otterly (focused tracker, pricing from $29/month for 15 prompts to $989+ for 1,000), Peec AI (multilingual monitoring across 8+ platforms, 115+ languages), AIclicks (prompt discovery + competitive benchmarking), Rankscale, HubSpot AEO Grader.
Our position: We've argued extensively that prompt visibility metrics are fundamentally noisy — LLMs are non-deterministic, meaning running the same prompt 100 times gives different results. Position in AI responses is meaningless; frequency requires 60-100 runs per prompt to be statistically valid. Most tools don't run at that volume. We made this argument with 19 sources in our blog post AI Is Not a Performance Channel. If you disagree, read it and decide for yourself.
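The 60-100 figure follows from basic sampling statistics, which you can check yourself. A minimal sketch using a normal-approximation confidence interval (the z = 1.96 choice and the exact numbers are our illustration, not from the post):

```python
import math

def mention_rate_ci(mentions, runs, z=1.96):
    """95% confidence interval (normal approximation) for an observed mention rate."""
    p = mentions / runs
    margin = z * math.sqrt(p * (1 - p) / runs)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# 5 mentions in 10 runs: the true rate could plausibly be anywhere from ~19% to ~81%
print(mention_rate_ci(5, 10))
# 50 mentions in 100 runs: the interval narrows to roughly 40%-60%
print(mention_rate_ci(50, 100))
```

At 10 runs the observed rate is compatible with almost anything; only around 100 runs does a mention rate become a number you can act on.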
That said: These tools look good and do interesting things. If you want prompt visibility tracking, Profound is probably the most complete option. But we'd put this category as optional, not essential. The self-build option (see below) can get you 80-90% of the insight.
SEO Platforms with AI Features
What they do: Ahrefs and Semrush are adding AI visibility metrics into their existing platforms. Ahrefs recently rolled out prompt tracking across paid plans. Semrush has AI visibility reporting.
The advantage: You probably already have one of these. Adding AI tracking to your existing SEO workflow is lower friction than buying a separate tool. The AI features aren't as deep as dedicated tools like Profound, but they cover the basics.
Our take: If you're doing both SEO and AEO (which you should be), your existing platform is a sensible place to add the AI layer. You don't need a separate tool for what amounts to additional reporting in a platform you already use.
On-Site Content Optimisation for AI
What exists: Profound has a content optimisation feature with an "AEO Content Score" — evaluates semantic alignment, structured data, heading density, query fanout patterns, freshness. Surfer SEO claims some AEO capabilities. These are essentially the Clearscope/Surfer model (score your content against signals from high-performing pages) applied to AI citations.
The gap: These tools measure correlation — "pages that get cited by AI tend to look like this." They don't measure whether AI can actually extract meaningful information from your pages. Nobody currently tests content extractability at the chunk level — whether an LLM retrieves accurate information from your content when it processes it. That's a genuine gap in the market. It's also what Wayfinder is building next.
Our take: Content scoring tools can be useful for identifying structural issues, but they're measuring proxies, not outcomes. Use them if you have them, but don't treat the score as definitive.
Technical Accessibility Testing
The category we care most about. This is where Compass lives.
What it means: Can AI agents actually access, navigate, and extract information from your site? This is the foundation — if AI can't reach your content, nothing else matters. For a deeper look at what AI navigability involves, see our glossary entry.
Screaming Frog: The industry-standard crawlability tool. Not AI-specific, but it tells you a huge amount about what any bot can see. JavaScript rendering comparison (raw HTML vs rendered DOM), crawl depth analysis, link structure mapping. Everyone already has it. Use it. It won't catch AI-specific issues like navigation loops or semantic link confusion, but it covers the technical foundation. Free version available.
Compass (Wayfinder): Purpose-built for AI navigation testing. Simulates how AI agents actually navigate your site — tests entry strategies, measures click depth, identifies loop traps, maps where agents succeed and fail. Based on the navigation patterns from our 3,348-task research across 269 websites. This is the AI-specific layer on top of what Screaming Frog gives you. If Screaming Frog tells you "can a bot crawl this page," Compass tells you "can an AI agent find this page by navigating your site." For more on what this testing reveals, see our guide on how AI agents navigate websites.
LibreCrawl and DIY options: Open-source crawling libraries exist (LibreCrawl, Scrapy, various Python libraries), and a basic version of Screaming Frog's functionality is a weekend project. You could also automate the DIY audit guide process: send an AI agent to your site, record what it finds, score the results. A self-built version of Compass is possible too, but the hard part isn't automating the manual audit; it's the weeks of engineering and ML work needed to make it fast, reliable, and scalable. Production-grade is where the investment lies.
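Automating the manual audit mostly means putting structure around tasks and outcomes so the results are scoreable. A minimal harness, as a sketch (the task and result shapes here are our own, not Compass's):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NavTask:
    description: str   # e.g. "Find the pricing page"
    target_path: str   # e.g. "/pricing"

@dataclass
class NavResult:
    task: NavTask
    reached_path: Optional[str]  # final URL path the agent landed on; None if it gave up
    clicks: int                  # how many navigation steps it took

def score(results, max_clicks=10):
    """Success rate: the agent reached the target within the click budget."""
    successes = [
        r for r in results
        if r.reached_path == r.task.target_path and r.clicks <= max_clicks
    ]
    return len(successes) / len(results)
```

Run each task with whatever agent you have access to, log a `NavResult` per attempt, and the score gives you a repeatable baseline to compare against after site changes.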
The Self-Build Option
What you can build yourself:
- Prompt visibility tracker: Query the ChatGPT, Claude, Gemini, and Perplexity APIs with your target prompts. Parse responses for brand mentions. Store results. Run 60-100 times per prompt for statistical validity. Structure the output into a dashboard. Achievable in a focused weekend with Claude Code. Cost: API fees only.
- Basic crawlability testing: Python libraries (Scrapy, BeautifulSoup, LibreCrawl) can crawl your site and compare raw HTML to rendered content. Not as polished as Screaming Frog but gets you the core data.
- Manual AI navigation testing: Follow the DIY audit guide — send AI agents to your site with specific tasks, record results, score them. Time-consuming but free.
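The prompt-tracker bullet above really is a small amount of code. A sketch of the core loop against the official OpenAI Python client (the model name, run count, and helper names are illustrative; the other vendors' APIs follow the same shape):

```python
# pip install openai  -- assumes the official OpenAI Python client
import re

def count_mentions(responses, brand):
    """Case-insensitive whole-word brand matching across a batch of responses."""
    pattern = re.compile(rf"\b{re.escape(brand)}\b", re.IGNORECASE)
    return sum(1 for r in responses if pattern.search(r))

def collect_responses(prompt, runs=100, model="gpt-4o-mini"):
    """Ask the same prompt `runs` times -- enough to average over non-determinism."""
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    return [
        client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        for _ in range(runs)
    ]

# Usage: responses = collect_responses("What are the best AEO tools?")
#        print(count_mentions(responses, "YourBrand") / len(responses))
```

Store each batch with a timestamp and you have the trend line the paid dashboards sell; the expensive part of those tools is polish, not the data.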
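The crawlability bullet likewise needs little beyond the standard library for a first pass: fetch raw HTML, extract the visible text and links, and flag pages that are near-empty before JavaScript runs. A sketch (the 200-character threshold is our own guess, not a standard):

```python
from html.parser import HTMLParser
import urllib.request

class RawTextAudit(HTMLParser):
    """Collect visible text length and link hrefs from raw (unrendered) HTML."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.text_chars = 0
        self.links = []
        self._skipping = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skipping += 1
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skipping:
            self._skipping -= 1

    def handle_data(self, data):
        if not self._skipping:
            self.text_chars += len(data.strip())

def audit_raw_html(html, min_chars=200):
    parser = RawTextAudit()
    parser.feed(html)
    return {
        "text_chars": parser.text_chars,
        "links": len(parser.links),
        "likely_js_shell": parser.text_chars < min_chars,
    }

def audit_url(url):
    raw = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    return audit_raw_html(raw)
```

A page flagged as a JS shell is exactly the case where the raw-HTML-vs-rendered-DOM comparison in Screaming Frog earns its keep.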
What's harder to build yourself:
- Anything that requires ML models (content scoring, navigation prediction)
- Anything that needs to run at scale reliably (automated navigation testing across hundreds of pages)
- Anything where the UX matters for non-technical team members
Our honest take: With modern AI coding assistants, pretending you can't build basic versions of most of these tools is silly. If you've got someone technical and a couple of weekends, you can cover a lot of ground for free. The question is whether the time investment is worth it versus buying a tool. For prompt visibility, self-build makes a lot of sense — the tools are measuring something we think is noisy anyway, so paying thousands a month for a slick dashboard around noise is hard to justify. For technical accessibility, the tools exist because the problem is harder — but the basic version is still buildable. For a framework on which AEO metrics actually matter, see our guide.
Recommended Stack
For most teams doing SEO + AEO:
- Screaming Frog (general crawlability, you already have it) — Free/£199 per year
- Compass (AI-specific navigation testing) — tests what Screaming Frog can't
- Ahrefs or Semrush (SEO platform with AI features, you already have it) — existing subscription
- Optional: prompt visibility tracker — either a dedicated tool like Profound if you want the polish, or self-built with API calls if you want the economics
If budget is tight:
- Screaming Frog free version + manual AI audit (follow the guide) + self-built prompt tracker
- Total cost: approximately zero; total time: a few weekends
What to prioritise:
Start with technical accessibility (can AI find your content?). That's the foundation. Then content quality (is the content good?). Prompt visibility is last — it tells you the least and costs the most. Once you know your technical foundations are sound, use the technical AEO checklist to address any gaps.
We built Compass because technical accessibility is the foundation — and nothing else tested it. See how AI agents navigate your site with Wayfinder Compass.