There's a new ranking that matters more than your Google position.
It's whether AI recommends your brand when someone asks "What's the best [your category] tool?"
We built AEO Scanner to answer that question with data. This week, we scanned 5 well-known SaaS brands across ChatGPT, Claude, and Gemini — 30 prompts per brand, 150 AI responses total.
The results were surprising. Here's what we found.
AI Visibility Scoreboard
We asked each AI engine questions like "What's the best e-commerce platform?", "Which CRM should a small business use?", and "What project management tool do developers prefer?"
Then we measured who got mentioned, how often, and in what position.
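The measurement above (who got mentioned, how often, how early) can be sketched in a few lines of Python. This is an illustrative toy scorer, not AEO Scanner's actual formula — the 70/30 weighting and the position decay are assumptions for the example:

```python
# Toy sketch of a mention-rate / position scorer for AI responses.
# The weighting scheme (70% mention rate, 30% position) is a
# hypothetical illustration, NOT the actual AEO Scanner formula.

def score_brand(brand: str, responses: list[str]) -> dict:
    """Measure how often a brand is mentioned and how early it appears."""
    mentions = 0
    position_weight = 0.0
    for text in responses:
        idx = text.lower().find(brand.lower())
        if idx == -1:
            continue  # brand never mentioned in this response
        mentions += 1
        # Earlier mentions count more: weight 1.0 at the very start,
        # tapering toward 0.0 near the end of the response.
        position_weight += max(0.0, 1.0 - idx / max(len(text), 1))
    mention_rate = mentions / len(responses)
    avg_position = position_weight / max(mentions, 1)
    score = round(100 * (0.7 * mention_rate + 0.3 * avg_position))
    return {"mention_rate": mention_rate, "score": score}

responses = [
    "Shopify is the best e-commerce platform for most stores.",
    "Popular options include BigCommerce, Shopify, and WooCommerce.",
    "WooCommerce is a solid pick for WordPress users.",
]
print(score_brand("Shopify", responses))
```

Running more prompts per brand (we used 30) smooths out the noise from any single AI response.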
| Rank | Brand | Score | Mention Rate | Best Engine | Worst Engine |
|------|-------|-------|--------------|-------------|--------------|
| 1 | Shopify | 96/100 | 100% | All tied | — |
| 2 | HubSpot | 94/100 | 100% | Gemini | Claude |
| 3 | Calendly | 86/100 | 90% | Claude (80) | Gemini (48) |
| 4 | Notion | 56/100 | 47% | ChatGPT (45) | Gemini (15) |
| 5 | Linear | 41/100 | 20% | All tied | All tied |
Biggest mover: Notion (the surprise underperformer)
Notion is a $10B+ company. It has millions of users. And yet — AI barely recommends it.
Why? Notion's "everything tool" positioning works against it. When someone asks about note-taking, AI recommends note-taking specialists. When they ask about project management, AI recommends project management tools. Notion's breadth is its weakness in AI search.
The score: 56/100. Gemini mentions Notion in only 20% of relevant prompts.
Meanwhile, Shopify and HubSpot — brands that each clearly own a single category — score 94-96. The lesson: category clarity wins in AI search.
What's your score? Run a free scan at runaeo.com →
Engine Watch
We track how each AI engine recommends brands differently. Here's what's happening right now:
ChatGPT: The consistent one
ChatGPT delivers the most predictable results. If a brand is well-established in its category, ChatGPT will mention it. It currently holds ~55-60% of AI chatbot referral traffic and behaves like a more decisive version of Google search — category leaders get recommended, challengers get ignored.
Claude: The generous recommender
Claude recommends the widest range of brands. Calendly scored 80/100 on Claude vs. 48/100 on Gemini — that's a 67% difference for the same product. Claude tends to surface niche and mid-market tools more often, making it the friendliest engine for challenger brands.
Gemini: The gatekeeper
Gemini recommends the fewest brands and favors household names heavily. It's the hardest engine to crack. Notion scored just 15/100 on Gemini while getting 45/100 on ChatGPT. If your brand isn't a category leader, Gemini might not know you exist.
Perplexity: The citation machine
Perplexity operates differently — it cites sources directly, behaving more like a research engine than a recommendation engine. It now holds 18-22% of AI referral traffic. Brands with strong content assets (comparison pages, research reports, data-driven posts) perform disproportionately well here.
The takeaway: Optimizing for one AI engine isn't enough. Engine variance is real and measurable.
AEO Tactic of the Week: Build Comparison Pages
This week's tactic is the single highest-impact move we've seen in our scan data.
The finding: Brands with "X vs Y" comparison pages on their site scored 23% higher on average across all AI engines.
Here's why it works:
- **AI engines love structured comparisons.** When someone asks "What's better, Calendly or Acuity?", AI engines look for authoritative comparison content. If Calendly has a page comparing itself to Acuity, it gets cited. If it doesn't, a third-party blog gets the citation instead.
- **Comparison pages match how people prompt AI.** The most common AI query pattern is "What's the best X?" followed by "How does X compare to Y?" — comparison pages directly answer the second question.
- **It compounds.** Each comparison page creates a new surface for AI citation. Five comparison pages means five more chances to appear in AI responses.
How to execute this:
- Identify your top 3-5 competitors from your AEO Scanner results
- Create honest, data-driven comparison pages — "Your Brand vs. Competitor" format
- Include structured data (tables, feature grids, pricing comparisons)
- Don't be afraid to acknowledge competitor strengths — AI engines reward balanced, authoritative content over marketing fluff
- Update quarterly — stale comparisons lose AI trust
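One concrete way to handle the "structured data" step is schema.org JSON-LD embedded in the comparison page's HTML. The sketch below generates a minimal `ItemList` of `SoftwareApplication` entries — the brand names, categories, and prices are placeholders, and this is one plausible markup shape rather than a guaranteed ranking signal:

```python
import json

# Sketch: schema.org JSON-LD for an "X vs Y" comparison page.
# ItemList, ListItem, SoftwareApplication, and Offer are real
# schema.org types; the product values below are placeholders.

def comparison_jsonld(page_name: str, products: list[dict]) -> str:
    """Build an ItemList of SoftwareApplication entries for a comparison page."""
    items = []
    for pos, p in enumerate(products, start=1):
        items.append({
            "@type": "ListItem",
            "position": pos,
            "item": {
                "@type": "SoftwareApplication",
                "name": p["name"],
                "applicationCategory": p["category"],
                "offers": {
                    "@type": "Offer",
                    "price": p["price"],
                    "priceCurrency": "USD",
                },
            },
        })
    data = {
        "@context": "https://schema.org",
        "@type": "ItemList",
        "name": page_name,
        "itemListElement": items,
    }
    return json.dumps(data, indent=2)

print(comparison_jsonld(
    "YourBrand vs. Competitor",
    [
        {"name": "YourBrand", "category": "SchedulingApplication", "price": "12"},
        {"name": "Competitor", "category": "SchedulingApplication", "price": "15"},
    ],
))
```

The output goes in a `<script type="application/ld+json">` tag on the comparison page, alongside the human-readable feature table.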
Brands like HubSpot and Shopify already do this at scale. HubSpot alone has 50+ "HubSpot vs." pages. That's not a coincidence — it's an AEO strategy.
Quick Hits
Google AI Overviews now appear on 25%+ of all searches, up from 13% a year ago. When your brand is cited in an AI Overview, organic CTR jumps 35%. Being AI-visible isn't just about chatbots — it's about Google too.
Brand mentions now matter more than backlinks for AI visibility. New data shows branded web mentions have a 0.664 correlation with AI Overview appearances, versus just 0.218 for backlinks. The implication: PR and brand-building may now outperform link-building for search visibility.
Zero-click searches hit 65-70% of all Google queries. Users get their answer from AI-generated summaries without clicking through. If you're not IN the AI answer, you don't exist.
Perplexity quietly cut Pro searches from 600/week to 200/week. Deep Research queries dropped from 50/month to 20. As AI search platforms tighten limits, the brands that appear in free-tier results become even more valuable.
What's Your AI Visibility Score?
Every week, we scan brands and publish the results here. But we built AEO Scanner so you can check your own brand too.
It takes 30 seconds. Enter your domain, and we'll scan it across ChatGPT, Claude, and Gemini — then show you exactly where you rank, who your AI competitors are, and what to fix.
Run your free scan at runaeo.com →
AEO Weekly is published every Tuesday by the team behind AEO Scanner — the first tool that measures how AI engines see your brand. Data-driven, not opinion-driven.