About this research: Arcalea's AEO Industry Index measures how visible brands are inside AI-generated responses across ChatGPT, Gemini, Perplexity, and Claude. Over a single week in March 2026, we ran the full pipeline across five industries: commercial debt collection, commercial plumbing distribution, wholesale electrical supply, M7 business schools, and egg donation and surrogacy agencies. We collected over 1,200 AI responses, scored 62 brands, and identified structural patterns across all five data sets.
Definition
AEO Industry Index: A composite brand visibility measurement system that tracks how often and how prominently companies appear inside AI-generated responses across ChatGPT, Gemini, Perplexity, and Claude. The index uses five weighted metrics (Entity Mention Frequency, AI Share of Voice, Position Power Score, Recommendation Rate, and Platform Consistency) to score and rank brands by their standing in the AI consideration set for a given industry.
Most AEO research focuses on a single industry or a single platform. You get a snapshot. You don't get patterns.
We wanted patterns. So we ran our AEO Industry Index across five distinct categories in a single week: commercial debt collection, commercial plumbing distribution, wholesale electrical supply, M7 business schools, and egg donation and surrogacy agencies. These industries share almost nothing except one thing: they all have customers who ask AI systems for recommendations before making a decision.
The specific rankings differ by industry. The underlying mechanics don't. Across all five data sets, the same six dynamics show up. Understanding them is worth your time whether you're in one of these industries or not, because the patterns are structural, not coincidental.
In traditional search, the first page has ten results. In AI responses, the answer has three to five brands, sometimes fewer. Our data shows a stark pattern: the top-ranked entity in each industry captures a disproportionate share of total AI mentions, while the long tail of brands in each category receives near-zero visibility.
This compression means that being "in the consideration set" is a binary outcome in AI: you're either regularly mentioned or you're essentially invisible. There's no equivalent of ranking 7th and still getting occasional clicks. The implication for brands: the goal isn't incremental visibility improvement, it's crossing the threshold into consistent mention territory.
In commercial debt collection, the #1 entity captured over 58% of all AI Share of Voice across our 62-prompt data set. The #2 entity captured 19%. Everyone else split the remaining 23%. That kind of concentration exists across every industry we measured.
Recommendation Rate measures how often an entity appears in response to prompts that include an explicit recommendation request ("Who should I use for X?" vs. "Tell me about X companies"). The gap between mention rate and recommendation rate is one of the most telling metrics in our data.
High mention rate, low recommendation rate: known by AI, but not trusted to recommend. Often brands with broad content presence but weak third-party signals or inconsistent positioning.
High recommendation rate: trusted to recommend. These brands typically have strong peer review presence, independent editorial citations, and structured data signals that LLMs draw from.
In the M7 business schools data set, all seven schools had high mention rates; they're well-known. But recommendation rate varied meaningfully based on how AI systems weighted program prestige signals, rankings citations, and peer-review mentions. In egg donation and surrogacy, a category with more fragmented brand recognition, the recommendation rate gap between the top and mid-tier agencies was over 40 percentage points.
If you're not getting recommended when someone asks AI "who should I use?", you have a trust signal problem, not a visibility problem. Those require different fixes.
Platform Consistency, measured using a coefficient of variation across ChatGPT, Gemini, Perplexity, and Claude, shows significant variance in every industry. The brand that ranks #1 on ChatGPT is rarely the same brand that ranks #1 on Gemini. This isn't a minor inconsistency. In several industries, the top entity on one platform doesn't appear in the top three on another.
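As a rough sketch of how a coefficient-of-variation consistency check works (the exact normalization the index uses isn't published, and the per-platform scores below are hypothetical), the CV is simply the standard deviation of a brand's per-platform visibility scores divided by their mean; a lower value means more consistent cross-platform visibility:

```python
import statistics

def coefficient_of_variation(scores):
    """CV = population stdev / mean; lower means more consistent across platforms."""
    mean = statistics.mean(scores)
    return statistics.pstdev(scores) / mean if mean else float("inf")

# Hypothetical visibility scores for one brand on
# (ChatGPT, Gemini, Perplexity, Claude)
consistent_brand = [0.42, 0.40, 0.38, 0.44]
volatile_brand = [0.70, 0.10, 0.55, 0.05]

cv_consistent = coefficient_of_variation(consistent_brand)  # small: steady everywhere
cv_volatile = coefficient_of_variation(volatile_brand)      # large: platform-dependent
```

A brand like `volatile_brand`, strong on two platforms and nearly absent on two others, is exactly the profile the Platform Consistency metric penalizes.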
Why platforms diverge
The practical implication: a cross-platform AEO strategy needs to build citation depth across multiple source types, not just one. (For a complete framework on how authority signals stack across channels, see our Architecture of Authority piece.) Brands that rank strongly on all four platforms in our data consistently have diverse citation profiles: industry directories, independent editorial coverage, structured data in aggregators, and peer review signals.
Position Power Score weights each mention by its position in the AI response; first mentions score higher than later mentions in the same response. Across all five industries, the gap between first-position scoring and second-position scoring is larger than the gap between second and fifth.
This mirrors findings from search click-through rate research, but the effect appears more pronounced in AI responses. When an AI system names a brand first in a recommendation list, that entity tends to be named first 70–80% of the time across repeated runs of the same prompt. The ordering is not random; it reflects a genuine confidence hierarchy within the model's output.
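The position weighting described above can be sketched as follows. The actual weights in the index aren't disclosed; the decay below is illustrative, chosen so that the drop from first to second position is larger than the drop from second to fifth, matching the pattern in the data:

```python
def position_power_score(mention_positions, weights=(1.0, 0.5, 0.35, 0.25, 0.2)):
    """Sum position weights across responses; first mentions count most.

    mention_positions: 1-based position of the brand in each AI response,
    or None when the brand was not mentioned at all.
    """
    score = 0.0
    for pos in mention_positions:
        if pos is not None and pos <= len(weights):
            score += weights[pos - 1]
    return score

# Hypothetical: brand A is usually named first; brand B hovers at 2nd-3rd
brand_a = position_power_score([1, 1, 2, 1, None])  # 1.0 + 1.0 + 0.5 + 1.0 = 3.5
brand_b = position_power_score([2, 3, 2, None, 5])  # 0.5 + 0.35 + 0.5 + 0.2 = 1.55
```

Even with similar mention counts (four mentions each), the first-position brand ends up with more than twice the position-weighted score.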
The ordering effect
Brands that consistently appear first in AI responses tend to have one thing in common: they're the entity that authoritative third-party sources reference first when listing options in the category. AI systems appear to mirror the implicit hierarchy embedded in their training sources. Earning the first-position citation in industry publications, association resources, and editorial roundups compounds into first-position mentions in AI outputs.
For brands currently ranking second or third in AI responses, the path to first position runs through the same sources the AI is drawing from. Getting to #1 on Perplexity requires understanding which sources Perplexity is citing and whether your brand is appearing first, or at all, in those sources.
The industries in our data set vary significantly in decision stakes. Choosing a business school or an egg donation agency is a high-stakes, low-frequency decision. Choosing a wholesale electrical supplier is a considered but more transactional decision. Commercial debt collection sits somewhere in the middle.
The data shows a clear pattern: higher-stakes categories produce AI responses that name fewer brands, but name them with greater confidence and consistency. In M7 business schools, AI responses regularly named all seven schools; the universe is well-defined. In egg donation and surrogacy, AI responses typically named three to five agencies, even though there are hundreds of agencies operating in the market.
Lower-stakes categories: broader mention sets, more platforms mentioning more brands, higher variance in which brands appear per prompt.
Higher-stakes categories: tighter shortlists, stronger trust signal requirements, more concentrated AI Share of Voice at the top.
The pattern makes intuitive sense: AI systems appear to be more selective when the underlying decision is serious. They draw from sources carrying the strongest trust signals and produce shorter, more focused outputs. For brands in high-stakes categories, the question isn't "how do we get mentioned?" It's "how do we earn a spot on a three-brand shortlist?" The answer runs through peer reviews, independent editorial coverage, third-party citations, and consistency over time. None of it is fast. All of it is durable.
Our composite AEO score combines five metrics: Entity Mention Frequency (25%), AI Share of Voice (25%), Position Power Score (20%), Recommendation Rate (15%), and Platform Consistency (15%). Across all five industries, the gap between the #1 and #2 composite scores is not a marginal difference; it's a structural lead.
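Using the weights above, the composite reduces to a weighted sum. The metric values below are hypothetical and assume each metric has already been normalized to a 0–100 scale (the index's normalization step isn't published):

```python
WEIGHTS = {
    "entity_mention_frequency": 0.25,
    "ai_share_of_voice": 0.25,
    "position_power_score": 0.20,
    "recommendation_rate": 0.15,
    "platform_consistency": 0.15,
}

def composite_aeo_score(metrics):
    """Weighted sum of the five normalized metrics (each on a 0-100 scale)."""
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

# Hypothetical category leader
leader = composite_aeo_score({
    "entity_mention_frequency": 90,
    "ai_share_of_voice": 85,
    "position_power_score": 88,
    "recommendation_rate": 80,
    "platform_consistency": 75,
})  # 22.5 + 21.25 + 17.6 + 12.0 + 11.25 = 84.6
```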
In three of five industries, the #1 entity's composite score is more than 2× the #2 entity's score. In one industry (commercial plumbing distribution), the leader's composite score is nearly 3× the runner-up's. This isn't just about mention frequency; it's about the compounding effect of being consistently first-positioned, across multiple platforms, with high recommendation rates.
The compounding dynamic: Brands with strong composite scores didn't get there through a single tactic. They have high mention frequency AND strong positioning AND recommendation trust AND cross-platform consistency. Each metric reinforces the others. A brand that gets recommended often tends to appear first, and brands that appear first tend to get recommended more often. The lead compounds.
The six patterns above aren't abstractions. Here's what they mean depending on where your brand currently sits:
AI rankings aren't permanent. As LLMs update and retrain, citation patterns shift. Brands that maintain active publication programs, earn ongoing third-party mentions, and build structured data assets will sustain their positions. Brands that go quiet will see erosion over the next 12 to 24 months. The visibility you have today was built by content and citations from 12 to 36 months ago. What you do now determines your position in 2027.
The entry point isn't advertising. It's becoming citable. Publish authoritative content that matches how your audience phrases questions to AI. Earn third-party coverage in publications LLMs draw from. Build consistent brand mentions across directories, associations, and curated lists. The goal is to appear in the sources that AI systems trust, before you worry about appearing in the AI outputs themselves.
The firms in the middle tier face a choice right now: invest in closing the gap while the top position is still contestable, or wait until the leader's advantage compounds into something structural. The data suggests the window is shorter than most assume. AI visibility leaders in most of the industries we measured have been building their citation profiles for years. Catching up is possible, but it requires a sustained effort, not a campaign.
Every industry we've measured has a clear leader, a contested middle, and a long tail of brands that are essentially invisible inside AI-generated answers. Without a baseline measurement, you're making strategy decisions without the data. That's the starting point.
See where your brand ranks in AI responses
The AEO Industry Index measures brand visibility across ChatGPT, Gemini, Perplexity, and Claude. Find out where you stand, and what it takes to move up.
Explore AEO & GEO Services →

AEO, or Answer Engine Optimization, is the practice of improving how visibly and prominently a brand appears inside AI-generated responses from systems like ChatGPT, Gemini, Perplexity, and Claude. Unlike traditional SEO, which targets search engine rankings, AEO targets the AI consideration set: which brands AI systems recommend when users ask questions about a category or request a recommendation.
AI systems recommend brands based on patterns in their training data and, for retrieval-augmented models like Perplexity, real-time web sources. Brands that appear frequently in authoritative third-party sources, industry directories, independent editorial coverage, and peer review platforms are more likely to be cited. The underlying logic mirrors trust signals in traditional search, but the inputs are weighted differently across platforms and update on different schedules.
Different AI platforms weight different source types. Perplexity draws heavily from live web search, so brands with strong recent press coverage tend to perform better there. ChatGPT relies more on pre-training data, favoring brands with strong historical editorial presence. Gemini and Claude have their own source weightings and retrieval behaviors. This divergence is why a cross-platform AEO strategy requires building citation depth across multiple source types simultaneously.
AI Share of Voice measures the percentage of total entity mentions within a category that belong to a specific brand, across all prompts and platforms tested. If a brand is mentioned 120 times out of 300 total entity mentions in its category, its AI Share of Voice is 40%. It is the AEO equivalent of search visibility, and it is one of the strongest predictors of whether a brand is inside or outside the AI consideration set.
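The arithmetic from the worked example above is straightforward:

```python
def ai_share_of_voice(brand_mentions, total_category_mentions):
    """Brand's share of all entity mentions in its category, as a percentage."""
    return 100.0 * brand_mentions / total_category_mentions

# The example from the text: 120 mentions out of 300 total category mentions
share = ai_share_of_voice(120, 300)  # 40.0
```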
The core lever is becoming citable by the sources AI systems trust. This means earning mentions in industry publications and directories, building a structured data presence, generating independent editorial coverage, and accumulating peer review signals. Paid advertising does not directly influence AI citation rates. The process is slower than paid channels but produces durable visibility that compounds over time as AI models retrain on an increasingly favorable citation profile.
Entity Mention Frequency (EMF) measures how often a specific brand name appears across all AI prompts tested for an industry. It is the most direct measure of raw visibility: did the AI mention this brand, and how many times? EMF alone does not distinguish between a first-position mention and a later reference, which is why it is combined with Position Power Score, Recommendation Rate, and other metrics in the composite AEO score.
The AEO Industry Index uses a five-metric composite scoring model: Entity Mention Frequency (25%), AI Share of Voice (25%), Position Power Score (20%), Recommendation Rate (15%), and Platform Consistency (15%).
Each industry data set uses 60–65 prompts run across four AI platforms, with a minimum mention threshold of n ≥ 5 to appear in ranked outputs. The March 2026 data set covers 62 brands across five industries.
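The n ≥ 5 threshold described above can be sketched as a simple filter over per-brand mention counts (the brand names and counts below are hypothetical):

```python
MIN_MENTIONS = 5  # methodology threshold: n >= 5 to appear in ranked outputs

def ranked_entities(mention_counts, min_mentions=MIN_MENTIONS):
    """Keep brands that meet the mention threshold, ranked by count descending."""
    qualified = {brand: n for brand, n in mention_counts.items() if n >= min_mentions}
    return sorted(qualified, key=qualified.get, reverse=True)

# Hypothetical mention counts across one industry's prompt set
counts = {"Brand A": 48, "Brand B": 17, "Brand C": 6, "Brand D": 2}
ranking = ranked_entities(counts)  # Brand D falls below the threshold
```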
Structured data markup follows Schema.org standards for Organization, Article, and LocalBusiness entities, which inform how AI platforms categorize and retrieve brand information.
The full five-industry report includes entity-level scores, platform breakdowns, and the complete scoring methodology. Get in touch about running this analysis for your industry.