You are about to sign a contract with an AEO agency. Before you do, you need to understand one thing: this is not the same evaluation as hiring an SEO agency.
With traditional SEO, the deliverables are legible. Keywords, ranks, backlinks. You can benchmark any agency against competitors on measurable criteria. The market is 25 years old. Bad agencies get exposed fast.
AEO is different. The category is two years old. Most agencies saying the words "answer engine optimization" are classic SEO shops that added three new slides to their deck in late 2024. The deliverables are harder to verify because citation share, entity architecture, and retrieval telemetry are invisible to most buyers. And the stakes are higher than they look — per Gartner's 2024 forecast, traditional search volume is projected to drop 25% by 2026 as generative AI absorbs top-of-funnel demand. The brand that gets cited in ChatGPT and Perplexity during that transition wins disproportionate share. The brand that doesn't gets erased from the buyer's consideration set before the first sales call.
This guide gives you 20 due-diligence questions across five categories. Each question tells you what to ask, why it matters, and what separates a good answer from a polished non-answer. By the end, you will be able to tell the difference between an agency that has done this work and one that is describing the work at a level that sounds credible but collapses under examination.
TLDR:
- Choosing an AEO agency requires different evaluation criteria than choosing an SEO agency — the deliverables are less visible and the market is full of rebranded SEO shops.
- Per Semrush's 2025 AI Search Report, 65% of US adults already use generative AI for questions they previously Googled. The agencies winning B2B SaaS citation share now will compound that advantage through 2027.
- Ask 20 specific questions across measurement, entity process, content strategy, infrastructure ownership, and team structure. Vague answers to these questions are disqualifying signals, not minor concerns.
- Per Clutch's 2024 agency evaluation data, buyer regret in digital agency engagements is most commonly attributed to unclear deliverable definitions and mismatched measurement frameworks — not price.
- The red flags that disqualify agencies immediately are listed in section nine of this post. Read them before your next pitch call.
The 5 Categories of Due Diligence Questions
AEO agency due diligence falls into five buckets that map to the five ways an engagement can fail:
- Measurement and reporting — if they can't measure citation share, they can't manage it
- Entity and schema process — the technical foundation most agencies skip or fake
- Content strategy — how they produce content that actually retrieves vs. content that only reads well
- Infrastructure ownership — whether you are building assets or renting results
- Team and process — whether the strategy team and execution team are the same people
Work through all five. A strong scorecard on three categories and a weak one on the other two is not a passing grade. Each category can independently cause the engagement to fail. You need coverage across all five.
Category 1 — Measurement & Reporting
This is the category that exposes rebranded SEO shops fastest. Traditional SEO measurement — keyword rankings, organic traffic, backlinks — tells you nothing about AI citation share. An AEO agency that reports on keyword ranks is optimizing for a different thing than what it claimed to sell you.
Question 1: How do you define and measure citation share?
Citation share is the percentage of relevant queries in your tracked query set where your brand is cited inside a generative AI answer. It is the AEO equivalent of organic market share. A credible agency can define this precisely: the number of queries monitored, the AI systems tracked, how the citation is detected (direct brand mention, URL citation, unnamed attribution), and how the metric is trended over time. A weak answer sounds like: "We monitor your brand presence in AI tools." That is not a measurement methodology. It is a description of watching. Good agencies name the exact query set, the AI systems covered, and the cadence. They show you a sample report with citation counts broken out by system.
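To make the metric concrete, here is a minimal sketch of how citation share per AI system can be computed from a tracked query set. The data shape, query strings, and function name are illustrative assumptions, not any agency's actual tooling:

```python
from collections import defaultdict

def citation_share(records):
    """Return citation share per AI system: cited queries / tracked queries."""
    tracked = defaultdict(int)
    cited = defaultdict(int)
    for query, system, was_cited in records:
        tracked[system] += 1
        if was_cited:
            cited[system] += 1
    return {system: cited[system] / tracked[system] for system in tracked}

# Hypothetical tracked-query records: (query, ai_system, cited) tuples,
# where `cited` is True if the brand appeared in that system's answer.
sample = [
    ("best b2b saas analytics", "ChatGPT", True),
    ("best b2b saas analytics", "Perplexity", False),
    ("saas onboarding tools", "ChatGPT", False),
    ("saas onboarding tools", "Perplexity", True),
]
print(citation_share(sample))  # → {'ChatGPT': 0.5, 'Perplexity': 0.5}
```

The point of the exercise: the metric only exists if the query set, the systems, and the citation-detection rule are all fixed in advance — exactly what a vague "we monitor your brand presence" answer lacks.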
Question 2: Which AI engines do you track and how often?
The minimum credible AEO coverage in 2026 is ChatGPT, Perplexity, Claude, Gemini, Google AI Overviews, and Microsoft Copilot. Per Semrush's 2025 AI Search Report, Perplexity and Google AI Overviews together drive more B2B commercial query volume than ChatGPT in enterprise-adjacent categories. An agency that tracks only ChatGPT is covering one retrieval surface. An agency that tracks all six is building a complete picture. Frequency matters: monthly tracking is the minimum. Agencies that report quarterly are monitoring slowly enough that you can lose share to a competitor before you get a report showing it happened.
Question 3: Can you show a citation baseline before we start?
A citation baseline is a document that shows where your brand is and is not cited across a defined query set at the start of the engagement. It is the before-state that makes all future reporting meaningful. Without a baseline, "citation share improved" is a claim with no denominator. A strong answer: "Yes, we run a 100-200 query baseline in week one, before any work starts, and we give you the full report." A weak answer: "We set up tracking after onboarding and monitor as we go." The latter means you will never have a defensible before-state, which conveniently makes it impossible to attribute outcomes to the agency's work.
Question 4: What does your weekly reporting look like versus monthly?
Weekly and monthly serve different purposes. Weekly reporting is operational: what shipped, what got indexed, what citation movements were detected. Monthly reporting is strategic: citation share trend, entity consistency score, content coverage gaps, wins and misses against 90-day milestones. A good agency can describe both cadences, what goes in each, and who on your team receives what. An agency that describes only one type of report is probably doing only one. If weekly reporting is vague or "we'll check in as needed," you are not in an active managed engagement — you are in a retainer where the agency decides when you get information.
Question 5: Do you separate correlation from causation in attribution?
This is an integrity question. AEO outcomes are hard to isolate because citation share, organic traffic, and pipeline can all move simultaneously for reasons unrelated to AEO work — product launches, PR coverage, competitor exits. A credible agency acknowledges this and builds attribution frameworks that track what they can control: specific citation appearances on specific queries, schema deployment dates, content publish dates, and subsequent indexation. They will tell you what they can and cannot attribute. An agency that attributes every metric improvement to its work, every time, without caveat, is either dishonest or hasn't thought carefully enough about its own reporting.
Category 2 — Entity & Schema Process
Entity architecture is the infrastructure layer of AEO. Without it, retrieval-optimized content floats without an anchor. Without it, LLMs can't reliably identify your brand as a distinct entity — which means citations can go to competitors with cleaner entity signals, or to no one. This is also the category where most rebranded SEO agencies are most obviously hollow.
Question 6: Do you build entity architecture before touching content?
Entity architecture comes first. That means mapping your brand, founders, executives, products, services, and core concepts as named entities with consistent identifiers before writing a single word of content. The identifier layer — schema sameAs connections to Wikidata, LinkedIn, Crunchbase, G2, and industry directories — is what allows LLMs to recognize your brand across different surface types. Good answer: "We run an entity audit in week one and don't deploy content until the entity layer is established." Weak answer: "We optimize content and build entity signals as we go." Content deployed before entity architecture is content that retrieves inconsistently at best and doesn't retrieve at all at worst. The sequence matters.
Question 7: What is your Wikidata and Knowledge Graph process?
Wikidata is the public, editable knowledge graph that feeds Wikipedia infoboxes, Google's Knowledge Panel, and a significant portion of what LLMs know about brand entities. An AEO agency should have a documented process for creating or verifying your brand's Wikidata entry, adding sameAs identifiers, connecting related entities (founder, investors, products, competitors), and monitoring for drift or vandalism over time. Good answer: "We create the Wikidata entry in week two, add sameAs to your key professional profiles, and monitor the entry quarterly." Weak answer: "We recommend editing your Wikipedia page." Those are different things. Wikidata is the structured entity layer. Wikipedia is the prose narrative. They are connected but not the same. Conflating them signals the agency doesn't operate at the entity graph level.
Question 8: Which schema types do you implement and why?
Schema markup is not a checkbox. A serious AEO agency deploys a deliberate schema stack based on your entity type and content inventory. For a B2B SaaS brand, the minimum credible stack includes Organization (full property coverage including sameAs, contactPoint, areaServed), Service or SoftwareApplication, FAQPage on question-first content, Article or BlogPosting on long-form content, Person for named authors and executives, and BreadcrumbList across the site. The agency should be able to tell you which types they deploy, which properties they cover within each type, and which types they skip and why. A weak answer is "we add FAQ and Article schema to your key pages." That's three types with two properties each. It is not a schema strategy.
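For reference, here is what full property coverage on the Organization type looks like, sketched as a Python dict emitted as JSON-LD. Every name and URL below is a placeholder — the structure, not the values, is the point:

```python
import json

# Hypothetical values throughout — every name and URL is a placeholder.
organization_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example SaaS Co",
    "url": "https://example.com",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",
        "https://www.linkedin.com/company/example-saas-co",
        "https://www.crunchbase.com/organization/example-saas-co",
        "https://www.g2.com/products/example-saas-co",
    ],
    "contactPoint": {
        "@type": "ContactPoint",
        "contactType": "sales",
        "email": "sales@example.com",
    },
    "areaServed": "US",
}

# Emitted as a JSON-LD block for the page <head>:
print('<script type="application/ld+json">')
print(json.dumps(organization_schema, indent=2))
print("</script>")
```

A two-property stub would stop at `name` and `url`. The `sameAs` array is what connects the Organization node to the external identifier layer — which is why it belongs in any serious schema stack.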
Question 9: How do you handle entity disambiguation when our brand name has conflicts?
This question is directly relevant if your brand name is shared with a company in a different industry, a person with a public presence, a concept, or a geographic entity. A concrete example: a brand named "Zealous Digital" targeting a Canadian B2B SEO audience competes for entity recognition with an unrelated company called "Zealous Digital Expert" based in Pakistan — a confusion documented in GSC data showing 309 impressions at 2% CTR from entirely wrong-intent queries. An AEO agency without a disambiguation process will not detect this problem. It will let the conflated entity signal dilute your citation share and potentially direct LLM citations toward the wrong brand. Good answer: the agency describes a specific disambiguation workflow — entity boundary definitions, competitor-adjacent brand signals, schema alternateName and disambiguatingDescription properties, and monitoring for merged Knowledge Panel entries. If the agency has never heard of entity disambiguation or says "that's not usually a problem," they are not operating at the level of sophistication AEO requires.
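Disambiguation has a concrete schema footprint. The sketch below shows the kind of properties involved — `alternateName` and `disambiguatingDescription` are real schema.org properties; the brand names and regions are taken from the example above purely for illustration:

```python
import json

# Hypothetical disambiguation additions to an Organization JSON-LD block.
# The names and regions are illustrative placeholders.
disambiguated = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Zealous Digital",
    "alternateName": "Zealous Digital Canada",
    "disambiguatingDescription": (
        "Canadian B2B SEO and AEO agency; not affiliated with "
        "Zealous Digital Expert (Pakistan)."
    ),
    "areaServed": "CA",
}
print(json.dumps(disambiguated, indent=2))
```

The schema layer alone won't resolve an entity conflict — it has to be paired with consistent identifier signals and Knowledge Panel monitoring — but an agency that can't produce even this much on a call has no disambiguation process.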
Category 3 — Content Strategy
Content is where AEO work becomes visible to readers and to LLMs. But the content strategy for retrieval is structurally different from classic SEO content. Keyword density optimization, long-form content for time-on-page, and editorial calendars driven by volume estimates are SEO playbooks. AEO content is driven by buyer question mapping, fact-density targets, and information-gain analysis. The questions in this category separate agencies that have adapted their content model from those still running a 2022 SEO content playbook.
Question 10: Do you produce content or only optimize existing pages?
Both are legitimate scope items. Neither alone is sufficient. An agency that only optimizes existing pages can improve retrieval on what you already have but can't fill the gaps — the buyer questions your site doesn't answer yet. An agency that only produces new content may be ignoring high-authority existing pages that are close to retrieval threshold and need restructuring rather than replacement. A strong content strategy does both: audit existing pages, identify which are worth optimizing, identify which are below salvage, and produce new retrieval-engineered pages for the unanswered question clusters. Good answer: "We start with an inventory audit, identify your top 30-40 pages for optimization, and simultaneously map the query gaps your site doesn't cover at all."
Question 11: How do you identify which buyer questions to target first?
Prioritization in AEO content should be driven by four signals: query volume (buyer frequency), commercial intent stage (TOFU, MOFU, BOFU), current citation share (where competitors are winning and you aren't), and information gap (where no credible source exists, meaning you can own the answer). An agency that prioritizes based only on search volume is applying SEO logic to an AEO brief. LLMs often cite sources on low-volume, high-intent, highly specific questions because those are the questions where no clear dominant source exists. Good answer: "We map questions to an intent stage and current citation-share gap first, then overlay search volume as a tiebreaker." Weak answer: "We use keyword research tools to find the highest volume questions in your category."
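The intent-first, volume-as-tiebreaker logic can be sketched as a scoring function. The weights, scales, and example questions below are illustrative assumptions, not a documented methodology:

```python
def priority_score(intent_stage, citation_gap, info_gap, volume):
    """Score a buyer question; intent and gap signals lead, volume tiebreaks."""
    stage_weight = {"BOFU": 3, "MOFU": 2, "TOFU": 1}[intent_stage]
    # citation_gap: 1.0 if competitors are cited and you are not, else 0.0
    # info_gap: 1.0 if no credible source answers the question, else 0.0
    # volume is capped so it can only break ties, never dominate
    return stage_weight * 10 + citation_gap * 8 + info_gap * 6 + min(volume, 1000) / 1000

# Hypothetical question inventory: (question, stage, citation_gap, info_gap, volume)
questions = [
    ("how to migrate off legacy crm", "BOFU", 1.0, 0.0, 120),
    ("what is a crm", "TOFU", 0.0, 0.0, 50000),
]
ranked = sorted(questions, key=lambda q: priority_score(*q[1:]), reverse=True)
print([q[0] for q in ranked])  # → ['how to migrate off legacy crm', 'what is a crm']
```

Note what the cap on volume does: the low-volume BOFU question with a citation gap outranks the 50,000-searches-a-month TOFU query — the inversion of a pure keyword-research workflow.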
Question 12: What is your information-gain process — how do you ensure content beats what is already ranking?
Information gain is the concept that new content must add something to the conversation that existing top sources do not already say. LLMs are trained on existing web content. If your page says what every other page already says, it has no information gain and retrieves only on volume and authority signals — both of which are hard to win in a short engagement. Information gain requires sourcing original data, surfacing proprietary client insights, commissioning primary research, or synthesizing competing sources in a way that produces a more complete answer than any single existing source. Good answer: "We review the top 5-10 sources on every target question before writing, identify the information gaps, and brief content around what's missing." Weak answer: "We write comprehensive content that covers the topic thoroughly." Comprehensive is not the same as differentiated.
Question 13: Do you check Perplexity and Google PAA before writing any BOFU page?
This question distinguishes agencies that do real pre-writing research from agencies that brief from a content template. Perplexity shows you exactly what an AI answer engine currently says about a topic — including which sources it cites and what claims it surfaces. People Also Ask (PAA) boxes on Google show you the real follow-up questions buyers have after the primary query. Both should inform every BOFU page before a word is written. Checking Perplexity tells you what the AI currently believes, which shows you the claims your content needs to confirm, challenge, or add to. Checking PAA tells you the related questions the content needs to answer in order to be a complete retrieval candidate. Most SEO agencies do not do either of these steps. Agencies that have genuinely rebuilt their content process for AEO do both as standard pre-writing protocol. If the agency gives you a blank look on this question or says "we use Semrush for content research," you are looking at a traditional SEO content workflow with a new label.
Category 4 — Infrastructure Ownership
This category is where some of the most aggressive misalignment in AEO agency engagements lives. Schema, entity graphs, citation tracking query panels, and content infrastructure can all be built on your domain and transferred to you — or they can be built on proprietary agency tooling that you lose access to the day you off-board. The difference is the difference between owning an asset and renting a result.
Question 14: Do you build on our infrastructure or your proprietary platform?
The correct answer is your infrastructure — your CMS, your schema implementation, your domain, your Google Search Console, your citation tracker. An agency that builds entity maps in their own proprietary tool, deploys schema through their own SaaS layer, and runs citation reports inside their own platform is creating a situation where everything they built disappears if the relationship ends. You end the engagement with nothing portable. The Clutch 2024 agency evaluation data shows that vendor lock-in — the inability to take deliverables to a new provider — is among the top three sources of buyer regret in digital marketing engagements. Ask this question early. If the answer is vague or involves proprietary platform mentions, read the infrastructure ownership section carefully before proceeding.
Question 15: What happens to our schema and entity work if we end the engagement?
This is the portable-infrastructure test. Every schema deployment should live as JSON-LD blocks in your CMS — copyable, transferable, not dependent on any agency access credentials. Entity maps should be delivered as documented files you own. Wikidata entries belong to Wikidata — but your documented entity architecture, the decisions behind it, the sameAs identifiers chosen, and the monitoring protocol should all transfer to you. Good answer: "Everything we build is deployed on your infrastructure. At offboarding, you receive a full documentation package covering every schema type deployed, the entity architecture decisions, and the citation tracking query set." Weak answer: "Our platform manages the schema and we'd work out a transition plan if you decided to leave." That means you are on the agency's platform, not yours.
Question 16: Do you use rented SaaS tools that hold data hostage after offboarding?
Some citation tracking platforms and entity management tools require agency login credentials to access reports. If the agency uses such a platform and the relationship ends, your historical citation data — your before-and-after, your proof of ROI — stays on the agency's account. Ask specifically: "If we off-board, do we retain full access to our citation tracking data, query panels, and reporting history?" The answer should be yes, with a concrete description of how that transfer happens. If the answer is conditional, unclear, or relies on "we'd export what we can," you are in a data-hostage situation. This is especially acute if the agency is the primary owner of the tracking platform subscription. Good practice is to be named as an account owner or admin on any third-party tool tracking your data.
Question 17: Can we run our own citation tracker independently?
Yes is the only acceptable answer. The major citation tracking tools — Profound, Goodie, Peec AI, AthenaHQ — all have direct-to-brand subscription options. You should be able to run your own citation tracker independently of any agency, even while the agency has their own tracking instance. This redundancy protects you from data gaps during transitions and gives you independent verification of the agency's reported numbers. An agency that discourages you from having your own tracker is either protecting opacity in their reporting or is uneasy about numbers being independently verified. Neither is a good sign. Good answer: "We'd encourage you to have your own tracker running alongside ours — it gives you independent verification and clean continuity if we ever transition."
Category 5 — Team & Process
AEO strategy is not a commodity skill. The number of practitioners who can authentically combine entity architecture, schema engineering, retrieval content strategy, and citation telemetry interpretation is still small in 2026. How the agency staffs engagements determines whether you get the specialist skills you're paying for or a junior account team following a playbook the senior team wrote once.
Question 18: Is the strategy team the same as the execution team?
In many agencies, the strategy team sells the engagement and the execution team delivers it. The sales call features the senior strategist. The weekly calls feature an account manager. The content is written by a freelancer hired on Upwork. This is not inherently disqualifying, but it requires that the execution team have genuine AEO skills — not just SEO skills with new vocabulary. Ask: who writes the content, who deploys the schema, who interprets the citation reports, and who you will speak to on weekly calls. Get specific names and ask about their background in AEO specifically — not digital marketing generally. Per Semrush's 2025 AI Search Report, AEO as a named specialty was practiced by fewer than 3% of digital marketers as recently as 2024. Be skeptical of large agencies claiming to have large AEO-specialist teams.
Question 19: How many active clients does our account manager handle?
AEO is not a set-and-forget channel. It requires active monitoring of citation movements, content adjustments when retrieval patterns change, and regular schema updates as LLMs update their training data and retrieval models evolve. An account manager carrying more than 8-10 active AEO engagements simultaneously cannot do this well. Beyond a certain ratio, "account management" becomes "checking in when the client asks." Ask this directly. If the answer is "I don't know offhand" or "it varies," that's a flag. Good agencies know their capacity and can tell you a specific number. If the ratio is above 10:1 for an AEO engagement, execution depth is compromised.
Question 20: What does a typical 90-day onboarding look like?
The 90-day plan should be specific and milestone-driven, not a description of engagement phases without dates. You should hear: a citation baseline by day 7, entity audit by day 10, schema deployment on priority pages by day 21, first retrieval-engineered content published by day 35, first citation report showing baseline vs. week-3 status by day 45, full 30-page content plan with prioritized question clusters by day 60, mid-point review call at day 75, 90-day performance review against baseline at day 90. If the 90-day plan is described in general terms — "we start with discovery, then move into execution, then optimize" — you are being given a template, not a plan. The specificity of the 90-day onboarding description is a direct proxy for how structured the agency's delivery is. Vague onboarding leads to vague results.
Red Flags That Disqualify an AEO Agency Immediately
Some patterns are not weak answers — they are outright disqualifying. Walk away from any agency that does any of the following.
Guarantees top citations within 30 days. LLMs re-index on their own schedules. No agency controls when a generative system updates its retrieval weights for your domain. A 30-day citation guarantee is either a fabricated KPI (they'll report something that looks like a citation without it being a meaningful commercial citation) or an indication they don't understand how retrieval indexation works.
Can't define "citation share" or "entity clarity" in a specific, operational way. These are the core measurement concepts of AEO. If the pitch team can't define them on a call without fumbling or retreating to vague descriptions, the delivery team is not operating at this level. This is not a nuanced technical concept. It's the equivalent of an SEO agency not knowing what keyword rank means.
Reports only keyword rankings, not citation metrics. A keyword rank report is a traditional SEO deliverable. If the sample report they show you is a Semrush or Ahrefs position-tracking export with no citation data, they have not rebuilt their reporting infrastructure for AEO. The deliverable is unchanged; only the cover page got a new name.
Owns your schema on their platform, not your CMS. As described in the infrastructure section above, this is the lock-in pattern that strips you of all owned AEO assets at offboarding. The first discovery call should reveal whether they deploy schema through their platform or yours. If it's their platform, every piece of schema deployed becomes a rented asset.
No named author attribution on their own content. This is a credibility signal the agency controls completely. If the agency's own blog posts have no named authors — no byline, no bio, no LinkedIn connection — they are not practicing what they sell. AEO for B2B SaaS requires E-E-A-T signal from named, credentialed authors. An agency that doesn't do this on its own site either doesn't believe it matters or doesn't prioritize it. Either way, it tells you something real.
How to Compare Two Shortlisted AEO Agencies
You've done the 20 questions. Two agencies made the cut and both gave credible answers. Here is how to run the final comparison.
Request a citation baseline report for your domain, not a pitch deck. Ask both agencies to run a 50-query citation audit on your domain before the final proposal. This is a reasonable pre-sales ask for an engagement of this size. The agency's willingness to do it, the speed with which they deliver it, and the quality of the output tell you more about their operational capability than any number of additional calls. An agency that declines because it's "too much work before signing" is telling you about their pre-sales investment level — which correlates with their post-signing investment level.
Request a 90-day plan with milestones, not a vague retainer proposal. The proposal should contain week-by-week deliverables for at least the first 12 weeks, specific citation metrics targeted at day 30, day 60, and day 90, and a named set of pages to be optimized or created in the first 90 days. Retainer proposals that describe "ongoing AEO strategy, content production, and reporting" without specifics are structured to make deliverable accountability as fuzzy as possible. Compare the two proposals side by side on specificity of milestones, not on quality of design or length of capability descriptions.
Ask for client references in your specific vertical — B2B SaaS, not e-commerce. AEO for B2B SaaS is different from AEO for consumer e-commerce. The query set is different, the buyer journey is longer, the content depth requirements are higher, and the citation measurement surfaces are different. An agency with strong e-commerce AEO results has not necessarily demonstrated B2B SaaS AEO capability. Ask for two or three references in B2B SaaS specifically. If the agency can't produce them, they either don't have B2B SaaS clients or don't have B2B SaaS clients whose results they're proud to show. Neither is a positive signal for a B2B SaaS buyer.
How Zealous Digital Answers Every One of These Questions
This post is written for buyers doing genuine due diligence. That means you should be able to apply these questions to us. Here is how Zealous Digital answers the 20.
Measurement. We define citation share as the percentage of tracked queries where Zealous's client brand is cited inside a named generative system's response, tracked at the individual query level. We track ChatGPT, Perplexity, Claude, Gemini, Google AI Overviews, and Microsoft Copilot. We run a 100-200 query citation baseline in week one before any engagement work begins. Weekly ops updates cover what shipped and what indexed. Monthly strategic reports cover citation share trending, entity consistency, and 90-day milestone progress. We separate what we can attribute (specific schema deployments, specific content publications, specific citation appearances) from what correlates but can't be directly attributed.
Entity process. We build entity architecture before touching content. That means an entity audit in week one, Wikidata entry creation or verification in week two, and schema deployment on priority pages before the first new content page is published. Our schema stack for B2B SaaS includes Organization, Service, FAQPage, Article, Person, and BreadcrumbList with full property coverage — not a three-property stub. We have a documented entity disambiguation process. When a client's brand name conflicts with another entity — a different company, a geographic term, a product category — we run a disambiguation audit, add alternateName and disambiguatingDescription signals where applicable, and monitor for merged Knowledge Panel entries quarterly. This is a direct response to a documented real-world problem: brand names shared with unrelated entities in different markets dilute citation share and misdirect LLM attribution. A good AEO agency has a process for this. We do.
Content strategy. We produce content and optimize existing pages — both, based on an inventory audit that runs in parallel with the entity audit. We identify buyer questions through a four-signal model: intent stage, citation gap, search volume, and information gap. We run a Perplexity check and a Google PAA review before writing every BOFU page. This is not an optional step — it's in our content brief template. The AEO Agency service page describes the full content production workflow. We also offer GEO Agency services for clients who need broader generative retrieval coverage beyond direct Q&A.
Infrastructure ownership. Everything we build is deployed on your CMS, your domain, your Search Console account. You are added as owner or admin on any third-party tool tracking your citation data. At offboarding, you receive a documentation package covering every schema type deployed, all entity architecture decisions, and the full citation tracking query set — in formats you can hand to a new agency or in-house team. We actively encourage clients to run a parallel citation tracker. If you want to understand what that looks like in practice, Entity Building describes the infrastructure ownership model in detail.
Team and process. Strategy and execution are the same team at Zealous Digital. The strategist on your onboarding call is the strategist writing your content briefs and interpreting your citation reports. Account managers carry a maximum of eight active AEO engagements. The 90-day onboarding is milestone-driven: citation baseline by day 7, entity audit complete by day 10, schema deployed on top 20 pages by day 21, first retrieval content published by day 35, first citation comparison report (week 1 vs. week 5) by day 45, full content plan by day 60, mid-point review at day 75, 90-day performance review at day 90.
If you want to see how Zealous Digital prices and scopes AEO engagements, the AEO Agency cost post covers the full pricing model. If you want to understand what AEO is before evaluating agencies, What Is an AEO Agency is the right starting point. External verification sources: Clutch.co agency ratings for independent client reviews, and Gartner's 2024 future of search research for the underlying market data.
Frequently Asked Questions
What is the single most important question to ask an AEO agency before signing? Ask them to show you a citation baseline report for a client in a comparable industry. Not a deck, not a case study with vague percentage improvements — an actual report showing citation share per AI system, per query, before and after a defined engagement period. If they can't produce one, they either don't have one or don't want you to see one. Either answer tells you what you need to know.
How is hiring an AEO agency different from hiring a traditional SEO agency? The evaluation criteria are different because the deliverables are different. Traditional SEO agencies are evaluated on keyword strategy, content quality, link-building approach, and reporting dashboards — all of which have 25 years of buyer fluency behind them. AEO agencies need to be evaluated on citation measurement methodology, entity architecture process, schema deployment depth, content information-gain protocol, and infrastructure ownership model. Most buyers don't have fluency on these dimensions yet, which is exactly why AEO agencies can fake the category without being caught until month four of a retainer.
How many agencies genuinely do AEO vs. rebrand SEO? Per Semrush's 2025 AI Search Report, fewer than 3% of digital marketers named AEO as a primary specialty in 2024. The number has grown since then, but the category is still new enough that the majority of agencies using the term "AEO" in their positioning are traditional SEO shops who have added citation language to their pitch without rebuilding their delivery infrastructure. The 20 questions in this post are specifically designed to detect that gap.
What should 90-day AEO engagement milestones look like? Day 7: citation baseline delivered. Day 10: entity audit complete, Wikidata entry verified or created. Day 21: schema deployed on priority pages. Day 35: first retrieval-optimized content published. Day 45: first citation comparison report. Day 60: full content plan delivered. Day 75: mid-point strategic review. Day 90: 90-day performance review against baseline. Any 90-day plan that can't be described at this level of specificity is not a plan — it is a description of activity categories.
What does good citation share growth actually look like for a B2B SaaS brand? A credible engagement targeting a mid-competition B2B SaaS category should show measurable citation share lift on a 50-query panel by day 45-60. "Measurable" means citations on queries where none existed at baseline, not ranking improvement on traditional search. By day 90, a strong engagement will show citations on 15-25% of the monitored query set. By month six, well-executed AEO work typically shows citations on 40-60% of a targeted query set for the brand's core category — depending on competitive intensity and content investment level.
Is it worth asking AEO agencies for references in my vertical? Yes, and this is one of the most underused evaluation steps in the category. An agency's e-commerce AEO results, consumer brand AEO work, or local business AEO outcomes tell you very little about their B2B SaaS capability. The query types are different (commercial intent long-tail vs. informational), the content depth is different (technical evaluation content vs. product pages), and the citation surfaces matter differently (Perplexity and ChatGPT dominate B2B SaaS buyer research more than AI Overviews in some categories). Ask for two or three B2B SaaS references specifically — and call them.