Definition

The AI Visibility Index (AVI)

AVI answers one question: when people ask AI systems for the “best” option in your market, do they pick you?

We run a standardized set of high-intent prompts (industry + location), measure your inclusion across selected AI surfaces, and turn that into a 0–100 score with an action roadmap.
The audit itself runs inside /app after checkout (token-gated) and includes a PDF export.

What we measure

Inclusion, not vibes

For each prompt and each surface, we record whether your brand is included as a recommendation (or meaningfully cited), then aggregate results across the prompt pack.

  • Per-surface outcomes (ChatGPT / Gemini / Perplexity, etc.)
  • Coverage (how many surfaces include you at least once)
  • Confidence (how stable and unambiguous the measurement is)
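The aggregation above can be sketched in a few lines. This is illustrative only: the field names, weights, and blend are hypothetical placeholders, and the real scoring model and evidence rules live in Methodology.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    """One recorded result: was the brand included for this prompt on this surface?"""
    prompt: str
    surface: str   # e.g. "chatgpt", "gemini", "perplexity"
    included: bool

def avi_sketch(outcomes: list[Outcome]) -> dict:
    """Aggregate inclusion outcomes into a 0-100 score (hypothetical weights)."""
    surfaces = {o.surface for o in outcomes}
    # Per-surface inclusion rate across the prompt pack
    per_surface = {
        s: sum(o.included for o in outcomes if o.surface == s)
           / sum(1 for o in outcomes if o.surface == s)
        for s in surfaces
    }
    # Coverage: fraction of surfaces that include the brand at least once
    coverage = sum(1 for r in per_surface.values() if r > 0) / len(surfaces)
    # Naive blend of mean inclusion and coverage -- placeholder weights, not the real model
    score = round(100 * (0.7 * sum(per_surface.values()) / len(surfaces)
                         + 0.3 * coverage))
    return {"score": score, "per_surface": per_surface, "coverage": coverage}
```

The point is not the formula but the shape: every number in the report traces back to concrete per-prompt, per-surface outcomes that can be re-measured later.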
What we do NOT claim

No fake guarantees

AVI is not a promise of traffic or rankings. AI systems change over time and behave differently by context. The value is a repeatable harness, so improvements can actually be tested.

  • Not an “SEO score in disguise”
  • Not a one-off screenshot
  • Not “industry standard” marketing claims

Why businesses usually don’t show up

Most misses are structural: identity clarity + trust signals + citations.

  • Entity ambiguity: the system can’t confidently match your brand to your site/location.
  • Weak trust signals: unclear About/Contact, missing structured data, thin service pages.
  • Citation footprint: not enough reputable mentions for the model to “risk” recommending you.
  • Competitive dominance: competitors have stronger signals for the same intents.
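The “missing structured data” and entity-ambiguity items usually come down to a machine-readable brand-to-site-to-location mapping. A minimal sketch of a schema.org LocalBusiness JSON-LD payload, built in Python (every value is a placeholder, not a real business):

```python
import json

# Placeholder LocalBusiness entity. Served inside a
# <script type="application/ld+json"> tag, a block like this gives
# retrieval systems an unambiguous brand -> site -> location mapping.
entity = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co",          # placeholder brand
    "url": "https://example.com",           # placeholder site
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Austin",        # placeholder location
        "addressRegion": "TX",
    },
    # Reputable third-party profiles feed the citation footprint
    "sameAs": ["https://www.yelp.com/biz/example-plumbing"],
}

payload = json.dumps(entity, indent=2)
```

Which properties matter most varies by industry; the constant is that name, url, and address agree exactly with what the rest of the web says about you.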

How to use AVI (the practical loop)

Run → fix the top 2–3 items → re-run to confirm movement.

1) Measure: you get a baseline score + surface breakdown.
2) Fix: start with identity + trust signals (highest leverage).
3) Verify: re-run to make sure it actually changed outcomes.
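The verify step is just a diff between two runs. A hypothetical helper, assuming each run exports a surface-to-inclusion-rate mapping (the export format here is assumed, not the product’s actual schema):

```python
def surface_deltas(baseline: dict[str, float],
                   rerun: dict[str, float]) -> dict[str, float]:
    """Per-surface change in inclusion rate between a baseline run and a re-run.

    Positive deltas mean the fixes moved outcomes on that surface;
    surfaces missing from a run are treated as 0.0 inclusion.
    """
    return {s: round(rerun.get(s, 0.0) - baseline.get(s, 0.0), 2)
            for s in set(baseline) | set(rerun)}
```

Comparing like-for-like (same prompt pack, same surfaces) is what makes the movement attributable to the fixes rather than to model drift.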
The full scoring model, evidence rules, and the glossary live in Methodology.