# Reports & Outputs

## Section Overview

This section explains how to read and interpret your AI LocalRank audit report — understanding scores, findings, and diagnostic outputs.
## How to Read Your Report

### Report Structure
Your AI LocalRank report is organized into distinct sections, each providing a different view of your AI visibility:
- **Overview Dashboard** — High-level summary across all platforms
- **Platform Cards** — Per-platform confidence scores and details (ChatGPT, Perplexity, Gemini, Claude, Grok)
- **Module Sections** — Detailed diagnostics for each analysis area (Entity, Access, Listings, Schema, AI Policy, Intent)
- **Power Plan** — Prioritized recommendations organized by fix type
### Reading Flow

1. **Start with the Overview** — Get the high-level picture
2. **Review Platform Scores** — Understand per-platform visibility
3. **Explore Modules** — Investigate specific diagnostic areas
4. **Check the Power Plan** — See what actions are recommended
## Understanding Confidence Scores

### What Confidence Means
Confidence scores represent how likely an AI platform is to answer questions about your business accurately and helpfully.
| Score Range | Interpretation |
|---|---|
| 80-100% | High confidence — AI can answer most questions accurately |
| 60-79% | Moderate confidence — AI can answer but may hedge or have gaps |
| 40-59% | Low confidence — AI will likely give partial or uncertain answers |
| 0-39% | Minimal confidence — AI will likely omit or give incorrect information |
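The bands in the table above can be expressed as a small lookup. The `confidence_band` helper below is a hypothetical illustration, not part of the AI LocalRank product; it simply mirrors the thresholds shown:

```python
def confidence_band(score: float) -> str:
    """Map a 0-100 confidence score to its interpretation band.

    Thresholds mirror the report's score-range table; this helper
    is an illustrative assumption, not a product API.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 80:
        return "High confidence"
    if score >= 60:
        return "Moderate confidence"
    if score >= 40:
        return "Low confidence"
    return "Minimal confidence"
```

A score of 79 and a score of 60 land in the same band; the bands are coarse interpretive buckets, not precise measurements.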
### What Confidence Is NOT
- Not a ranking — It does not indicate your position relative to competitors
- Not a guarantee — AI platforms control their own behavior
- Not permanent — Scores change as your data and AI platforms evolve
- Not uniform — Different platforms may have different confidence levels
### Score Components
Each confidence score is derived from the DxExA framework:
Confidence = D × E × A
Where:
- D = Discoverability (Can AI find you?)
- E = Evidence (Can AI answer questions?)
- A = Actionability (Can AI help users act?)
Your report shows how each component contributes to the final score.
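A minimal sketch of the multiplicative scoring, assuming each component is expressed as a fraction from 0 to 1 (the report's internal scaling and weighting may differ). The key property of multiplication is that one weak component caps the whole score:

```python
def confidence(d: float, e: float, a: float) -> float:
    """Combine the DxExA components into a single confidence score.

    d, e, a are each a fraction from 0.0 to 1.0:
      d -- Discoverability (can AI find you?)
      e -- Evidence (can AI answer questions?)
      a -- Actionability (can AI help users act?)

    Because the components multiply, strength in two areas cannot
    compensate for weakness in the third: 0.9 * 0.9 * 0.2 = 16.2%.
    """
    for name, value in (("d", d), ("e", e), ("a", a)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be between 0.0 and 1.0")
    return round(d * e * a * 100, 1)
```

This is why the Power Plan often prioritizes your weakest component first: raising a 0.2 to a 0.5 more than doubles the overall score.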
## Confidence vs Visibility

### The Distinction
Confidence and Visibility are related but distinct concepts:
| Concept | Definition |
|---|---|
| Confidence | How accurately AI can answer about you |
| Visibility | How often AI includes you in responses |
### Why This Matters
A business can have:
- High confidence, moderate visibility — AI answers accurately when asked specifically, but doesn't proactively recommend
- Moderate confidence, high visibility — AI mentions frequently but with hedging or uncertainty
- High confidence, high visibility — The ideal state: accurate answers and proactive recommendations
### Factors That Affect Each
| Confidence Drivers | Visibility Drivers |
|---|---|
| Data completeness | Competitive position |
| Source consistency | Intent coverage |
| Structural clarity | Authority signals |
| Actionability | Recency of signals |
## Answer Status Categories

### Found / Partial / Missing
For each scenario, AI LocalRank determines an answer status:
**Found** — AI has sufficient information to answer confidently
- All required data is present
- Sources agree
- Action paths are available
**Partial** — AI can answer but with gaps or uncertainty
- Some information is missing
- Sources may conflict
- Answer may be hedged
**Missing** — AI cannot answer reliably
- Critical information is absent
- Too many conflicts
- No confidence in response
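The criteria above can be sketched as a simple classifier. The field representation, conflict threshold, and `answer_status` helper are illustrative assumptions; the actual audit weighs many more signals:

```python
def answer_status(required_fields: dict, conflicts: int) -> str:
    """Classify a scenario as Found / Partial / Missing.

    Simplified illustration: `required_fields` maps field names to
    values (None = absent) and `conflicts` counts disagreements
    between sources.
    """
    missing = [k for k, v in required_fields.items() if v is None]
    if not missing and conflicts == 0:
        return "Found"    # all data present, sources agree
    if len(missing) < len(required_fields) and conflicts <= 2:
        return "Partial"  # gaps or minor conflicts; the answer will hedge
    return "Missing"      # critical data absent or too many conflicts
```

For example, a business with a phone number on file but no hours data would score Partial on an "open now" scenario.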
### Status Distribution
Your report shows the distribution of statuses across scenarios and platforms, indicating where your visibility is strong vs weak.
## Drop Reasons & Failure Modes

### What Are Drop Reasons?
Drop reasons explain why AI fails to recommend or mention your business for specific intents or scenarios.
### Drop Reason Categories
| Category | Description | Example |
|---|---|---|
| DataGap | Required information is missing | No hours data for "open now" query |
| LanguageGap | AI cannot match your terms to user terms | You say "legal services"; users ask for "lawyer" |
| GeoGap | Location mismatch or uncertainty | Your service area unclear |
| OfferGap | AI doesn't know you provide what user wants | No menu for restaurant query |
| AuthorityGap | Competitors have stronger signals | Others have more reviews/citations |
| Misclassification | AI has wrong category for you | Listed as bar when you're a restaurant |
| Hallucination | AI has incorrect information | Wrong hours, wrong address |
### How to Use Drop Reasons
Each drop reason indicates a specific type of fix:
| Drop Reason | Typical Fix |
|---|---|
| DataGap | Add missing information to relevant sources |
| LanguageGap | Align terminology across your digital presence |
| GeoGap | Clarify service area and location |
| OfferGap | Document services/products in structured data |
| AuthorityGap | Build citations and reviews |
| Misclassification | Correct category in GBP and Schema |
| Hallucination | Fix conflicting sources causing incorrect data |
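If you turn a report's drop reasons into a task list, the fix table above maps naturally to a lookup. `DROP_REASON_FIXES` and `fixes_for` are hypothetical names for illustration, not a product API:

```python
# Mirrors the "Typical Fix" table above; illustrative only.
DROP_REASON_FIXES = {
    "DataGap": "Add missing information to relevant sources",
    "LanguageGap": "Align terminology across your digital presence",
    "GeoGap": "Clarify service area and location",
    "OfferGap": "Document services/products in structured data",
    "AuthorityGap": "Build citations and reviews",
    "Misclassification": "Correct category in GBP and Schema",
    "Hallucination": "Fix conflicting sources causing incorrect data",
}

def fixes_for(drop_reasons: list[str]) -> list[str]:
    """Return the typical fix for each drop reason, skipping unknowns."""
    return [DROP_REASON_FIXES[r] for r in drop_reasons if r in DROP_REASON_FIXES]
```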
## "What If" Scenarios

### Purpose
"What If" projections show what AI would likely say if specific issues were fixed.
### How They Work
For significant drop reasons, AI LocalRank generates a projection:
**Current State:** "I don't have reliable information about [Business]'s hours."

**What If (hours data added):** "[Business] is open today until 9 PM. You can reach them at [phone]."
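A projection pair like the one above can be sketched as a simple template. The `what_if_projection` helper and its wording are an illustrative assumption, not the report's actual generator:

```python
def what_if_projection(business: str, phone: str, closing: str) -> dict:
    """Build an illustrative before/after pair for a missing-hours DataGap."""
    return {
        "current": f"I don't have reliable information about {business}'s hours.",
        "what_if": (
            f"{business} is open today until {closing}. "
            f"You can reach them at {phone}."
        ),
    }
```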
### Interpretation
- What Ifs are projections, not guarantees
- They illustrate the potential impact of fixes
- They help prioritize which issues to address first
- They show what's possible with improved data
## Platform-Specific Insights

### Why Platforms Differ
Your report may show different scores across platforms because each AI platform:
- Uses different primary data sources
- Weights signals differently
- Has distinct behaviors for uncertainty
- Prioritizes different types of corroboration
### Reading Platform Cards
Each platform card shows:
| Element | What It Tells You |
|---|---|
| Confidence Score | Overall platform-specific visibility |
| Scenario Results | How specific questions would be answered |
| Key Drivers | What's helping or hurting this platform |
| Recommendations | Platform-specific improvement suggestions |
## Agreement Meter

### What It Shows
The Agreement Meter measures how consistently AI platforms would answer about your business.
| Agreement Level | Interpretation |
|---|---|
| High Agreement | Platforms give consistent answers |
| Moderate Agreement | Some variation but core facts align |
| Low Agreement | Platforms give different or conflicting answers |
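One way to approximate the meter: for each fact, take the share of platforms that give the most common answer, then average across facts. The thresholds and the `agreement_level` helper are illustrative assumptions, not the product's actual formula:

```python
from collections import Counter

def agreement_level(platform_answers: dict[str, dict[str, str]]) -> str:
    """Estimate an agreement level from per-platform answers.

    `platform_answers` maps platform name -> {fact: answer}. For each
    fact, take the share of platforms agreeing with the modal answer,
    then average those shares across facts.
    """
    facts = {f for answers in platform_answers.values() for f in answers}
    shares = []
    for fact in facts:
        values = [a[fact] for a in platform_answers.values() if fact in a]
        top_count = Counter(values).most_common(1)[0][1]
        shares.append(top_count / len(values))
    avg = sum(shares) / len(shares)
    if avg >= 0.9:
        return "High Agreement"
    if avg >= 0.6:
        return "Moderate Agreement"
    return "Low Agreement"
```

If three platforms report the same hours but a fourth reports different ones, that single disagreement is enough to pull the meter down, which is exactly the signal that a stale source needs attention.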
### Why Agreement Matters
- High agreement suggests stable, reliable AI visibility
- Low agreement indicates conflicts or gaps in your data
- Platform-specific disagreement points to which sources need attention
## Diagnostic Flags

### What They Are
Diagnostic flags are the top issues affecting your AI visibility, prioritized by impact.
### Flag Structure
Each flag includes:
| Element | Description |
|---|---|
| Issue Title | What the problem is |
| Severity | How much it affects visibility |
| Affected Platforms | Which platforms are impacted |
| Evidence | The data behind the finding |
| Recommendation | What to do about it |
## Evidence Views

### What Evidence Shows
Evidence views display the actual data that drives your scores and findings.
### Types of Evidence
| Evidence Type | What It Contains |
|---|---|
| Source Data | Raw information from GBP, website, directories |
| Schema Markup | Structured data found on your website |
| Directory Listings | Where you appear and with what information |
| Citation Mentions | External references to your business |
| Conflict Details | Specific disagreements between sources |
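As a concrete example of the Schema Markup evidence type, here is a minimal LocalBusiness-style JSON-LD object (a `Restaurant` in schema.org vocabulary), built with Python's `json` module. All values are placeholders:

```python
import json

# Minimal JSON-LD of the kind a Schema Markup evidence view surfaces.
# "@context", "@type", and the property names are schema.org vocabulary;
# the business details are placeholder values.
local_business = {
    "@context": "https://schema.org",
    "@type": "Restaurant",
    "name": "Example Bistro",
    "telephone": "+1-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "postalCode": "12345",
    },
    "openingHours": "Mo-Su 11:00-21:00",
}

# On a web page this would be embedded as:
#   <script type="application/ld+json"> ... </script>
print(json.dumps(local_business, indent=2))
```

When the evidence view shows markup like this, check each field against your GBP and website data: a mismatched `telephone` or `openingHours` here is exactly the kind of conflict that lowers confidence.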
### Using Evidence
- Verify findings — Check that the data is accurate
- Identify specifics — See exactly what needs fixing
- Track sources — Know where problems originate
- Confirm fixes — Re-audit to see updated evidence
## How to Interpret Low Scores

### Low Score ≠ Bad Business
A low AI visibility score does not mean your business is bad. It means:
- AI platforms lack the information they need
- Your data may be fragmented or inconsistent
- Technical barriers may prevent AI access
- Competitors may have stronger digital signals
### Response to Low Scores
- Read the diagnostic flags — They explain what's wrong
- Check the Power Plan — It prioritizes fixes
- Focus on high-impact issues — Not everything needs immediate attention
- Re-audit after changes — Verify improvements
## Report Freshness

### Snapshot Model
Your report is a snapshot — a point-in-time view of your AI visibility.
### Why Snapshots
- AI platforms change frequently
- Your data changes over time
- A snapshot provides a stable reference point
- Multiple snapshots enable trend tracking
### Re-Auditing
- After making changes — Verify improvements
- Periodically (quarterly recommended) — Track evolution
- After platform updates — AI behavior may shift