What We Intentionally Do Not Expose
Section Overview
This section provides transparency about what AI LocalRank intentionally abstracts and does not publish — and the reasons behind these decisions.
Philosophy of Abstraction
Why We Abstract
AI LocalRank is designed to help users understand and improve their AI visibility, not to provide a technical blueprint that could be:
- Gamed rather than genuinely improved
- Misused to manipulate rather than fix
- Misinterpreted without proper context
- Rendered obsolete as we improve the system
The Balance
We balance transparency with responsibility:
WHAT WE EXPOSE
- Conceptual frameworks (North Star 4.1, DxExA)
- Module purposes and what they measure
- Evidence categories and how they're used
- Platform behavior models (conceptual)
- Score interpretations and what they mean
- Recommendation logic (fix classification)
- Methodology principles
- Limitations and boundaries
WHAT WE ABSTRACT
- Exact numerical weights and formulas
- Specific threshold values
- Internal prompt engineering
- Model routing logic
- Proprietary penalty calculations
- Platform lens implementation details
- Score band cutoff values
What We Do Not Expose
1. Exact Scoring Formulas and Weights
What is abstracted:
- Precise numerical weights for each signal
- Exact multiplication factors in DxExA
- Specific contribution percentages
Why:
- Weights are calibrated through ongoing research
- Publishing exact weights invites gaming
- Different business verticals may use different weights
- We refine weights as we learn
What we do provide:
- Conceptual importance levels (high/moderate/low)
- Relative priority of signals
- Understanding of what matters and why
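As a purely illustrative sketch of this approach (every signal name below is hypothetical, not our actual signal set), relative importance can be communicated as qualitative levels without exposing numeric weights:

```python
from enum import Enum

class Importance(Enum):
    """Qualitative levels -- what we publish instead of numeric weights."""
    HIGH = "high"
    MODERATE = "moderate"
    LOW = "low"

# Hypothetical signal names, for illustration only: users learn the
# relative priority of each signal, while the calibrated weight behind
# each level stays internal.
SIGNAL_IMPORTANCE = {
    "review_volume": Importance.HIGH,
    "structured_data": Importance.MODERATE,
    "social_mentions": Importance.LOW,
}
```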
2. Threshold Values and Cutoffs
What is abstracted:
- Score thresholds for Found/Partial/Missing
- Conflict severity cutoffs
- Quality tier boundaries
- Penalty application thresholds
Why:
- Thresholds are tuned to produce meaningful distinctions
- Publishing thresholds encourages optimizing to the cutoff rather than genuine improvement
- Thresholds may differ by business type
- We adjust thresholds as we improve accuracy
What we do provide:
- Score band meanings (what high/medium/low means)
- Severity level explanations
- Qualitative interpretation guidance
3. Internal Prompt Engineering
What is abstracted:
- Prompts used for any LLM-based processing
- Prompt templates for narrative generation
- Instruction sets for reasoning layer
Why:
- Prompts are proprietary intellectual property
- Prompt details enable copying rather than building
- Prompts are frequently refined
- Context of prompts matters as much as content
What we do provide:
- Explanation of reasoning layer purpose
- Clear separation of deterministic vs reasoning outputs
- Transparency that LLMs assist explanation, not computation
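A minimal sketch of this separation, assuming toy stand-ins for both layers (the real signal logic and narrative generation are abstracted):

```python
def compute_score(evidence: dict) -> float:
    """Deterministic layer: pure and rule-based, so the same evidence
    always yields the same score. (Toy stand-in for the real logic.)"""
    present = sum(1 for value in evidence.values() if value)
    return round(present / max(len(evidence), 1), 2)

def explain(score: float, evidence: dict) -> str:
    """Reasoning layer: in production an LLM drafts this narrative.
    It receives the finished score as input and can never change it."""
    missing = [name for name, value in evidence.items() if not value]
    if missing:
        return f"Score {score}: weakened by missing signals {missing}"
    return f"Score {score}: all observed signals present"

def diagnose(evidence: dict) -> dict:
    score = compute_score(evidence)  # the number is fixed here, deterministically
    return {"score": score, "explanation": explain(score, evidence)}
```

The design point is the one-way dependency: the explanation consumes the score, and nothing flows back into it.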
4. Model Routing Logic
What is abstracted:
- Which models are used for which purposes
- Model selection criteria
- Fallback and retry logic
- Provider routing decisions
Why:
- Routing is an implementation detail
- Providers and models may change
- Routing optimization is ongoing
- Users need not know implementation specifics
What we do provide:
- Transparency that we use AI for some interpretation
- Clear statement that core scoring is deterministic
5. Platform Lens Implementation Details
What is abstracted:
- Exact signal weights per platform
- Platform-specific aggregation formulas
- Penalty application specifics
- Recency decay parameters
Why:
- Platform models are our core calibration work
- Publishing them would enable competitors to replicate our models
- Platform behaviors change; our models adapt
- Exact implementation is less important than conceptual understanding
What we do provide:
- Conceptual description of each platform's priorities
- Relative signal importance per platform
- Explanation of why platforms differ
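As an illustration of the conceptual level we do publish (the platform names are real products, but these priority labels are invented placeholders, not our calibrated lens models):

```python
# Invented placeholder labels -- illustrative of the *kind* of guidance
# we provide, not the actual lens calibration.
PLATFORM_PRIORITIES = {
    "Perplexity": {"citations": "high", "reviews": "high", "recency": "high"},
    "ChatGPT":    {"citations": "high", "reviews": "moderate", "recency": "moderate"},
    "Gemini":     {"local_listings": "high", "reviews": "moderate", "recency": "high"},
}
```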
6. Proprietary Penalty Calculations
What is abstracted:
- Specific penalty values for issues
- Penalty stacking logic
- Minimum floor calculations
- Interaction effects between penalties
Why:
- Penalty tuning is calibration work
- Publishing penalties invites gaming
- Penalties may differ by context
- We refine penalty logic based on validation
What we do provide:
- Explanation that issues reduce confidence
- Impact level communication (Critical/High/Medium/Low)
- Understanding that the multiplicative model means a single failing factor can collapse overall confidence (illustrated in the sketch below)
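A toy calculation makes the multiplicative point concrete; the factor values here are invented for illustration, since actual penalty values are abstracted:

```python
import math

def confidence(factors: list[float]) -> float:
    """Toy multiplicative model: overall confidence is the product of
    per-dimension factors in [0, 1]. Real factor values are abstracted."""
    return math.prod(factors)

healthy = confidence([0.90, 0.85, 0.90])  # ~0.69 -- every dimension solid
one_bad = confidence([0.90, 0.85, 0.10])  # ~0.08 -- one critical failure

# The near-zero factor drags the whole product toward zero. An additive
# average (~0.62 for the second set) would have hidden that collapse.
```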
Why This Approach Serves Users
Encourages Genuine Improvement
When exact formulas are published, users may:
- Optimize to pass specific thresholds
- Miss the forest for the trees
- Focus on gaming rather than fixing
- Chase numbers rather than quality
By abstracting details, we encourage:
- Addressing root causes
- Improving actual digital presence
- Sustainable, meaningful improvements
- Understanding over manipulation
Protects System Integrity
Publishing implementation details could:
- Enable competitors to copy without investing in research
- Allow bad actors to game the system
- Create a race to the bottom on threshold optimization
- Undermine the value of honest diagnosis
Allows Continuous Improvement
We continuously improve AI LocalRank. Abstraction allows us to:
- Refine weights based on validation
- Adjust thresholds as platforms evolve
- Improve models without breaking user expectations
- Upgrade underlying systems without disrupting users
What You Can Trust
Despite abstractions, you can trust that:
| Aspect | Guarantee |
|---|---|
| Evidence-based | All scores derive from observable data |
| Reproducible | Same inputs produce same outputs |
| Explained | Every score has a diagnostic explanation |
| Honest | We acknowledge limitations |
| Actionable | Recommendations are specific and classified |
| Transparent about method | We explain how scores are built, even if not the exact weights |
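The "reproducible" guarantee is, in principle, checkable: any deterministic scorer must satisfy a property like the sketch below (`get_score` is a hypothetical stand-in, not a published API):

```python
def assert_reproducible(get_score, evidence_snapshot: dict) -> None:
    """Hypothetical property check: scoring the same frozen evidence
    twice must return identical results if scoring is deterministic."""
    first = get_score(evidence_snapshot)
    second = get_score(evidence_snapshot)
    assert first == second, "deterministic scoring violated"
```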
Questions We Can Answer
Even with abstractions, we can tell you:
| Question | Answer Available |
|---|---|
| "Why is my score low?" | Yes — diagnostic flags explain |
| "What should I fix?" | Yes — Power Plan prioritizes |
| "Will this fix help?" | Yes — impact level estimates |
| "Why do platforms differ?" | Yes — platform priorities explained |
| "What data drives this?" | Yes — evidence views show sources |
Questions We Do Not Answer
Some questions require abstracted information:
| Question | Why Not Answered |
|---|---|
| "What exact score would I get if I add Schema?" | Depends on many factors; we show impact level |
| "What weight does Perplexity give to reviews?" | Exact weight is proprietary; we explain it matters highly |
| "What's the threshold for Found vs Partial?" | Thresholds are calibrated and may vary |
| "What prompt do you use for explanations?" | Prompts are proprietary |
Our Commitment
We commit to:
- Explaining what matters — You know what signals drive visibility
- Providing actionable guidance — You know what to fix and why
- Being honest about limits — You know what we can and cannot do
- Improving transparently — We update docs when methodology changes
- Protecting user interests — Abstraction serves users, not just us
Conclusion
AI LocalRank is designed to help you understand and improve your AI visibility through honest, evidence-based diagnosis. We abstract implementation details not to hide, but to ensure the system serves its purpose: helping businesses legitimately improve how AI perceives them.
What we share is everything you need to understand your situation and take action. What we abstract protects the integrity of the system and encourages genuine improvement over gaming.
This concludes the AI LocalRank Documentation.
Document Version: 1.0
Last Updated: January 2026
Status: Official Public Documentation