The full breakdown

Why AI Ignores Businesses

And how to know when you’ve actually fixed it.

AI doesn’t skip you at random. It finds a reason. Most articles stop there. The harder question is how you know your fix actually worked.

AI doesn’t need to be wrong to ignore you. It just needs to be unsure.

01 · Diagnosis

How AI decides who to recommend

Before AI names a business, it runs a confidence check. Here is what that check actually involves — and why your competitor often clears it when you don't.


Why does AI skip my business when it recommends my competitor?

AI isn’t picking the best business. It’s picking the one it can recommend with the least risk.

When someone asks “best dentist near me,” the AI has to name a specific business. To do that confidently, it needs to verify three things: who the business is, that it actually does what the question asks, and that other sources back that up.

If your competitor is recommended and you’re not, it usually isn’t because they’re better. It’s because their evidence is cleaner. Their phone number matches everywhere. Their website says specifically what they do. Three or four other sites confirm it.

You may have more experience, more clients, a better reputation. None of that helps if AI can’t confirm it from the open web. The competitor wins because the competitor is less uncertain — not because they’re better.

How does ChatGPT actually decide which business to recommend?

ChatGPT goes through four checks before it names a business.

Identity. Can it confirm what your business is — the name, location, contact, type of service? Conflicting info here ends the conversation early.

Capability. Does what you do match what the person asked for? “We do everything” matches almost nothing. “We handle emergency leaks 24/7 in [neighborhood]” matches a specific question.

Corroboration. Do independent sources back this up? Your own website saying you’re great isn’t evidence. A directory, a local article, a niche industry platform saying it — that’s evidence.

Access. Can AI actually read your website? If your robots.txt blocks AI crawlers, or your content only loads in JavaScript, AI has nothing to work with.

A business that passes all four gets recommended. Miss one, and AI usually moves on to the next candidate that doesn’t.
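
The four checks above can be sketched as a toy gate. This is only an illustration of the logic described here, not a claim about how any AI platform is actually implemented; the field names and the source threshold are placeholders:

```python
from dataclasses import dataclass

@dataclass
class BusinessEvidence:
    """What an AI can verify about a business from the open web."""
    consistent_identity: bool   # name, location, contact agree across sources
    specific_capability: bool   # services described concretely, not "we do everything"
    independent_sources: int    # third-party sites confirming the same facts
    crawlable: bool             # robots.txt allows AI bots, content in plain HTML

def passes_confidence_check(b: BusinessEvidence) -> bool:
    """Toy version of the four checks: identity, capability, corroboration, access."""
    return (
        b.consistent_identity
        and b.specific_capability
        and b.independent_sources >= 3   # illustrative threshold
        and b.crawlable
    )
```

Note that the gate is all-or-nothing: failing any single check fails the whole thing, which mirrors the "miss one and AI moves on" behavior.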

Why does Gemini recommend me but ChatGPT ignores me — or vice versa?

Each AI platform draws from different sources and weighs them differently.

Gemini leans heavily on Google’s own data — Google Business Profile, Google Maps, Google’s index. ChatGPT and Perplexity can’t fully access that, so they rely more on what’s directly on your website plus what they encounter elsewhere on the open web. Claude tends to be more conservative and pulls from a narrower set of high-trust sources.

So a fix on your Google Business Profile might surface on Gemini in days, on ChatGPT in weeks, and on Claude only after multiple independent sources confirm it.

This is why “did my fix work?” can’t be answered from one platform. A fix can land on one platform before another. If you only check ChatGPT and see no change, you’ll wrongly conclude nothing moved. The honest answer requires checking each platform separately and comparing each against the same starting point.

02 · The Input Side

What's actually broken

Most of the reasons AI skips a business come down to the information AI reads about it. These are the specific input-side failures we see most often.


Why isn't being #1 on Google enough?

Search ranks pages. AI picks one business.

Google gives the user ten links and lets them choose. Being on page one is enough — the user does the rest. AI doesn’t work that way. It picks one business and names it. If you’re not the one named, the user usually never knows you exist.

Different inputs feed different decisions. Google’s ranking weighs link authority, on-page SEO, click behavior, freshness. AI weighs entity clarity, source corroboration, and whether your content gives it something specific to actually say about you.

You can rank #1 for “best plumber in [city]” and still be invisible to ChatGPT — because your top-ranking page is full of “trusted professionals since 1998” and “industry-leading service,” which is true of ten thousand other sites. AI can’t quote that. It can quote “24/7 emergency leak repair, $89 service call, serves [three named neighborhoods].”

What blocks AI from reading my website?

Three things, in order of how often we see them.

Robots.txt blocking AI crawlers. Many website platforms block GPTBot, ClaudeBot, or PerplexityBot by default. If they’re blocked, those AIs literally have no data on your site. Check yourdomain.com/robots.txt and look for Disallow next to those bot names.
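
You can run that check programmatically with Python's standard library. The bot names below are the real published user-agent strings for those crawlers; the robots.txt content and example.com are placeholders, so paste in your own:

```python
from urllib.robotparser import RobotFileParser

# Paste the contents of yourdomain.com/robots.txt here
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: Googlebot
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The AI crawler user agents worth checking
for bot in ["GPTBot", "ClaudeBot", "PerplexityBot"]:
    allowed = rp.can_fetch(bot, "https://example.com/")
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'}")
```

With the sample file above, GPTBot reports BLOCKED while the other two are allowed by default, because no rule names them.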

JavaScript-only content. If your service descriptions, hours, or contact info only load after the user’s browser runs JavaScript, AI crawlers usually can’t see them. The page looks fine to a human and empty to a bot.
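
A rough way to approximate what a non-rendering crawler sees: fetch the raw HTML and check whether your key facts appear in it at all. This is a sketch, not a full crawler simulation; the page content and facts are placeholders:

```python
import urllib.request  # used by the commented-out fetch below

def missing_from_raw_html(html: str, facts: list[str]) -> list[str]:
    """Return the facts that do NOT appear in the raw, un-rendered HTML."""
    return [f for f in facts if f not in html]

# Fetch your own page as a crawler would (no JavaScript execution):
#   html = urllib.request.urlopen("https://example.com/").read().decode("utf-8", "replace")
# Illustrated here with a page whose content only loads via JavaScript:
html = "<html><body><div id='app'></div><script src='main.js'></script></body></html>"
print(missing_from_raw_html(html, ["(555) 123-4567", "Mon-Fri 9-5"]))
```

Both facts come back as missing: the phone number and hours exist for a human visitor only after the script runs, so a bot reading the raw HTML never sees them.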

Login walls. Anything important hidden behind a sign-in or paywall is invisible to AI. AI can only cite what’s on the open web.

Less common but worth checking: missing structured data (the JSON-LD that tells AI “this is a LocalBusiness with these hours and this phone number”), and extremely slow page loads that time out before crawlers finish reading.
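
For reference, a minimal LocalBusiness JSON-LD block looks like this. Every value is a placeholder; the vocabulary is schema.org's, and the block belongs inside a script tag of type "application/ld+json" in your page's HTML:

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Plumbing Co.",
  "telephone": "(555) 123-4567",
  "url": "https://example.com",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Example City",
    "addressRegion": "CA",
    "postalCode": "90000"
  },
  "openingHours": "Mo-Fr 08:00-17:00",
  "areaServed": "Example City"
}
```

The point is machine-readability: these fields state the same name, phone, hours, and area in a format AI systems can parse without guessing.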

Why does inconsistent business info make AI skip me?

When AI sees the same business with two different phone numbers, two different addresses, or two different opening times across sources, it treats the conflict as risk.

Recommending the wrong number means the user calls a number that doesn’t work. Recommending the wrong hours means they show up to a closed door. AI doesn’t know which version is right — and it can’t ask. Skipping you is safer than getting it wrong.

This is why a small mismatch causes a big effect. The phone number on your website says (555) 123-4567. Your Yelp page still has the old (555) 987-6543 from before you moved. AI sees two phone numbers for the same business name and address. Confidence drops. Another business with one consistent phone number across five sources becomes the safer recommendation.

The fix is mechanical: pick one canonical version of your business info and propagate it everywhere.
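
Mechanical enough to script. Here is a sketch of that audit, with placeholder listings: it normalizes phone numbers so pure formatting differences don't count as mismatches, then flags every source that disagrees with your canonical record:

```python
import re

def normalize_phone(p: str) -> str:
    """Strip formatting so (555) 123-4567 and 555.123.4567 compare equal."""
    return re.sub(r"\D", "", p)

canonical = {"name": "Example Plumbing Co.", "phone": "(555) 123-4567"}

listings = {
    "website": {"name": "Example Plumbing Co.", "phone": "555-123-4567"},
    "yelp":    {"name": "Example Plumbing Co.", "phone": "(555) 987-6543"},  # stale
}

for source, info in listings.items():
    if normalize_phone(info["phone"]) != normalize_phone(canonical["phone"]):
        print(f"{source}: phone mismatch, update to {canonical['phone']}")
```

Here the website passes despite the different formatting, and only the stale Yelp number gets flagged.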

How many independent sources does AI need before recommending you?

In our tracking, the threshold is usually three to five.

One source isn’t enough. Your own website saying you exist is the same as no evidence — every business has a website. Two sources are barely enough; AI tends to hedge.

Three to five independent sources confirming the same business name, location, and service usually cross the confidence line.

“Independent” matters. Five citations on five blog posts written by the same content agency don’t count as five sources — AI can often tell. What counts:

  • Your own website (one)
  • Google Business Profile or Apple Maps (one)
  • A directory or industry-specific platform — Yelp, Houzz, Avvo, niche directories (one each)
  • A local news article, “best of” roundup, or genuine third-party mention (one each)
  • Active, recent reviews on a platform AI can read (one)

Most businesses recommended consistently across AI platforms have four to seven of these. Most invisible businesses have one or two.

What kind of content does AI actually use — and what does it ignore?

AI ignores marketing copy. It uses specifics.

“Industry-leading service.” “Trusted professionals since 1998.” Ten thousand other websites say the same thing. AI has no reason to pick yours, and no factual claim it could attribute to you anyway.

What AI does use, in order of strength:

  • Specific facts: services offered, neighborhoods served, insurance accepted, languages spoken, years operating.
  • Firsthand explanations: how you actually do the work, not what the work is. A plumber explaining how they diagnose a slab leak is citable. A plumber listing “leak detection” as a service is not.
  • Answers to real questions: the actual questions people ask before hiring in your field — costs, timelines, what to expect, what goes wrong.

The reason: AI has to say something specific when it recommends you. If your content gives it nothing, you don’t get recommended — even if everything else on your site is correct.

03 · The Evidence Layer

Knowing whether a fix worked

The hardest part of AI visibility isn't making changes. It's proving the changes worked. This is where most tools stop and where the real work begins.


What fixes actually move AI recommendations — and which ones don't?

Fixes that move the needle, in rough order of impact:

  1. Make business info consistent everywhere. One canonical name, address, phone, hours, across your site, Google, directories, social.
  2. Add structured data — especially LocalBusiness schema with services, hours, areas served.
  3. Get listed on three to five independent platforms beyond your own site and Google.
  4. Replace marketing copy with specifics — real services, real neighborhoods, real explanations of how you work.
  5. Unblock AI crawlers in robots.txt.
  6. Earn genuine third-party mentions — local press, industry directories, niche platforms.

Fixes that don’t move the needle:

  • Rewriting headlines for “AI tone.”
  • Stuffing pages with question phrases to “match prompts.”
  • Buying directory listings on low-trust networks.
  • Anything sold as “rank you up on ChatGPT” or “trick the AI.”

The pattern: fixes that improve the underlying truth AI reads about your business work. Fixes that try to manipulate AI’s output don’t.

How long does it take for AI to notice a change?

It depends on the AI and what you changed.

  • Real-time browsing models (some ChatGPT modes, Perplexity, Gemini’s grounded responses) can pick up changes within days, sometimes hours.
  • Models that don’t browse rely on training data and indexed knowledge. Your fix may not show up until their next training cycle — weeks or months.
  • Third-party sources (Google, Yelp, directories) propagate at each platform’s own pace. A fix to your hours on Google can surface in Gemini before it surfaces anywhere else.

A realistic timeline for a clean, well-executed fix:

  • Days: real-time browsing platforms.
  • Weeks: indexed but not retrained models.
  • Months: non-browsing models that only pick up your change at their next training cycle.

Most owners check too early, see nothing, and conclude the fix didn’t work. The honest test runs over weeks, not days, and against the same questions you tested before the fix.

Why is it so hard to tell whether a fix moved the needle?

Because three things are changing at the same time as your fix.

The AI itself is changing under you. Models are updated regularly. New training data, tweaked guardrails, different scoring. A change in AI’s answer might be your fix — or might be the model.

The same question doesn’t always get the same answer. Ask “best dentist near me” twice and you can get different businesses. AI introduces variation on purpose. One run isn’t evidence; a pattern across many runs is.

The web is moving too. Your competitor publishes something. A directory updates. A review appears. Any of those can shift AI’s answer independent of anything you did.

This is what people in the AI visibility space call the attribution problem. You can’t cleanly isolate your fix from everything else moving at the same time. The honest answer isn’t “we proved causation.” It’s “we built directional evidence that holds up to scrutiny.”

How do I know if a fix actually worked?

You build a feedback loop. Three pieces, in order:

  1. A baseline before the fix. Same questions, same platforms, same conditions. Without it, every “after” reading is compared to nothing. This is the step most owners skip.
  2. A re-run after the fix. Same questions, same platforms. Not once — several times. Look for patterns: are you mentioned more often? Alongside the right questions? With the correct details?
  3. A trace from change to effect. Which fix moved which answer on which platform. Without this, “AI mentions us more” is unattributable — could be your fix, a model update, or a competitor’s bad week.
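
As a data exercise, that loop reduces to comparing mention rates across repeated runs. This is a sketch; the platform, question, and business names are placeholders, and the run data would come from querying each platform yourself:

```python
def mention_rate(runs: list[list[str]], business: str) -> float:
    """Fraction of runs in which the business was named at all."""
    return sum(business in named for named in runs) / len(runs)

# Each inner list: the businesses an AI named on one run of the same question.
baseline_runs  = [["Competitor A"], ["Competitor A", "Competitor B"], ["Competitor A"]]
after_fix_runs = [["Competitor A", "You"], ["You"], ["Competitor A"]]

before = mention_rate(baseline_runs, "You")    # 0.0
after = mention_rate(after_fix_runs, "You")    # ~0.67
print(f"ChatGPT, 'best plumber in [city]': {before:.0%} -> {after:.0%}")
```

A single run proves nothing, which is why both lists hold several runs of the same question; the movement from 0% to roughly two-thirds is the directional evidence, not any one answer.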

You won’t get clean causation. The reality is non-deterministic. You can get directional evidence — clear enough to show a client, honest enough to hold up.

This is the loop AI LocalRank runs: same questions, locked baseline, before-and-after comparison, fix-by-fix trace. Evidence, not promises.

Evidence, not promises.
Both for AI and for the people you serve.

See what AI actually says about your business across ChatGPT, Gemini, Claude, Perplexity, and Grok — side by side, free.