Ask ChatGPT to recommend a company in any B2B category and it will name two or three firms with apparent confidence. Ask again in a fresh session and the same names come back. This is not random. Language models draw on specific, identifiable signals to decide which companies to recommend. Understanding these signals is the difference between being named and being invisible.
It is not about who is "best"
Language models do not evaluate companies the way a human buyer does. They cannot test your product, call your support team, or visit your office. What they can do is read. They process text — billions of documents — and form associations between entities, attributes, and contexts.
When a user asks "who is the best project management tool for startups," the model does not know who is best. It knows which companies are most consistently associated with that query pattern across its training data and real-time retrieval sources. The companies that appear are the ones with the strongest signal footprint — not necessarily the strongest product.
Signal footprint is the sum of all identifiable, structured, and authoritative information about a company that a language model can access and process. A stronger signal footprint leads to more frequent and more prominent AI recommendations.
The six signals that drive AI recommendations
Based on benchmarking hundreds of queries across ChatGPT, Claude, and Perplexity, these are the signal categories that determine whether a company gets recommended.
1. Entity definition
Before a model can recommend you, it needs to know you exist as a distinct entity. This means having a clear, consistent definition across structured sources: Wikidata, Crunchbase, LinkedIn, Google Business Profile, and your own website.
The most important page on your website for AI visibility is your About page. If it is written in encyclopedic, third-person style with your company name, founding year, services, target market, and geography stated clearly in the first paragraph, models can parse it as an entity definition. If it is written in vague first-person marketing language ("we help companies transform"), the model learns nothing specific.
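For illustration, here is an opening paragraph a model can parse as an entity definition (the company is hypothetical): "Acme Analytics is a B2B analytics consultancy founded in 2018 and headquartered in London. It provides data-warehouse implementation and reporting services to early-stage SaaS startups in the UK and EU." Name, founding year, services, target market, and geography are all stated as plain facts in the first two sentences.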
2. Structured data
Schema.org markup — specifically Organization, Service, and FAQPage schemas — gives language models a machine-readable description of what you do. Models with retrieval capabilities (like Perplexity) actively use structured data to ground their responses.
A company with proper Organization schema, Service schema on every service page, and FAQPage schema covering common buyer questions gives the model structured facts to cite. A company with no schema is relying entirely on the model's ability to parse unstructured text — which is less reliable and less specific.
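As a starting point, here is a minimal sketch of an Organization schema, generated with Python so the JSON-LD stays well formed. Every value below (company name, URL, profile links) is a hypothetical placeholder to swap for your own details:

```python
import json

# Minimal Organization schema; all values below are hypothetical placeholders.
organization_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Analytics",
    "url": "https://www.acme-analytics.example",
    "foundingDate": "2018",
    "description": "B2B analytics consultancy serving early-stage SaaS startups in the UK.",
    "sameAs": [
        # Link the same entity across structured sources (signal 1).
        "https://www.linkedin.com/company/acme-analytics",
        "https://www.crunchbase.com/organization/acme-analytics",
    ],
}

# Emit the tag to paste into the <head> of your pages.
print('<script type="application/ld+json">')
print(json.dumps(organization_schema, indent=2))
print("</script>")
```

The sameAs links do double duty: they tie your schema entity to the same entity on LinkedIn and Crunchbase, which is exactly the cross-source consistency that entity definition depends on.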
3. Training data co-occurrence
Language models are trained on massive text corpora. If your company name frequently appears in the same context as your service category — in blog posts, industry publications, community discussions, review platforms — the model forms a strong association. If your company name rarely appears in those contexts, the association is weak or non-existent.
This is why companies with active Reddit presences, published case studies, guest articles in trade publications, and reviews on G2 or Capterra tend to appear in AI recommendations more frequently. Every mention in a relevant context is a data point that reinforces the association.
4. Citation authority
Not all mentions are equal. A mention in a respected industry publication, a podcast interview, or a well-cited article carries more weight than a self-published blog post. Models are trained to recognise authoritative sources, and recommendations from authoritative contexts propagate more strongly.
This is the AI equivalent of traditional domain authority. PR coverage, journalist quotes, and third-party expert mentions create citation signals that models weigh heavily when deciding which companies to name.
5. Review and social proof signals
Review platforms like G2, Capterra, and Trustpilot serve as structured repositories of company evaluations. Models trained on this data — or retrieving from it in real time — use review presence as a trust signal. A company with 50 G2 reviews is more likely to be recommended than one with zero, even if the products are identical.
Volume matters, but so do recency and consistency. A burst of reviews from two years ago is weaker than steady, recent reviews that confirm the company is active and relevant.
6. Content architecture
The way your content is structured affects whether models can extract useful answers from it. Content that mirrors how people ask AI questions — with clear H2 headings, self-contained sections, and direct answers — is more likely to be surfaced and cited.
An article titled "7 Signs Your Startup Needs a Fractional CFO" with clear, numbered sections directly maps to how buyers query AI tools. An article titled "Thoughts on Financial Strategy" with rambling paragraphs does not.
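A quick illustration (the section is hypothetical): under an H2 that reads "What does a fractional CFO do?", the opening sentence answers directly: "A fractional CFO provides part-time, senior-level financial leadership to companies that are not ready for a full-time hire." A question-shaped heading followed by a self-contained answer is the pairing models can lift and cite.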
Why your competitor appears and you do not
When we audit companies that are invisible to AI, the pattern is almost always the same: they have a website and maybe a LinkedIn page, but nothing else. No Wikidata entry. No Crunchbase profile. No schema markup. No G2 reviews. No community presence. No third-party citations.
Their competitor, meanwhile, has built — often by accident rather than strategy — a signal footprint across multiple sources. The competitor has been mentioned in a few Reddit threads, has a Crunchbase profile from a funding announcement, has 30 G2 reviews, and has an About page that clearly states what they do.
The gap is rarely about quality of service. It is about quantity and quality of signal.
How to audit your own signal footprint
Start with a simple test. Open ChatGPT, Claude, and Perplexity in fresh, unpersonalised sessions. Ask each one to recommend a company in your category for your target customer in your geography. Do this with five different phrasings. Record who gets named.
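If you prefer to run the test programmatically, a sketch along these lines does the same job over the APIs, assuming you have keys. API calls are stateless, which matches the fresh-session requirement. The model names, phrasings, and candidate companies below are all placeholders, and Perplexity is reached through its OpenAI-compatible endpoint:

```python
# A minimal benchmark sketch. Model names, phrasings, and candidates are
# placeholders; check each provider's current docs before relying on them.
from openai import OpenAI

PHRASINGS = [
    "Recommend a project management tool for UK startups.",
    "What is the best project management tool for an early-stage startup?",
    "Which project management software should a small UK startup use?",
    "I'm a startup founder in London. What project management tool do you suggest?",
    "Top project management tools for startups?",
]
CANDIDATES = ["YourCo", "Competitor A", "Competitor B"]  # names to look for

def run_benchmark(client: OpenAI, model: str) -> dict:
    """Ask each phrasing once and count how often each candidate is named."""
    counts = {name: 0 for name in CANDIDATES}
    for prompt in PHRASINGS:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        for name in CANDIDATES:
            if name.lower() in reply.lower():
                counts[name] += 1
    return counts

# OpenAI directly; Perplexity via its OpenAI-compatible endpoint.
# Claude uses the separate `anthropic` SDK; the loop logic is identical.
print("ChatGPT:", run_benchmark(OpenAI(), "gpt-4o"))
print("Perplexity:", run_benchmark(
    OpenAI(base_url="https://api.perplexity.ai", api_key="YOUR_PPLX_KEY"),
    "sonar",
))
```

Substring matching is crude — it misses paraphrases and abbreviations — but it is enough to see whether your name appears at all, which is the question this audit answers.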
Then check your signal sources:
- Do you have a Wikidata entry?
- Is your Crunchbase profile complete and current?
- Does your website have Organization and Service schema markup?
- Is your About page written in encyclopedic, third-person style?
- Do you have a robots.txt that allows AI crawlers (GPTBot, Claude-Web, PerplexityBot)? (See the check sketch below.)
- Do you have reviews on G2, Capterra, or Trustpilot?
- Has your company been mentioned in Reddit threads relevant to your category?
- Do you have third-party citations (press, podcasts, guest articles)?
Every "no" on that list is a missing signal that your competitors may already have in place.
NamedBy offers a free AI visibility check that benchmarks your company against competitors across ChatGPT, Claude, and Perplexity. It takes two minutes and shows you exactly where you stand.
Frequently asked questions
Can I pay to be recommended by AI?
No. As of 2026, there is no advertising product that places your company in AI-generated recommendations inside the core chat interfaces of ChatGPT, Claude, or Perplexity. Recommendations are determined by the model's training data, retrieval sources, and the strength of your entity signals across the web. This is why organic signal building is the only path to AI visibility.
Which signal matters most for AI recommendations?
Entity definition is the foundation. If a language model does not recognise your company as a distinct entity, no amount of content or reviews will help. Start with a clear entity definition (About page, Wikidata, Crunchbase, schema markup), then build outward to reviews, citations, and community presence.
Do different AI platforms weight signals differently?
Yes. Perplexity uses real-time web retrieval, so structured data and recently published content have more immediate impact. ChatGPT and Claude rely more heavily on training data, so signals embedded in widely distributed text (reviews, community discussions, publications) carry more weight. A comprehensive approach optimises for all three.
How quickly can I start appearing in AI recommendations?
Changes that affect real-time retrieval (structured data, unblocking AI crawlers, publishing an llms.txt file) can show effects within days to weeks. Changes that affect training data (community mentions, reviews, publications) take longer and typically compound over 2-3 months. Most companies see their first measurable improvement within 30-60 days of systematic implementation.
Find out where your signals are missing
The free visibility check benchmarks your company against competitors across three AI platforms. Two minutes, no signup.
Free Visibility Check →