AI Hallucination Rates by Model: How Often LLMs Get Facts Wrong
The Brand Risk of AI Hallucinations
AI hallucinations, instances where large language models (LLMs) generate incorrect information and present it as fact, pose a real risk to brands. When an AI model misstates your pricing, misattributes features, or confuses your brand with a competitor, it can mislead potential customers and damage your reputation.
Understanding hallucination rates by model helps you assess risk and prioritize monitoring efforts.
Hallucination Rates by Platform
ChatGPT (GPT-4 and later)
ChatGPT hallucination rates for factual brand information average 10 to 15 percent, depending on the specificity of the query. The model is more accurate for well-known brands with extensive web presence and less reliable for niche or newer brands.
Gemini
Gemini benefits from real-time Google Search integration, which reduces hallucination rates for current factual information to approximately 8 to 12 percent. However, it can still synthesize incorrect conclusions from accurate sources.
Perplexity
Perplexity has the lowest hallucination rates among major platforms at approximately 5 to 10 percent, thanks to its search-first, cite-always approach. However, it can still cite sources inaccurately or draw incorrect conclusions.
Claude
Claude's hallucination rates are similar to ChatGPT's, at 10 to 15 percent for brand information, though Claude is more likely to express uncertainty than to state incorrect facts confidently.
Common Types of Brand Hallucinations
Outdated Information
Models may present old pricing, discontinued products, or former leadership as current. This is particularly common for brands that have undergone recent changes.
Feature Confusion
LLMs may attribute features from one product to another, or combine features from competitors into descriptions of your product. This is more common in crowded markets with similar products.
Entity Confusion
Brands with common names, or names similar to other entities, are prone to confusion. A consulting firm named Atlas may be conflated with Atlas the database or Atlas the mattress company.
Fabricated Details
In some cases, models generate entirely fabricated details such as founding dates, investor names, or product specifications that have no basis in reality.
Monitoring and Mitigation
Regular Auditing
Use Citerna to regularly audit what AI models say about your brand. Compare AI responses against your actual brand information to identify hallucinations early.
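For illustration only, here is a minimal Python sketch of that comparison. The brand facts, sample answer, and substring-matching rule are assumptions made up for the example, not Citerna's actual method:

```python
# Minimal hallucination-audit sketch: compare an AI answer about your brand
# against a hand-maintained set of ground-truth facts. All values below are
# illustrative placeholders.

GROUND_TRUTH = {
    "starting price": "$49/month",
    "founded": "2021",
    "headquarters": "Austin, TX",
}

def audit_response(answer: str, facts: dict[str, str]) -> list[str]:
    """Return the facts the answer mentions but gets wrong.

    A fact is flagged when the answer discusses the topic (the key)
    but does not contain the correct value. Real auditing would need
    fuzzier matching; exact substring checks are just a starting point.
    """
    issues = []
    lowered = answer.lower()
    for topic, correct_value in facts.items():
        if topic in lowered and correct_value.lower() not in lowered:
            issues.append(f"mentions '{topic}' without the correct value '{correct_value}'")
    return issues

if __name__ == "__main__":
    sample_answer = "Acme was founded in 2019 and its starting price is $99/month."
    for issue in audit_response(sample_answer, GROUND_TRUTH):
        print("Possible hallucination:", issue)
```

In practice you would feed in real model responses and use fuzzier matching, but the structure stays the same: your verified facts on one side, the model's output on the other, and a report of every mismatch.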
Corrective Content Strategy
When you identify persistent hallucinations, create content that clearly states the correct information in a format that LLMs can easily extract. Publish this on your website and encourage third-party sources to include accurate information.
Structured Data
Comprehensive schema markup provides LLMs with machine-readable facts about your brand, reducing the likelihood of hallucination for covered data points.
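As a rough sketch, a minimal JSON-LD Organization block like the one below, placed in a script tag of type application/ld+json on your site, gives models an unambiguous source for basic brand facts. The name, URL, date, and profile links here are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://www.example.com",
  "foundingDate": "2021",
  "sameAs": [
    "https://www.linkedin.com/company/example-brand",
    "https://en.wikipedia.org/wiki/Example_Brand"
  ]
}
```

Each data point you cover this way, such as founding date or official profiles, is one fewer detail a model has to infer on its own.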
Reporting Mechanisms
Some AI platforms allow users and brands to report inaccuracies. Use these mechanisms when available, though response times vary.
The Importance of Proactive Monitoring
Hallucinations can persist for months if undetected. A single inaccurate AI response about your pricing or capabilities could be seen by thousands of users before anyone in your organization notices. Citerna's automated monitoring catches these issues quickly, allowing you to respond before significant damage occurs.
Frequently Asked Questions
How often do LLMs hallucinate about brands?
Hallucination rates for brand information range from 5 to 25 percent depending on the model and brand visibility. Well-known brands with extensive web presence see lower rates than niche brands.
Which AI model is most accurate about brands?
Perplexity generally provides the most accurate brand information due to its real-time search and citation approach, with hallucination rates of 5 to 10 percent.
Can I correct AI hallucinations about my brand?
You cannot directly edit AI responses, but you can reduce hallucinations by publishing clear, structured, accurate information across authoritative web sources and using schema markup on your website.
Do hallucinations improve over time?
Generally yes, as models are updated and retrieval systems improve. However, new hallucinations can appear with each model update, making ongoing monitoring essential.
Monitor your brand for AI hallucinations
Start Free Trial