Cybercriminals are exploiting a new attack vector in the AI era: poisoning the public web content that large language models (LLMs) rely on so that AI search tools recommend fraudulent customer support numbers. Researchers describe attackers as "systematically manipulating public web content" to hijack AI-generated answers.
The emerging threat—dubbed LLM phone number poisoning—works by planting scam-optimized content across high-authority websites and user-generated platforms. Instead of targeting the LLMs directly, attackers exploit the data sources they pull from, leveraging Generative Engine Optimization (GEO) and Answer Engine Optimization (AEO) to ensure poisoned pages are used in AI summaries.
Researchers found that threat actors are uploading structured scam content—including fake airline reservation numbers—onto compromised government domains, university pages, WordPress blogs, YouTube descriptions, and Yelp reviews. Because the content appears credible, sits on high-authority domains, and is optimized for LLM retrieval, AI systems scrape and surface it as if it were legitimate.
Examples observed in the wild include:
- Perplexity recommending a fabricated Emirates reservations number
- Google AI Overview listing multiple fraudulent call-center lines
- Poisoned responses for British Airways customer support
This tactic exposes users to phishing operations in which scammers impersonate airlines and attempt to extract money or personal data. Researchers warn that the issue is systemic rather than isolated to a single model or vendor. Even when AI answers are correct, the retrieval layer often surfaces citations linked to contaminated sources.
Experts say the attack resembles indirect prompt injection—except that instead of compromising model instructions, attackers compromise the data environment. As AI search becomes mainstream, the risk of cross-platform contamination increases.
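Because the contamination lives in the data environment rather than in the model itself, one plausible mitigation is to screen the retrieval layer. The sketch below is illustrative only and not taken from the research: it assumes a hypothetical list of retrieved citations and a hand-maintained allowlist of first-party airline domains, and it flags phone numbers that are cited from anywhere outside that allowlist.

```python
import re

# Hypothetical citations as a retrieval layer might return them; in a real
# pipeline these would come from the AI search tool, not be hard-coded.
# The phone numbers below are placeholders, not real support lines.
citations = [
    {"url": "https://www.britishairways.com/en-gb/information/help-and-contacts",
     "snippet": "You can call us on 0800 555 0101."},
    {"url": "https://random-blog.example.com/airline-support",
     "snippet": "Emirates reservations: call +1 (888) 555-0142 now!"},
]

# Assumed allowlist of first-party domains; everything else is treated as
# user-generated or third-party content that could be poisoned.
TRUSTED_DOMAINS = ("britishairways.com", "emirates.com")

# Rough pattern for phone-number-like strings (nine or more characters of
# digits and common separators).
PHONE_RE = re.compile(r"\+?\d[\d\s\-\.\(\)]{7,}\d")

def flag_untrusted_numbers(citations):
    """Yield (number, url) pairs for numbers cited from non-allowlisted domains."""
    for citation in citations:
        host = citation["url"].split("/")[2]
        trusted = any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
        if trusted:
            continue
        for match in PHONE_RE.finditer(citation["snippet"]):
            yield match.group(), citation["url"]

for number, url in flag_untrusted_numbers(citations):
    print(f"Untrusted number {number!r} cited from {url}")
```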
Users are advised to verify any phone numbers or links provided by AI systems against official sources, avoid sharing sensitive information with AI assistants, and remain cautious while AI-powered search tools mature.
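As a practical example of that verification step, the short sketch below checks whether a number suggested by an AI assistant actually appears on a contact page you navigated to yourself. The URL and phone number are hypothetical placeholders, and the check is a rough string match rather than a definitive test.

```python
import re
import requests

def number_on_official_page(phone: str, official_url: str) -> bool:
    """Return True only if `phone` appears on the page at `official_url`.

    `official_url` should be a contact page reached by typing the company's
    own domain into the browser, not a link taken from an AI answer.
    """
    digits = re.sub(r"\D", "", phone)
    # Allow common separators (spaces, dashes, dots, parentheses) between digits.
    pattern = re.compile(r"[\s\-\.\(\)]*".join(re.escape(d) for d in digits))
    page = requests.get(official_url, timeout=10).text
    return bool(pattern.search(page))

# Hypothetical values: a number suggested by an AI assistant and a contact
# page located manually. Neither is a real support number or endpoint.
suggested = "+1 (800) 555-0199"
contact_page = "https://www.example-airline.com/contact"
if not number_on_official_page(suggested, contact_page):
    print("Number not found on the official site; treat it as suspect.")
```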
Source:
https://www.zdnet.com/article/scammers-poison-ai-search-results-customer-support-scams/

