Artificial intelligence vulnerabilities are exploiting faster than defenders can respond, creating mounting enterprise risk. Security researchers report that four major AI vulnerabilities: autonomous agent abuse, prompt injection, data poisoning, and deepfake fraud, remain largely unresolved.
Autonomous AI agents have already been weaponized. In September 2025, Anthropic disclosed that Chinese state-sponsored hackers exploited its Claude Code tool to conduct what it described as the first large-scale cyberattack executed without substantial human intervention. The system autonomously performed reconnaissance, wrote exploit code, and exfiltrated data from approximately 30 targets. Deloitte projects AI agent adoption will rise from 23% moderate usage today to 74% by 2028, expanding exposure.
Prompt injection remains the most persistent architectural flaw. A study of 36 large language models found that 56% of prompt injection attacks succeeded across architectures. Larger models performed no better. OWASP ranks prompt injection as the top vulnerability in its LLM Top 10, and researchers warn there is no foolproof prevention because untrusted text is processed identically to trusted instructions.
Data poisoning presents a low-cost attack vector. Research from Google DeepMind indicates attackers can poison datasets for about $60, while Anthropic and the UK AI Security Institute found that as few as 250 malicious documents can backdoor a large model. Backdoors may survive fine-tuning and safety training.
Deepfake fraud targets human trust. In one documented case, a finance worker transferred $25.6 million after a video call with AI-generated impersonations of executives. Gartner forecasts that by 2028, 40% of social engineering attacks will use deepfake audio or video.
Security teams face a difficult tradeoff: delay AI adoption or deploy systems with fundamental, largely unsolved risks.
Source:
https://www.zdnet.com/article/ai-security-threats-2026-overview/

