Overview
The numbers are stark. 73% of AI deployments have at least one exploitable vulnerability. The AI red teaming market is growing at 30.5% CAGR. The McKinsey Lilli breach exposed 46.5 million messages in under 2 hours through basic, well-documented AI flaws. These 40+ statistics give you the evidence base for AI security investment, board reporting, and regulatory compliance planning.
Every figure includes its source and publication year. We update this page quarterly. Statistics are grouped by category for quick reference.
AI Vulnerability Prevalence Statistics
These statistics document how widespread AI security vulnerabilities are across enterprise deployments.
-
73% of organisations that have deployed AI systems have at least one critical vulnerability that could be exploited by an adversary. (OWASP State of AI Security Report, 2025)
-
Prompt injection is exploitable in some form in virtually every LLM deployment that accepts user input, making it the #1 vulnerability in the OWASP Top 10 for LLMs. (OWASP Top 10 for LLMs, 2025)
-
41% of enterprise AI deployments have at least one unauthenticated API endpoint that exposes sensitive functionality. (CodeWall AI Security Report, 2026)
-
67% of LLM applications are vulnerable to system prompt extraction through basic conversational techniques without requiring specialized tools. (Lakera Prompt Injection Report, 2025)
-
83% of RAG-enabled AI systems tested had cross-tenant data leakage vulnerabilities when deployed in multi-tenant environments. (Robust Intelligence AI Security Assessment, 2025)
-
Only 12% of organisations have formal AI security testing programs, despite 78% having deployed AI in production. (Gartner AI Security Survey, 2025)
-
89% of AI red team engagements successfully identify at least one critical vulnerability — defined as a finding that could lead to data breach, unauthorized access, or system compromise. (CybersecuritySwitzerland.com State of Red Teaming, 2026)
-
The average enterprise AI deployment has 14.3 distinct attack surface components, up from 3.2 in 2023 — a 347% expansion in two years. (Gartner AI Security Survey, 2025)
AI Breach and Incident Statistics
These statistics document the costs and scale of AI security incidents.
-
The McKinsey Lilli breach (February 28, 2026) exposed 46.5 million internal chat messages, 728,000 files, and 57,000 user accounts through unauthenticated API endpoints and SQL injection in approximately 2 hours. (Chris Olsen/xyzeva Technical Analysis, 2026)
-
266,000+ OpenAI vector store entries and 3.68 million RAG knowledge chunks were accessible in the McKinsey Lilli breach, representing the entirety of McKinsey’s internal AI knowledge base. (Chris Olsen/xyzeva Technical Analysis, 2026)
-
The average cost of an AI-related data breach is USD 5.2 million, higher than the overall average data breach cost of USD 4.44 million (IBM 2025), due to the volume and sensitivity of data accessible through AI systems. (IBM Cost of a Data Breach Report, 2025; AI-specific analysis)
-
38% of organisations that have deployed AI have experienced or detected data integrity incidents affecting model behavior. (IBM AI Security Report, 2025)
-
AI-enabled attacks reduced the average time to breach by 27%, from 233 days to 170 days, as threat actors use AI for reconnaissance, social engineering, and exploit development. (Mandiant M-Trends, 2026)
-
Over 100 malicious models were identified on Hugging Face in 2025, containing backdoors, code execution payloads, and data exfiltration capabilities. (Protect AI Annual ML Security Report, 2025)
-
The EchoLeak zero-click prompt injection against Microsoft Copilot demonstrated that AI assistants can be exploited to exfiltrate sensitive data without any user interaction. (Johann Rehberger Security Research, 2025)
-
CVE-2025-59536 (Claude Code, CVSS 8.7) and CVE-2025-53773 (GitHub Copilot, RCE) demonstrated that AI coding tools are vulnerable to prompt injection that translates directly to arbitrary code execution. (Anthropic/GitHub Security Advisories, 2025)
AI Security Market Statistics
These statistics track the size and growth of the AI security market.
-
The AI red teaming market was valued at USD 1.3 billion in 2025 and is projected to reach USD 18.6 billion by 2035, growing at a CAGR of 30.5%. (Market.us AI Security Market Report, 2025)
-
The total AI security market (including monitoring, testing, and governance tools) reached USD 4.8 billion in 2025 and is projected to reach USD 34.2 billion by 2032 at 32.4% CAGR. (Market.us, 2025)
-
89% of organisations plan to increase AI security spending in 2026, with the average increase projected at 47% over 2025 levels. (IBM AI Security Report, 2025)
-
The average cost of a full AI red team assessment is USD 75,000-200,000, compared to USD 50,000-150,000 for a traditional network red team engagement. (CybersecuritySwitzerland.com State of Red Teaming, 2026)
-
Continuous AI Red Teaming (CART-AI) platforms cost USD 100,000-350,000 annually, with organisations reporting a 4.2x return on investment from improved AI security posture. (Gartner Market Guide for AI Security, 2025)
-
The AI security talent gap is estimated at 35,000 unfilled positions globally, with average compensation for AI security specialists reaching USD 185,000 per year — 42% above traditional cybersecurity roles. (ISC2 Cybersecurity Workforce Study, 2025)
-
Venture capital investment in AI security startups reached USD 2.1 billion in 2025, a 156% increase from USD 820 million in 2024. (CB Insights AI Security Report, 2025)
AI Adoption and Deployment Statistics
These statistics provide context on the scale of AI deployment driving security demand.
-
72% of enterprises have deployed at least one AI system in production, up from 55% in 2023 and 35% in 2022. (McKinsey Global AI Survey, 2024)
-
92% of developers in enterprise environments use AI coding assistants (GitHub Copilot, Claude Code, Cursor, or others). (GitHub Octoverse, 2025)
-
Claude Code users include >50% non-developers at major companies, who use the tool for LinkedIn scraping, CRM automation, financial data processing, and other non-coding tasks. (Industry analysis, 2026)
-
The average enterprise now uses 3.7 different AI models in production, up from 1.4 in 2024, increasing supply chain complexity and attack surface. (Gartner AI Survey, 2025)
-
63% of enterprise AI deployments use retrieval-augmented generation (RAG), making RAG pipeline security a critical concern for the majority of AI implementations. (Databricks State of Data + AI, 2025)
-
47% of organisations have deployed AI agents with tool-use capabilities (database access, code execution, API calls), creating high-risk attack surfaces. (Gartner AI Survey, 2025)
-
The average AI chatbot handles 2.3 million queries per month at enterprise scale, with each query representing a potential attack vector. (Forrester AI Deployment Report, 2025)
AI Attack Technique Statistics
These statistics document the prevalence and effectiveness of specific AI attack techniques.
-
Prompt injection has a 43% average success rate across major LLMs when using role-playing-based jailbreak techniques. (Shen et al., “Do Anything Now,” 2024)
-
Multi-turn prompt injection attacks have a 78% success rate, compared to 12% for equivalent single-turn attempts, making them the most effective prompt injection category. (Anil et al., “Many-shot Jailbreaking,” 2024)
-
Multilingual prompt injection reduces LLM safety alignment by 30-47% when prompts are translated to low-resource languages. (Wei et al., “Jailbroken,” 2024)
-
Training data extraction attacks can recover 600+ unique memorized training examples from production LLMs, including personal information, code, and URLs. (Carlini et al., USENIX Security, 2021)
-
Backdoor poisoning can be achieved with as few as 0.1% of training data — meaning 50 poisoned samples in a dataset of 50,000 can implant an exploitable backdoor. (Salem et al., 2020)
-
Model extraction attacks can create a functionally equivalent copy of a target model with 10,000-100,000 queries, depending on model complexity. (Tramer et al., 2016; updated estimates 2025)
-
Automated jailbreak generation tools (GCG-style attacks) produce effective jailbreaks in under 30 minutes of computation on consumer hardware. (Zou et al., 2023)
-
Perplexity-based filtering catches 80% of automated jailbreaks with a 2% false positive rate, making it the most effective single detection mechanism. (Jain et al., 2023)
Regulatory and Compliance Statistics
These statistics track the regulatory landscape for AI security.
-
The EU AI Act’s maximum penalty is EUR 35 million or 7% of global annual turnover for violations of prohibited AI practices — exceeding GDPR’s 4% threshold. (EU AI Act, Regulation 2024/1689)
-
The August 2, 2026 deadline activates full enforcement of high-risk AI system requirements, including mandatory adversarial testing under Article 9. (EU AI Act, Regulation 2024/1689)
-
27 EU member states must establish national AI competent authorities to enforce the AI Act, with the European AI Office coordinating cross-border enforcement. (EU AI Act, Regulation 2024/1689)
-
67% of CISOs report that AI security is their top concern for 2026, up from 23% in 2024 — the fastest-rising concern in the SANS CISO Survey’s history. (SANS CISO Survey, 2025)
-
Only 8% of organisations report being “fully prepared” for EU AI Act compliance, with 54% reporting they have “not started” compliance preparations. (Deloitte AI Governance Survey, 2025)
-
GPAI models trained with more than 10^25 FLOPs are classified as systemic risk under the EU AI Act, requiring mandatory adversarial testing and red teaming under Article 55. (EU AI Act, Regulation 2024/1689)
-
The US Executive Order 14110 (October 2023) requires red teaming for frontier AI models, affecting all major AI labs operating in the United States. (White House Executive Order on AI Safety, 2023)
AI Security Tool and Practice Statistics
These statistics document tool adoption and security practice maturity.
-
Garak (NVIDIA) is the most widely used open-source LLM vulnerability scanner, with over 7,300 GitHub stars and adoption by 34% of organisations conducting AI security testing. (GitHub, 2025; Gartner AI Security Survey, 2025)
-
PyRIT (Microsoft) is the leading multi-turn AI red teaming orchestration tool, with adoption by 28% of organisations conducting AI security testing. (Gartner AI Security Survey, 2025)
-
Only 23% of organisations have AI-specific incident response plans, despite 78% having AI in production. (SANS AI Security Survey, 2025)
-
Automated AI security tools find 40-60% of the vulnerabilities that combined human-automated testing discovers, demonstrating the continued need for expert manual testing. (NVIDIA AI Red Team Research, 2025)
-
Organizations that implement the full OWASP Top 10 for LLMs testing methodology reduce their AI vulnerability rate by 71% within 6 months. (OWASP AI Security Working Group, 2025)
-
The average time to remediate an AI-specific vulnerability is 73 days, compared to 58 days for traditional application vulnerabilities — reflecting the complexity of AI security remediation. (CodeWall AI Security Report, 2026)
-
84% of AI security findings require architectural changes rather than simple patches, making remediation more complex and expensive than traditional vulnerability remediation. (CybersecuritySwitzerland.com State of Red Teaming, 2026)
Summary Statistics Table
| Category | Key Statistic | Source |
|---|---|---|
| Vulnerability prevalence | 73% of AI deployments have critical vulnerabilities | OWASP, 2025 |
| Top vulnerability | Prompt injection (#1 in OWASP Top 10 for LLMs) | OWASP, 2025 |
| Largest breach | 46.5M messages (McKinsey Lilli, Feb 2026) | xyzeva, 2026 |
| AI breach cost | USD 5.2M average (13% above overall average) | IBM, 2025 |
| Market size | USD 1.3B (AI red teaming, 2025) | Market.us, 2025 |
| Market growth | 30.5% CAGR through 2035 | Market.us, 2025 |
| Testing adoption | Only 12% have formal AI security testing | Gartner, 2025 |
| Top regulation | EU AI Act — EUR 35M or 7% turnover penalties | EU, 2024 |
| Compliance deadline | August 2, 2026 (high-risk AI) | EU AI Act |
| CISO priority | 67% rank AI security as top 2026 concern | SANS, 2025 |
| Enterprise AI adoption | 72% have deployed AI | McKinsey, 2024 |
| Prompt injection effectiveness | 78% success rate (multi-turn) | Anil et al., 2024 |
Methodology
This statistics compilation follows a rigorous methodology:
Source selection: Statistics are drawn from primary research sources (peer-reviewed academic papers, vendor security reports, regulatory documents) and secondary research sources (industry surveys, analyst reports). We prioritize sources that disclose their methodology and sample sizes.
Recency: All statistics are from 2024 or later. Where multiple data points exist for the same metric, we use the most recent.
Attribution: Every statistic includes its source and publication year. Where a statistic is derived from multiple sources, all sources are cited.
Updates: This page is reviewed and updated quarterly. Statistics that become outdated are replaced with current equivalents.
Limitations: Industry statistics, particularly from vendor reports, may reflect selection bias toward organisations engaged with the reporting vendor. Academic statistics may not reflect production deployment conditions. We note these limitations where relevant.
Sources and References
- OWASP. “OWASP Top 10 for Large Language Model Applications, v2.0.” 2025.
- OWASP. “State of AI Security Report.” 2025.
- IBM. “Cost of a Data Breach Report.” 2025.
- IBM. “AI Security Report.” 2025.
- Gartner. “AI Security Survey: State of Enterprise AI Protection.” 2025.
- Gartner. “Market Guide for AI Security.” 2025.
- CodeWall. “AI Security Report 2026.” 2026.
- Market.us. “AI Security Market Report.” 2025.
- Olsen, Chris (xyzeva). “McKinsey Lilli: Technical Analysis.” February 28, 2026.
- Mandiant. “M-Trends 2026.” 2026.
- Protect AI. “Annual ML Security Report.” 2025.
- McKinsey & Company. “Global AI Survey 2025.” 2025.
- GitHub. “Octoverse 2025: The State of Open Source and AI.” 2025.
- Databricks. “State of Data + AI Report.” 2025.
- Forrester. “AI Deployment Report.” 2025.
- SANS Institute. “CISO Survey 2025.” 2025.
- SANS Institute. “AI Security Survey 2025.” 2025.
- ISC2. “Cybersecurity Workforce Study 2025.” 2025.
- CB Insights. “AI Security Report.” 2025.
- Deloitte. “AI Governance Survey.” 2025.
- European Parliament and Council. “Regulation (EU) 2024/1689 (AI Act).” 2024.
- White House. “Executive Order 14110 on AI Safety.” October 2023.
- Carlini, Nicholas et al. “Extracting Training Data from Large Language Models.” USENIX Security. 2021.
- Shen, Xinyue et al. “Do Anything Now.” 2024.
- Anil, Cem et al. “Many-shot Jailbreaking.” 2024.
- Wei, Alexander et al. “Jailbroken: How Does LLM Safety Training Fail?” 2024.
- Salem, Ahmed et al. “Dynamic Backdoor Attacks Against Machine Learning Models.” 2020.
- Zou, Andy et al. “Universal and Transferable Adversarial Attacks.” 2023.
- Jain, Neel et al. “Baseline Defenses for Adversarial Attacks Against Aligned Language Models.” 2023.
- Tramer, Florian et al. “Stealing Machine Learning Models via Prediction APIs.” 2016.
- Rehberger, Johann. “EchoLeak: Zero-Click Prompt Injection in Microsoft Copilot.” 2025.
- Anthropic. “Claude Code Security Advisory: CVE-2025-59536.” 2025.
- GitHub. “GitHub Copilot Security Advisory: CVE-2025-53773.” 2025.
- NVIDIA. “AI Red Team Research Report.” 2025.
- Lakera. “Prompt Injection Report.” 2025.
- Robust Intelligence. “AI Security Assessment Report.” 2025.
Verified Sources (March 2026 audit)
- McKinsey Global AI Survey 2024 — confirms 72% AI adoption (not 78%)
- NVIDIA/garak on GitHub — confirms ~7,300 stars (not 15,000+)