Shadow AI doubles every 18 months, creating blind spots SOCs never see

by CybrGPT

Editor’s Note: This is the second part of a two-part story. Read part one here.

Deepfakes will cost $40 billion by 2027. AI agents are multiplying beyond control. Machine identities are exploding exponentially. Security leaders are racing to build defenses for threats that didn’t exist 18 months ago.

The CFO received the call at 3 a.m. The CEO’s voice was unmistakable, the accent, the speech patterns, even the nervous throat-clearing. The $1 million transfer was authorized immediately. By morning, the truth emerged: the CEO had been asleep in London. The voice was a deepfake. The money vanished.

This scenario plays out daily across enterprises worldwide. Deepfake attacks will cost organizations $40 billion by 2027. Technology that seemed theoretical two years ago now operates at an industrial scale.

Deepfakes represent just one dimension of the emerging threat landscape. The integration of gen AI into identity systems creates attack vectors that organizations are only beginning to understand. AI agents with broad permissions, machine identities multiplying beyond comprehension, shadow AI systems creating unauthorized accounts: the very tools meant to protect organizations are becoming weapons.

The $40 billion deepfake crisis is accelerating

Statistics tell only part of the story. Persona’s 2024 Identity Fraud Report reveals they blocked 75 million deepfake attempts in hiring fraud alone—one vendor in one vertical. Extrapolating across industries suggests billions of annual deepfake attempts.

The evolution has been rapid. Deepfake incidents surged 3,000% in 2023. Contact centers experienced a 700% increase in voice-based attacks. By 2024, convincing voice clones required less than three minutes of audio—easily harvested from earnings calls, podcasts, or social media.

OpenAI’s GPT-4o security documentation now includes built-in deepfake detection capabilities. The fact that AI companies embed deepfake defenses directly into models indicates the threat’s scale.

In a recent Tech News Briefing with the Wall Street Journal, CrowdStrike CEO George Kurtz explained how improvements in AI are helping cybersecurity practitioners defend systems while also commenting on how attackers are using it. Kurtz spoke with WSJ reporter Dustin Volz about AI, the 2024 U.S. election, and threats posed by China and Russia.

“The deepfake technology today is so good. I think that’s one of the areas that you really worry about. I mean, in 2016, we used to track this, and you would see people actually have conversations with just bots, and that was in 2016. And they’re literally arguing or they’re promoting their cause, and they’re having an interactive conversation, and it’s like there’s nobody even behind the thing. So I think it’s pretty easy for people to get wrapped up into that’s real, or there’s a narrative that we want to get behind, but a lot of it can be driven and has been driven by other nation states,” Kurtz said.

Cristian Rodriguez, CrowdStrike’s field CTO for the Americas, added: “Deepfakes, AI agents, shadow AI – these aren’t edge cases anymore. They’re today’s attack surface. The old model of quarterly access reviews or static policies simply can’t keep up with machine-speed threats. We need AI defending against AI, with humans setting strategy instead of chasing alerts.”

Research on adversarial AI documents an era of “shallow trust” where no digital interaction can be taken at face value. Business email compromise attacks of the past decade will seem primitive compared to emerging threats.

AI agents: The ungoverned attack surface

Every AI agent represents a superuser with persistent system access. Unlike humans who log in and out, AI agents maintain continuous connections. Unlike traditional service accounts with limited scope, AI agents require broad permissions for functionality.

Machine identities already outnumber humans 45:1. AI agents accelerate this explosion exponentially. Typical enterprise ChatGPT deployments create dozens of agents, each requiring identity, credentials and access rights. Across Claude, Gemini, Copilot and proprietary systems, organizations suddenly manage thousands of AI identities with minimal oversight.
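One practical first step toward oversight is simply maintaining a register of AI agent identities with an accountable owner and a review date. The sketch below is a minimal illustration of that idea; the record fields and agent names are hypothetical, not drawn from any specific IAM product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """Hypothetical minimal record for one AI agent identity."""
    name: str
    owner: str                # human team accountable for the agent
    scopes: list[str]         # permissions granted to the agent
    last_reviewed: datetime   # when access was last reviewed

def overdue_reviews(registry: list[AgentIdentity], max_age_days: int = 30) -> list[str]:
    """Return names of agents whose access review is older than max_age_days."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [a.name for a in registry if a.last_reviewed < cutoff]

registry = [
    AgentIdentity("support-bot", "it-ops", ["tickets:read"],
                  datetime.now(timezone.utc)),
    AgentIdentity("sales-agent", "sales", ["crm:read", "crm:write"],
                  datetime.now(timezone.utc) - timedelta(days=90)),
]
print(overdue_reviews(registry))  # sales-agent has gone 90 days without review
```

Even a register this simple surfaces the core governance question: who owns each agent, what can it touch, and when was that last checked?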

Attack scenarios are already materializing. Attackers compromised an AI agent with access to a company’s entire knowledge base. Rather than stealing data directly, which would trigger alerts, they poisoned the agent’s responses, subtly feeding misinformation to employees over weeks. Critical business decisions were made based on corrupted intelligence before discovery.

Machine identity proliferation: The emerging attack surface crisis

Machine identities represent cybersecurity’s most underestimated threat vector. Organizations now manage 45 times more machine identities than human ones, with total identities expanding 240% annually. This exponential growth invalidates traditional IAM architectures.

The operational reality exposes critical gaps. Containers often terminate in less than 5 minutes, yet they spawn credentials, authenticate and access resources before traditional IAM systems register their existence. Ivanti’s Karl Triebes confirms: “Traditional IAM systems can’t even detect these identities.”
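The timing mismatch is easy to see in miniature: a credential minted for a five-minute container lifetime expires long before any periodic inventory scan runs. The sketch below is purely illustrative; the class and the scan interval are assumptions, not a real IAM API.

```python
# Hypothetical sketch: an ephemeral workload credential whose lifetime is
# far shorter than a periodic IAM inventory scan, so the scan never sees it.
import secrets
import time

class EphemeralCredential:
    def __init__(self, ttl_seconds: int = 300):    # ~5-minute container lifetime
        self.token = secrets.token_hex(16)
        self.expires_at = time.time() + ttl_seconds

    def is_valid(self) -> bool:
        return time.time() < self.expires_at

cred = EphemeralCredential(ttl_seconds=300)
scan_interval = 24 * 3600                          # daily inventory scan, in seconds

print(cred.is_valid())            # valid right now, while the container runs
print(300 < scan_interval)        # credential dies long before the next scan
```

This is why the article's later point about SPIFFE/SPIRE-style auto-rotating workload identities matters: the identity lifecycle has to be event-driven, tied to the workload itself, rather than polled on a human-scale schedule.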

Scale compounds vulnerability. Enterprises maintain 15,000+ service accounts (92% orphaned), 25,000+ API keys (67% never rotate) and 50,000+ certificates (40% self-signed). CyberArk data shows 68% of breaches exploit non-human credentials. SolarWinds demonstrated the cascade effect—one compromised service account triggered enterprise-wide failure.
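The orphaned-account and stale-key figures above suggest an obvious audit: flag service accounts with no accountable owner and API keys past a rotation deadline. The record shapes below are hypothetical; in practice these inventories would come from your IAM system or CMDB.

```python
from datetime import datetime, timedelta, timezone

def audit(accounts, api_keys, max_key_age_days=90):
    """Flag orphaned service accounts and API keys past their rotation age.
    Record shapes are illustrative assumptions, not a real inventory schema."""
    now = datetime.now(timezone.utc)
    orphaned = [a["name"] for a in accounts if a.get("owner") is None]
    stale = [k["id"] for k in api_keys
             if now - k["created"] > timedelta(days=max_key_age_days)]
    return orphaned, stale

accounts = [
    {"name": "svc-build", "owner": "platform"},
    {"name": "svc-legacy", "owner": None},   # no one claims this account
]
api_keys = [
    {"id": "key-1", "created": datetime.now(timezone.utc) - timedelta(days=400)},
    {"id": "key-2", "created": datetime.now(timezone.utc)},
]
print(audit(accounts, api_keys))  # flags svc-legacy and key-1
```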

Leading organizations deploy automation. Venafi’s TLS Protect maps certificate infrastructures in hours, preventing 89% of certificate-related outages. SPIFFE/SPIRE frameworks deliver cryptographic workload identities that auto-rotate and terminate with workloads, eliminating static credential accumulation.

Market dynamics validate the urgency. The machine identity management market reached $5.13 billion in 2024 and is projected to expand to $14.81 billion by 2032, a 14.19% CAGR. Gartner analysis shows organizations without automated MIM face 4x higher breach probability. Implementation delivers measurable returns: a 73% reduction in credential incidents within six months.

Rodriguez added, “The rise of machine identities is a wake-up call. When you have 45 service accounts for every employee, you can’t secure them with legacy IAM. If you don’t have visibility into every identity — human, machine, and AI — you’re flying blind. That’s where identity security has to go: real-time, automated, and unified across domains.”

Machine identity management represents the next critical security investment. Organizations addressing this gap achieve competitive advantage through reduced breach exposure and operational efficiency.

Shadow AI: The $4.63 million breach multiplier hiding in plain sight

Shadow AI now costs enterprises $4.63 million per breach, 16% above average, yet 97% of breached organizations lack basic AI access controls, according to IBM’s 2025 data.

“We see 50 new AI apps a day, and we’ve already cataloged over 12,000,” Itamar Golan, CEO of Prompt Security, told VentureBeat.

“Departments jump on unsanctioned AI solutions because the immediate benefits are too tempting to ignore,” Vineet Arora, WinWire CTO, told VentureBeat in a recent interview. “Suddenly, you have dozens of little-known AI apps processing corporate data without a single compliance or risk review,” Arora warned.

VentureBeat’s recent analysis quantifies the actual scale of shadow AI. Verified active apps in Q2 2025, by category:

Research Assistants: 15,000 (Claude 3, Gemini Pro, Search APIs)

Financial Models: 18,000 (Monte Carlo, Gemini + Python)

Workflow Automation: 13,000 (Python, Sheets, Zapier)

Pitch Automation: 12,000 (GPT-4, Gemini, Colab)

Total verified: 74,500+

Based on 5% monthly growth, shadow apps could double by mid-2026. Cyberhaven data reveals 73.8% of ChatGPT workplace accounts are unauthorized. Enterprise AI usage grew 61x in 24 months. One Fortune 500 CISO who spoke on condition of anonymity nailed it: “It’s like trying to inventory smoke.”

Traditional security fails here. “Most traditional management tools lack comprehensive visibility into AI apps,” Arora explained to VentureBeat. His governance framework addresses this: Create an Office of Responsible AI, deploy AI-aware security controls and apply zero trust to AI architectures.
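One way AI-aware controls gain that visibility is by checking egress traffic against a catalog of known AI service domains and a list of sanctioned ones. The sketch below is a minimal illustration; the domain lists and log format are assumptions for the example, not a vendor catalog.

```python
# Hypothetical sketch: flag unsanctioned AI-service traffic in egress proxy logs.
AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}  # known AI endpoints
SANCTIONED = {"api.openai.com"}   # apps approved through a governance review

def find_shadow_ai(log_lines):
    """Return (user, domain) pairs hitting AI services outside the sanctioned set.
    Assumes a simple 'user domain' log line format for illustration."""
    hits = set()
    for line in log_lines:
        user, domain = line.split()
        if domain in AI_DOMAINS and domain not in SANCTIONED:
            hits.add((user, domain))
    return sorted(hits)

logs = ["alice claude.ai", "bob api.openai.com", "carol gemini.google.com"]
print(find_shadow_ai(logs))  # alice and carol are using unsanctioned tools
```

Note the output is an inventory, not a blocklist, which fits Arora's point: the goal is to channel usage into sanctioned pathways, not to drive it underground.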

“Total bans often drive AI use underground, which only magnifies the risks,” Arora emphasized. “You can’t kill AI adoption, but you can channel it securely.”

The EU AI Act “could dwarf even the GDPR in fines,” per Golan. Yet prohibition fails. “Once employees have sanctioned AI pathways and clear policies, they no longer feel compelled to use random tools in stealth,” Arora confirmed.

Strategic imperatives for security leaders

After 18 months of research and analysis of dozens of breaches documented in IBM’s 2024 Cost of a Data Breach Report, clear imperatives emerge.

Assume any single identity can be compromised. Design systems that limit blast radius rather than trying to prevent every breach. Eric Hart, global CISO of Cushman & Wakefield, echoed this philosophy: “It’s not about not having any security events. It’s about minimizing damage when they inevitably occur.”

Invest in identity visibility before adding security tools. Organizations cannot protect what they cannot see. Success requires complete visibility into all identities—human, machine, and AI—before attempting governance.

Prepare for deepfakes as existential threats, not edge cases. Every organization needs deepfake defenses immediately. The $40 billion in projected 2027 losses will come from organizations that waited.
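A basic deepfake defense for scenarios like the opening CFO story is out-of-band verification: a voice instruction alone never authorizes a transfer, and a one-time code delivered over a second, pre-registered channel must be echoed back. The sketch below is a hypothetical illustration of that workflow, not any vendor's implementation.

```python
# Hypothetical sketch: out-of-band challenge for high-risk voice requests.
# A cloned voice can imitate speech, but cannot read a code sent to the
# real requester's registered device.
import hmac
import secrets

def issue_challenge() -> str:
    """Generate a one-time code delivered via a separate, trusted channel."""
    return secrets.token_hex(8)

def verify(challenge: str, response: str) -> bool:
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(challenge, response)

challenge = issue_challenge()
print(verify(challenge, challenge))    # legitimate requester echoes the code
print(verify(challenge, "deadbeef"))   # a voice clone cannot produce it
```

The design choice matters more than the code: authorization depends on something the attacker's audio pipeline cannot reach, so improving the deepfake does not improve the attack.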

Govern AI agents before they govern themselves. The control window is closing. Once AI agents become autonomous enough to resist governance, it’s too late.

Accept traditional security model obsolescence. Static policies, periodic reviews, and human-scale governance cannot function where millions of identities operate at machine speed. The future requires AI defending against AI, with humans setting strategy rather than managing implementation.

The evolution imperative

The transformation of identity security through gen AI represents cybersecurity’s inflection point. Organizations harnessing these capabilities while managing their risks will thrive. Those that don’t will become casualties of security’s most significant shift since the internet.

The tools organizations need already exist, from CrowdStrike’s Falcon platform, CyberArk’s Identity Security Platform, ForgeRock’s Autonomous Identity, Ivanti’s Neurons, Microsoft’s Security Copilot, Okta’s Identity Cloud, Palo Alto Networks’ Cortex, SentinelOne’s Singularity, to Venafi’s Control Plane. Security leaders have the resources to counter deepfakes, ungoverned AI agents, and exploding machine identities, but the time to act strategically is now.

Hart’s wisdom resonates: “Security is about continual maintenance, that evolution. How can you look at and use the things you have in new or different ways?” In an era where AI agents proliferate, deepfakes destroy trust, and machine identities outnumber humans exponentially, evolution isn’t recommended—it’s mandatory for survival.

The race between AI-powered attacks and AI-powered defenses will define cybersecurity’s next decade. Winners will recognize identity security isn’t just about managing access anymore—it’s about governing an ecosystem of human, machine, and AI entities operating at unprecedented scale and speed.

The transformation is here. The risks are real. The opportunity to lead rather than react narrows. What happens next depends on the decisions security leaders make in the next 18 months.
