The democratization of AI has fundamentally lowered the barrier to entry for threat actors, expanding the pool of people capable of carrying out sophisticated attacks. The so-called democratization of security, on the other hand, has resulted in chaos.
The problem
In an earnest attempt to shift left, security teams deputized developers to own remediation. While development teams have legitimately become more security-focused, this arrangement has created a dynamic in which security is still accountable for risk but has no authority over the environment.
To regain that authority, security teams need to own threat verification and validation, asking nothing more of DevOps than to implement a well-researched, clearly necessary change.
Don’t get me wrong, shifting security left is a great idea. The problem with the model as it currently exists is that it didn’t account for the complexity and noisiness of cloud environments. Instead of scaling, the model is collapsing under the avalanche of alerts generated by cloud security tools.
DevOps teams simply don’t have the bandwidth to investigate the validity of risks they don’t own, and the process breaks down. Here’s an example of how the breakdown typically occurs:
- A security analyst receives an alert about a misconfigured cloud asset, an overly permissive IAM role, or a vulnerable container (the sketch after this list shows the kind of check that fires such an alert). After initial triage, they determine the finding requires remediation and create a ticket for the relevant DevOps team.
- The DevOps team, already juggling feature deployments, infrastructure maintenance, and performance optimization, receives yet another security ticket to investigate, usually under an SLA. They must context-switch away from their own responsibilities to understand the security implications of a finding that, in many cases, turns out to be a false positive.
- Meanwhile, the security team moves on to the next alert in an endless queue, while the DevOps team struggles to prioritize this security task against their existing workload.
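To make that first step concrete, here is a minimal, illustrative sketch of the kind of static check that produces these alerts: it flags an IAM policy that grants wildcard actions on wildcard resources. The policy structure and the `is_overly_permissive` helper are assumptions for illustration, not any particular vendor's detection logic.

```python
# Illustrative check for the kind of finding that lands in a DevOps queue:
# an IAM policy that grants "*" actions on "*" resources.
from typing import Any


def is_overly_permissive(policy_document: dict[str, Any]) -> bool:
    """Return True if any Allow statement grants wildcard actions on wildcard resources."""
    statements = policy_document.get("Statement", [])
    if isinstance(statements, dict):  # single-statement policies may omit the list
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions and "*" in resources:
            return True
    return False


# Example: this policy would trigger an alert and, today, a ticket for DevOps.
risky_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}
print(is_overly_permissive(risky_policy))  # True
```

The check itself is cheap. The expensive part is deciding whether the finding is actually exploitable in context, and that is the work currently being pushed onto DevOps.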
The solution
AI has increased the velocity and volume of attacks, but to fight those attacks, CISOs must consolidate their power. They need a better trust-and-verify model, one that increases collaboration between security and DevOps.
The most effective lever for doing this is lowering CNAPP noise and raising the bar for threat validation. That said, focusing on exploitable risk isn’t enough; companies need to focus on contextual, weaponized risk. This is technically possible, but people and processes also need to adjust. We need deeper technical analysis and a culture shift.
Here are three ways to facilitate the needed cultural transformation:
1. Reframe security’s role from gatekeeper to prosecutor. Focus on threat validation that gives DevOps teams winnable cases they can “convict,” rather than monitoring that buries them in noise. In other words, hand DevOps teams the “evidence” they need to win the case.
Instead of saying “this vulnerability, combined with this cloud configuration, is theoretically exploitable,” security should demonstrate exactly how an attacker could weaponize those permissions to access critical systems. This will likely require new and better tooling, but before you start groaning over having to deploy yet another security tool into an already bloated cloud stack, think about the impact GenAI can have when applied to time-consuming, error-prone manual processes such as alert validation.
2. Reframe the nature and role of the security team. Security may never be perceived as a business “enabler,” so to speak, but it has shifted from being seen as an agility-killing cost suck to being seen as a legitimate and necessary business function, one that balances risk against the cost of mitigation and delays to time to market. While effective resilience is a long game, security teams need to get much better, and much faster, at risk validation.
3. Adopt an attacker mindset in defense operations, using the same automation and speed that make threats so effective. Security teams should be running regular attack simulations and focusing their efforts on the attack paths that matter most. The goal isn’t just to identify vulnerabilities, but to understand which combinations of security gaps pose the greatest risk and why (a minimal sketch of this kind of attack-path analysis follows below).
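As one way to picture that analysis, here is a minimal, illustrative sketch that models cloud assets and the permissions connecting them as a directed graph, then surfaces paths from internet-exposed assets to critical data. The asset names, edges, and ordering are invented for the example; a real CNAPP or attack-path tool would derive them from live configuration and identity data.

```python
# Illustrative attack-path analysis: model assets and the permissions that
# connect them as a directed graph, then surface paths from internet-exposed
# assets to critical data. All names and edges here are invented for the example.
from collections import deque

# An edge u -> v means "an attacker on u can reach v" (via a role, key, or network path).
edges = {
    "public-web-vm": ["app-service"],
    "app-service": ["iam-role-overly-permissive"],
    "iam-role-overly-permissive": ["customer-db", "s3-backups"],
    "build-server": ["s3-backups"],
}
internet_exposed = {"public-web-vm"}
critical_assets = {"customer-db", "s3-backups"}


def attack_paths(start: str) -> list[list[str]]:
    """Breadth-first search for paths from an exposed asset to any critical asset."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in critical_assets:
            paths.append(path)
            continue
        for nxt in edges.get(node, []):
            if nxt not in path:  # avoid cycles
                queue.append(path + [nxt])
    return paths


# Shorter paths from exposed entry points are the "winnable cases" worth
# handing to DevOps first: the evidence is the path itself.
for entry in internet_exposed:
    for path in sorted(attack_paths(entry), key=len):
        print(" -> ".join(path))
```

The output is exactly the kind of “evidence” item 1 calls for: not “this role is overly permissive” in the abstract, but a concrete chain from an exposed entry point to a critical asset, with the combination of gaps that makes it possible.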
Why you should leverage AI
For more than 1,000 years, military leaders have leaned into the belief that the best defense is a good offense, and I believe that holds for cybersecurity. But to remain effective, security teams need to consolidate their power, not delegate it to DevOps teams, who are neither equipped nor incentivized to be the “army reserves” for the security team.
It’s one thing to ask DevOps to implement a change, but you shouldn’t delegate the drudgework of threat validation to them. Instead, steal a page from attacker playbooks and automate as much as you can, as soon as you can.
Manual, error-prone drudgework is a high-return use case for AI. Threat validation processes have been ripe for innovation for some time, and now we have the capacity to do it.
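As a final illustration, here is a minimal sketch of the kind of GenAI-assisted alert validation argued for above: assemble the context an analyst would otherwise gather by hand and ask a model for a first-pass exploitability assessment that a human then reviews. It assumes the OpenAI Python SDK; the model name, prompt, and alert fields are placeholders, not a production design.

```python
# Sketch: GenAI-assisted first-pass validation of a cloud security alert.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in the
# environment; the alert fields and prompt are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

alert = {
    "finding": "IAM role grants * on * and is attached to an internet-facing service",
    "asset": "public-web-vm",
    "exposure": "reachable from the internet on 443",
    "blast_radius": ["customer-db", "s3-backups"],
}

prompt = (
    "You are assisting a cloud security analyst. Given this finding and its "
    "context, assess whether it is likely exploitable in practice, explain the "
    "attack path in two or three sentences, and recommend a single remediation "
    "step for the DevOps team. Respond in JSON with keys 'exploitable', "
    "'reasoning', and 'recommended_change'.\n\n" + json.dumps(alert, indent=2)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# The model's output is a draft for the analyst to verify, not a verdict:
# security keeps ownership of validation and hands DevOps only the final,
# well-researched change.
print(response.choices[0].message.content)
```

The point is not the specific model or prompt but the division of labor: the drudgework of assembling context and drafting the case is automated, the analyst verifies it, and DevOps receives only the implementable change.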