While it’s true that GenAI helps criminals create phishing emails and puts the ability to rapidly build software exploits into more hands, there are also expectations that AI/ML, and now especially GenAI, will help enterprises improve their security posture. No one knows exactly how much change GenAI will bring to enterprise cybersecurity, but many, if not most, experts agree that CxOs need to understand that change is coming and that it will likely improve their organizations’ security posture.
CxOs are coming to see the possibilities. According to the IBM Institute for Business Value, 52% of executives say GenAI will help them improve aspects of their security operations. EY’s 2024 Human Risk in Cybersecurity Survey finds that 93% of enterprises currently use, or plan to use, AI/ML to bolster their defenses and response.
Security vendors are investing to bring GenAI into their toolsets. Earlier this year, Microsoft released Copilot for Security, Google released Gemini in Security Operations and Splunk introduced Splunk AI. Vendors claim these AI features accelerate security analyst training, improve understanding of security threats, and speed threat hunting and incident investigations. Exabeam also released its copilot, a “generative AI cybersecurity virtual assistant,” earlier this year. Exabeam’s generative AI capabilities include natural language assistance, so analysts can create dashboard visualizations without having to write queries, as well as streamlined report generation and indicator-of-compromise detection.
While there’s a lot of hope — and hype — for AI/ML and, more recently, GenAI, there’s also growing skepticism about how much these tools will be able to achieve. “I found many security practitioners are excited about the potential for generative AI. We’re starting to feel slightly disillusioned about what we can do with this technology. Part of the reason for that is because a lot of the [AI/ML] features have been over-promised and under-delivered,” said Allie Mellen, a principal analyst at Forrester, during a recent webinar, The Truth Behind ML’s Madness: How AI Is Actually Used in Detection and Response.
So far, the most significant demand has been for GenAI models pre-built into enterprise security tools and platforms, though some enterprises are also creating custom models. According to Mellen, enterprises must be careful when building their own AI/ML models. “I will caution you, it is difficult. There is a reason that this has been productized. That’s because even though you can get specific with your use case, it also requires a big skill set that you may not have, or maybe you have it temporarily, but you don’t have it in the long term,” Mellen said.
This article is focused on what CxOs need to understand about how pre-built cybersecurity AI models are currently changing, or soon will change, the security operations within their organizations:
They promise to enhance threat detection and response: AI-enhanced cybersecurity tools can analyze far more data, far more quickly, than human analysts ever could. That means techniques such as traditional machine learning and analytics can potentially identify more threats, including more elusive malicious patterns, within applications, infrastructure and security logs.
CxOs hope that these investments in GenAI will help reduce risk and human analyst alert fatigue.
Anton Chuvakin, security advisor at the office of the CISO, Google Cloud, says AI will continue to improve and help security analysts quickly put threat intelligence to work. “I want to take a bit of intelligence and point it to a machine, point it at the context of my environment, and have the AI create detection rules that are the ideal way to block that attack,” he says.
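To make the idea of machine-scale log analysis concrete, here is a minimal, hypothetical sketch of the statistical baselining that such detection tools perform far more elaborately. The host names, counts, threshold and function name are all invented for illustration, not any vendor’s method:

```python
# Toy sketch: flag hosts whose failed-login counts deviate sharply
# from the fleet baseline. Real ML-driven detection uses far richer
# features and models; this only illustrates the baselining idea.
from statistics import mean, stdev

def flag_anomalies(failed_logins_per_host, threshold=2.5):
    """Return hosts whose failed-login count sits more than `threshold`
    standard deviations above the fleet mean."""
    counts = list(failed_logins_per_host.values())
    mu, sigma = mean(counts), stdev(counts)
    return [host for host, n in failed_logins_per_host.items()
            if sigma > 0 and (n - mu) / sigma > threshold]
```

A brute-force login attack against one host stands out against an otherwise quiet fleet, while uniformly quiet traffic produces no flags.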
They are automating routine tasks: Security teams are starting to benefit from GenAI-driven automation of routine security tasks. To date, this automation has targeted relatively low-hanging fruit, such as generating incident reports, analyzing logs and executing basic security playbooks. By automating these functions, security teams can ideally focus on more strategic or pressing work.
“Generative AI is making security teams reconsider their current workflows and how they can leverage this technology to transform,” says Dave Gruber, principal cybersecurity analyst at Enterprise Strategy Group. “Automation has been big on the agenda for many years now. GenAI is now making people rethink automation even further. What does automation mean in this new world where AI can understand a situation on the fly and make adjustments so I don’t have to create overly designed processes to automate? The AI will do more of that.”
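As a toy illustration of one of those low-hanging-fruit tasks, generating incident reports, the sketch below turns a list of alert records into a first-draft summary an analyst could review and edit. The field names and output format are assumptions, not any product’s actual schema:

```python
# Hypothetical sketch: draft an incident summary from raw alert records,
# the kind of routine write-up GenAI tooling is being used to automate.
from collections import Counter

def draft_incident_report(alerts):
    """Group alerts by severity and affected host and emit a short
    plain-text summary (invented format, for illustration only)."""
    by_severity = Counter(a["severity"] for a in alerts)
    hosts = sorted({a["host"] for a in alerts})
    lines = [f"Incident summary: {len(alerts)} alerts across {len(hosts)} host(s)."]
    for sev in ("critical", "high", "medium", "low"):
        if by_severity[sev]:  # Counter returns 0 for absent severities
            lines.append(f"- {sev}: {by_severity[sev]} alert(s)")
    lines.append("Affected hosts: " + ", ".join(hosts))
    return "\n".join(lines)
```

A GenAI-backed tool would phrase the narrative itself; the value either way is that the analyst starts from a draft rather than a blank page.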
While agentic AI is not here yet, Mellen advises executives to monitor it closely. Agentic AI consists of small, dedicated agents that excel at a particular task and can work together to perform a series of tasks, including more complex ones. “There’s a lot of potential for this to be effective,” Mellen says. “Agentic AI is worth exploring and seeing the direction that it’s going, while also being aware that it’s not going to be the end-all and be-all and solve every problem in security operations.”
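The agentic pattern Mellen describes, small single-purpose agents composed into a pipeline, can be sketched as a chain of plain functions. Real agent frameworks add planning and model calls; every rule, field name and IP address below (drawn from the RFC 5737 documentation range) is invented:

```python
# Hypothetical sketch of an agentic pipeline: each "agent" does one
# narrow job and passes its enriched result to the next.
def triage_agent(alert):
    """Classify the alert; a real agent might call an ML model here."""
    alert["priority"] = "high" if alert.get("failed_logins", 0) > 100 else "low"
    return alert

def enrichment_agent(alert):
    """Attach context; a real agent might query threat intelligence."""
    alert["known_bad"] = alert.get("src_ip") in {"203.0.113.7"}  # example IP
    return alert

def response_agent(alert):
    """Recommend an action based on what earlier agents added."""
    alert["action"] = ("isolate-host"
                       if alert["priority"] == "high" and alert["known_bad"]
                       else "log-and-monitor")
    return alert

def run_pipeline(alert, agents=(triage_agent, enrichment_agent, response_agent)):
    for agent in agents:
        alert = agent(alert)
    return alert
```

The design point is composition: each agent stays simple and testable, and the chain as a whole handles the more complex end-to-end task.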
Enhancing user experiences: GenAI is also being used to create what security vendors say are more intuitive interfaces. These conversational interfaces let security professionals interact with complex systems using natural language queries instead of clicking through menus and building structured searches.
The experts we interviewed say GenAI natural language interfaces have changed how security professionals interact with their security tools: they can extract insightful information through straightforward conversations rather than manual searches.
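A hypothetical sketch of what such a natural language front end does under the hood: translate a plain-English question into a structured query. Production tools use an LLM for the translation; here a couple of keyword rules stand in for it, and the query schema is invented:

```python
# Toy sketch of a natural-language query front end. An LLM would do
# the translation in practice; these keyword rules only illustrate
# the English-in, structured-query-out shape of the feature.
import re

def nl_to_query(question):
    """Translate a simple English question into an invented query dict."""
    q = {"source": "security_logs", "filters": []}
    if "failed login" in question.lower():
        q["filters"].append({"field": "event", "equals": "failed_login"})
    m = re.search(r"last (\d+) (hour|day)s?", question.lower())
    if m:  # e.g. "last 24 hours" -> "24h"
        q["filters"].append({"field": "time", "within": f"{m.group(1)}{m.group(2)[0]}"})
    return q
```

The analyst asks a question; the tool builds the query, so no query language needs to be learned.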
Gruber says GenAI is particularly revolutionary for security operations teams because analysts can now get real-time answers about security incidents: these systems understand the context of the environment and its threats, and provide remediation guidance without the analyst having to navigate multiple dashboards. “Natural language interfaces powered by AI are making it easier to create visualizations and dashboards. Analysts can now simply describe what they want in plain English rather than having to configure complex queries manually,” adds Steve Wilson, chief product officer at Exabeam.
However, not everyone is convinced that chatbots improve the analyst experience. “Consumers [of security technology] want convenience, not conversations, and they do not have time to sit around and interact with a chatbot and try to figure out what questions to ask,” Mellen says. She also cautions that making security analysts use a separate chatbot interface adds unnecessary friction and works against the goal of improving usability.
Enabling predictive capabilities: AI models are being developed to predict potential threats and vulnerabilities, moving security postures from reactive to proactive. In practice, that means security teams can better forecast potential threat actor activity and attack vectors, giving them advance warning to harden the areas with the highest probability of exploitation.
Because GenAI can understand the nature of threats, vulnerabilities and the environment, it can help provide insights into how best to mitigate emerging threats and where unknown threats are most likely to target an environment. Gruber says these new AI tools enable security teams to quickly answer many questions, such as where to focus mitigation efforts. “It’s not 100%, but it takes much of the legwork away from what security analysts used to have to do,” he says.
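As a toy illustration of proactive prioritization, the sketch below ranks assets by a simple exploit-likelihood score so hardening effort goes to the highest-probability targets first. The signals, weights and field names are invented assumptions, not a real model:

```python
# Hypothetical sketch: rank assets by an invented exploit-likelihood
# score built from a CVSS score and two boolean risk signals.
def rank_assets(assets):
    """`assets` maps name -> dict with 'cvss' (0-10), 'internet_facing'
    (bool) and 'exploit_observed' (bool). Returns (name, score) pairs
    sorted from highest to lowest risk."""
    def score(a):
        s = a["cvss"] / 10.0          # normalize severity to 0-1
        if a["internet_facing"]:
            s += 0.3                  # reachable from outside
        if a["exploit_observed"]:
            s += 0.5                  # exploitation already seen in the wild
        return round(s, 2)
    return sorted(((name, score(a)) for name, a in assets.items()),
                  key=lambda kv: kv[1], reverse=True)
```

A real predictive model would learn such weights from threat data; the point is only that a ranked list tells the team where to harden first.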
Agentic AI may be transforming security operations: The integration of GenAI is pushing toward more autonomous security operations, with AI agents potentially taking actions based on high-certainty signals under human oversight, explains Wilson.
With security agents, or “agentic AI,” organizations hope to soon see a substantial shift in operational efficiency through improved security data analytics and increased automation. AI is freeing security operations teams to focus on more strategic security objectives, such as threat modeling new initiatives, securing new applications and paying down technical debt related to security tools and processes.
Some enterprises that have adopted GenAI within their security operations teams report compressing hours of low-level security tasks into minutes, helping to reduce alert fatigue and staff burnout. But don’t expect AI to fully replace human analysts any time soon. Rather, Chuvakin explains that AI could help scale the effectiveness of the security operations team faster than the growth of their enterprises and cyber threats. “AI will enable more companies to not scale humans at the rapid pace they have been,” he says.
If GenAI does prove to be a force multiplier for security operations, it will undoubtedly be welcomed by enterprises that have struggled to maintain pace with the rising cybersecurity threats and growing dependence on digital technology.
That could explain why investment in GenAI for cybersecurity is projected to grow from zero a few years ago to $6 billion by 2034, according to Prophecy Market Insights.