As we noted a couple of months ago, generative AI has been emerging as a weapon for hackers, making phishing attempts (emails or texts inviting a user to open a malicious link in order to steal their credentials) bespoke and thus much harder to detect, and helping to shorten malware development cycles and/or rewrite code to evade detection.
But cybersecurity firms are not standing idle and are determined to leverage generative AI as well. Artificial intelligence has in fact been used by cybersecurity vendors for years to analyze data traffic and detect patterns and threats, and generative AI represents the next step, with many benefits in sight. First, the technology can materially enhance the scanning and filtering of security vulnerabilities across various languages and add context for security analysts. For instance, it can improve protection against phishing emails by flagging atypical sender addresses or domains and by checking whether the links in the text lead to malicious websites.
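To make the phishing checks concrete, here is a minimal sketch of the kind of heuristics described above: flagging unfamiliar sender domains and links pointing to known-bad sites. The domain sets and the `phishing_signals` function are hypothetical stand-ins; a real product would draw on live threat-intelligence feeds and far richer models.

```python
import re
from urllib.parse import urlparse

# Hypothetical, hard-coded lists for illustration only; real systems
# would use continuously updated threat-intelligence feeds.
KNOWN_GOOD_DOMAINS = {"example.com"}
MALICIOUS_DOMAINS = {"examp1e-login.net"}  # lookalike of example.com

def phishing_signals(sender: str, body: str) -> list:
    """Return simple heuristic flags for an email (sketch, not a product)."""
    flags = []
    # Flag senders whose domain is not on the known-good list.
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    if sender_domain not in KNOWN_GOOD_DOMAINS:
        flags.append("unfamiliar sender domain: " + sender_domain)
    # Flag links in the body that point to known-malicious domains.
    for url in re.findall(r"https?://\S+", body):
        link_domain = urlparse(url).netloc.lower()
        if link_domain in MALICIOUS_DOMAINS:
            flags.append("link to known-malicious domain: " + link_domain)
    return flags
```

An email from `it@examp1e-login.net` containing a link to that same lookalike domain would raise two flags, while mail from a known-good sender with no suspicious links would raise none.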
Companies and organizations can also materially improve response times by leveraging large language models (LLMs) to create threat-hunting queries, allowing security analysts to swiftly identify and mitigate potential threats. For instance, generative AI can be used to address security risks in a company’s supply chain, letting staff ask questions about their business ecosystem and whether their vendors have suffered a security breach.
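The threat-hunting workflow above can be pictured as a thin layer that wraps an analyst's plain-English question into a prompt and hands it to a model. The sketch below is purely illustrative: `call_llm` is a stub standing in for a real LLM API, and the query syntax it "returns" is invented for the example.

```python
# Sketch of turning a natural-language question into a hunting query.
PROMPT_TEMPLATE = (
    "You are a security assistant. Translate the analyst's question "
    "into a log-search query.\nQuestion: {question}\nQuery:"
)

def build_hunting_prompt(question: str) -> str:
    """Wrap the analyst's question in a fixed instruction template."""
    return PROMPT_TEMPLATE.format(question=question)

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call; returns a canned query."""
    return 'event_type:login AND vendor:"Acme" AND status:failed'
```

In practice the returned query would be reviewed by the analyst and run against the organization's log store, keeping a human in the loop as described below.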
Overall, generative AI will act as an assistant, providing suggestions, analysis and recommendations, and humans will retain authority over decision-making.
Finally, AI can materially enhance the productivity of security analysts within customer companies and organizations, helping ease the staff shortage amid a relentless rise in the number of cyberattacks. The automation of certain tasks (log analysis, patch management, incident reporting…) is a major selling point for cybersecurity vendors as it should help customer companies keep their security workforce costs under control. Considering that spending on cybersecurity personnel is close to $400 billion a year (according to Morgan Stanley) and that AI productivity gains in software-related businesses are estimated at around 25% at least, the automation market opportunity appears massive: roughly $100 billion, of which cybersecurity software vendors offering AI platforms could capture 30-40%, or $30-40 billion a year. That would give a c.20% boost to an already fast-growing cybersecurity market (currently sized around $170 billion) and to vendors’ pricing power and margins.
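The back-of-the-envelope arithmetic behind those figures can be laid out explicitly (all inputs are the estimates cited above, not independent data):

```python
# Market-sizing arithmetic from the figures cited above.
personnel_spend_bn = 400      # annual cybersecurity personnel spend, $bn (Morgan Stanley)
productivity_gain = 0.25      # assumed AI productivity gain in software-related work

automation_opportunity_bn = personnel_spend_bn * productivity_gain   # ~$100bn

# Share of that opportunity vendors with AI platforms might capture.
capture_low_bn = automation_opportunity_bn * 0.30    # ~$30bn
capture_high_bn = automation_opportunity_bn * 0.40   # ~$40bn

market_size_bn = 170                                 # current cybersecurity market
boost = capture_low_bn / market_size_bn              # ~18%, i.e. roughly a 20% boost
```

Even at the low end of the capture range, the uplift is close to a fifth of today's market, which is what drives the pricing-power argument.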
It’s then no wonder that most cybersecurity software vendors have been infusing or announcing generative AI features in their security offerings in recent months, suggesting that the technology is about to become mainstream in cyber defense.
Notably, SentinelOne unveiled a threat-hunting platform that aggregates and correlates information from device and log telemetry across endpoint, cloud, network and user data. The AI platform delivers security assessments in response to simple text prompts such as “find potential successful phishing attempts involving X”, including a summary of the results in jargon-free terms accessible to most users. It can also recommend response actions, such as “disable all endpoints”, which can be executed immediately with a single click.
CrowdStrike released Charlotte AI, which seeks to democratize access to, and understanding of, threat intelligence within a company and to empower less experienced IT and security professionals to make better decisions faster, thanks to an LLM that lets users simply write natural-language prompts.
Among the larger players, Google announced Cloud Security AI Workbench, a cybersecurity suite based on a language model called Sec-PaLM, customized for security use cases and incorporating research on software vulnerabilities, malware, threat indicators and behavioral threat actor profiles. Google’s AI platform powers Mandiant’s Threat Intelligence AI tool, which finds, summarizes and acts on security threats; VirusTotal, which helps analyze and explain the behavior of malicious scripts; and Chronicle, which searches security events and allows conversational interaction with the results. Finally, the platform provides users of Google’s Security Command Center AI with accessible explanations of attack exposure, including impacted assets, recommended mitigations and risk summaries for security, compliance and privacy findings.
And, unsurprisingly, Microsoft is among the most active companies when it comes to merging AI with cybersecurity; it recently unveiled its Security Copilot, which leverages the company’s extensive global threat intelligence (over 65 trillion daily signals) to provide advanced analysis and aid real-time threat detection and response.
For the companies that have been relatively quiet in terms of product announcements, the wait should not be long. As an illustration, Palo Alto plans to integrate generative AI into most of its products within a year, based on a proprietary large language model.
As in any other industry, the rise of generative AI will favor the companies with the largest data sets and sufficient financial resources to train AI models specifically designed for cybersecurity, suggesting that the more established cybersecurity players are likely to gain share: Fortinet, Palo Alto, CrowdStrike, Cloudflare… Tech giants Microsoft and Google, which have massive amounts of data and AI skills and have accelerated their security investments in recent years, should also play a major role going forward.
In conclusion, AI developments are likely to enhance the overall effectiveness of cyber defense strategies, help organizations stay ahead of evolving threats in the digital landscape and give a welcome boost to vendors’ top-line growth and margins.