Feature

The Double-Sided Coin of Using Generative AI for Cybersecurity

By David Barry
Are you feeling lucky?

Generative AI is being touted as a valuable tool for organizations looking to improve their cybersecurity initiatives.

According to the State of AI and Security Survey Report, conducted by the Cloud Security Alliance (CSA) and Google Cloud and published earlier this month, nearly two-thirds (63%) of security professionals believe in generative AI's potential to improve organizational security, mainly as a way to identify emerging threats and enable rapid responses to crises.

But amid mounting concerns over the still-unknown risks of GenAI, the report also suggests exploring the space with caution.

The Adoption Paradox

“The journey towards integrating AI into security workflows is fraught with obstacles, including the need to mitigate dual-use concerns, bridge skill gaps and encourage appropriate reliance on automated systems,” the report reads.

The warning may be warranted. According to the data, more than half of the 2,486 organizations surveyed (55%) said they are planning to implement generative AI security solutions this year.

Other research, carried out by the Enterprise Strategy Group last November across 370 organizations of up to 10,000 employees, found that 87% of security professionals recognize the potential of generative AI for cybersecurity. Many, however, are aware that the same technology that can help them can also be used by attackers to mount increasingly sophisticated cyberattacks, on top of the risks associated with potential biases and ethical considerations.

In other words, your best defense may turn out to be your biggest vulnerability.

David Pumphrey, CEO of Riveraxe, which develops information and data management solutions for the health industry, said the reality of GenAI for security is twofold:

On one hand, AI can significantly improve cybersecurity protocols by automating threat detection, enhancing data analysis and predicting future threats with greater accuracy and speed than traditional methods. For instance, in deploying AI-driven solutions to secure health data, Riveraxe was able to detect anomalies in system behavior and potential breaches much faster, enabling proactive rather than reactive responses. This kind of automation and predictive capability is invaluable in an environment where threats evolve rapidly.

On the other, while these capabilities are useful in industries that deal with large amounts of sensitive data, augmenting cybersecurity with AI brings its own set of risks and challenges. AI systems, by virtue of their complexity, become attractive targets for sophisticated cyberattacks designed to exploit weaknesses in the AI models themselves, Pumphrey said. In the case of Riveraxe, this becomes a concern when integrating AI with existing electronic health records, where the security of both patient data and the AI systems is paramount.
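The kind of anomaly detection Pumphrey describes can be sketched in a few lines of conventional machine learning. The example below is purely illustrative, not Riveraxe's implementation: the per-session metrics are invented for the example, and it uses scikit-learn's IsolationForest to flag sessions that deviate from normal behavior.

```python
# Illustrative only: flag unusual system behavior with an unsupervised model
# so potential breaches surface faster than manual review would allow.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session metrics: records accessed, distinct patients
# touched, and share of activity occurring outside business hours.
normal_sessions = rng.normal(loc=[50, 10, 0.1], scale=[10, 3, 0.05], size=(2000, 3))

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_sessions)

# Score new sessions; a prediction of -1 marks an anomaly worth investigating.
new_sessions = np.array([
    [55.0, 11.0, 0.12],    # looks routine
    [900.0, 400.0, 0.95],  # bulk access at odd hours
])
print(detector.predict(new_sessions))  # e.g., [ 1 -1]
```

The point of the sketch is the workflow, not the model choice: a baseline of normal behavior is learned once, and every new session is scored against it so analysts can investigate outliers instead of reading raw logs.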


The Need for an Integrated, Multi-Discipline Approach

This does not mean companies should turn away from AI's ability to boost security. Instead, Pumphrey said, companies need to balance the benefits of AI with its potential vulnerabilities through a dynamic strategy that emphasizes security by design, continuous monitoring and updating of AI systems, and education of all stakeholders about the potential risks and safeguards.

“Such a strategy requires collaboration across multiple disciplines, encompassing IT, cybersecurity, healthcare professionals and AI developers, to ensure a unified approach to using AI in enhancing cybersecurity defenses,” he said. “The potential for AI to revolutionize cybersecurity is immense, but it must be approached with caution and responsibility to truly harness its benefits while safeguarding against its risks.”

In the end, Michelle Pruitt, director of engineering strategy and planning for Microsoft, which has invested heavily in AI through OpenAI, said there are practical advantages to bringing generative AI into the cybersecurity mix. In particular:

  • Automated Security Protocols: GenAI can automate the creation and updating of security protocols, making it easier to keep pace with evolving threats. This, she said, includes generating complex security configurations and related policies tailored to specific systems (see the sketch after this list).
  • Enhanced Threat Detection: With the ability to analyze vast amounts of data, AI can identify patterns that indicate malicious activity more efficiently. AI can also be trained to simulate various attack scenarios, which in turn helps enhance the predictive capabilities of threat detection systems and processes.
  • Incident Response: AI can provide real-time incident responses by generating solutions to security breaches as they happen, she said. AI incident response can potentially reduce the damage caused by such incidents.
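As a rough illustration of the first point, a security team might prompt a generative model to draft a baseline configuration for review. The sketch below is hypothetical rather than anything Pruitt or Microsoft endorse: it assumes the OpenAI Python client, the model name and prompt are placeholders, and a human reviews the draft before anything is deployed.

```python
# Hypothetical sketch: ask a generative model to draft a security policy,
# then route the draft to a human reviewer before it is applied anywhere.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Draft an nginx configuration snippet that enforces TLS 1.2+, "
    "adds HSTS and basic security headers, and rate-limits login endpoints. "
    "Annotate each directive with a comment explaining its purpose."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

draft_policy = response.choices[0].message.content
print(draft_policy)  # reviewed and tested by a person; never applied automatically
```

The value here is speed of drafting, not autonomy: the generated configuration is a starting point that still goes through the organization's normal review and testing gates.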


GenAI as an Attack Tool

Prompt Vibes founder Dhanvin Sriram said while the benefits of AI for security are real, companies still need to be cautious and acknowledge that generative AI also introduces certain risks and challenges. The technology should be implemented thoughtfully and carefully to reduce potential risks, he said.

One of the key concerns he noted is the ability of malicious actors to leverage generative AI techniques to develop sophisticated cyberattacks, such as creating convincing deepfakes for social engineering or generating malware that can evade traditional detection methods. The complexity of generative AI algorithms, he said, may also introduce vulnerabilities into the systems they are integrated with, potentially opening new avenues for exploitation.

“Organizations will have to prioritize strong security measures to deal with these challenges, including proper testing and validation of generative AI applications, implementing stringent access controls and continuously monitoring for anomalous behavior,” he said.

He believes investing in cybersecurity education and training programs can empower employees to recognize and respond effectively to emerging threats posed by generative AI. “I believe that while generative AI can improve cybersecurity, its implementation must be approached with caution and accompanied by comprehensive risk management strategies to ensure its benefits outweigh its potential risks,” he said. 

Kelwin Fernandes, CEO of NILG.AI, is less optimistic about the possibilities inherent in the technology. “I do not foresee GenAI being a major helper in cybersecurity," he said. "At best, it could help you identify potential impersonations by analyzing the tone, pitch."

In his view, organizations think about security in probabilistic terms, estimating the probability that an event is a threat and balancing the trade-off between blocking well-intended behavior and letting an attack through. That kind of probabilistic modeling is more aligned with traditional machine learning algorithms.
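A bare-bones sketch of that probabilistic framing, assuming labeled event data and scikit-learn; the features, data and threshold are invented for the example:

```python
# Illustrative only: score each event with a probability of being a threat,
# then choose a threshold that trades blocked legitimate behavior (false
# positives) against attacks allowed through (false negatives).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical event features (e.g., login frequency, bytes transferred,
# geo-velocity) and labels: 1 = confirmed threat, 0 = benign.
X = rng.normal(size=(5000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

threat_prob = model.predict_proba(X_test)[:, 1]  # P(event is a threat)

THRESHOLD = 0.8  # raise to block less legitimate traffic, lower to catch more attacks
flagged = threat_prob >= THRESHOLD
print(f"Flagged {flagged.sum()} of {len(flagged)} events for review")
```

The threshold is where the trade-off Fernandes describes lives: moving it changes how much well-intended activity gets blocked versus how many attacks slip past.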

Veteran tech writer Bill Mann said he doesn't believe AI will make a big impact on organizational cybersecurity because of the potential vulnerabilities it will create.  


"We will keep plugging holes in the ship, and they will start spurting water. AI will get to a point that its very useful for that, but hackers will find an unseen vulnerability in the AI itself — rinse and repeat," he said. "It’s always been an up and down, back and forth battle. AI will keep us ahead for a time but not forever."

About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management through enterprise content management and content services. Now, with the development of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communications and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and general AI.

Main image: Nicu Buculei | Flickr | CC BY-SA 2.0 DEED