Feature

Generative AI Is Changing Work. Your Cybersecurity Training Should Change With It

By David Barry
Gen AI promises to lighten workloads, but it also changes the nature of cybersecurity risks. Teach employees how to protect themselves — and the organization.

An estimated 5.5 million people worked as cybersecurity professionals as of October 2023, according to ISC2, a nonprofit organization for cybersecurity professionals. That marked an 8.7% increase over 2022 and the highest number of people ever recorded working in digital security. Yet demand for cybersecurity professionals still outpaces supply.

In fact, another four million cybersecurity specialists would be needed to close that skills gap globally, according to the ISC2 report.

As adoption of generative AI continues to grow, organizations are feeling the pressure to up their security game. So while the cybersecurity worker shortage may continue, there are a few steps companies can take to help workers protect themselves and the organization, while taking some of the pressure off IT managers.

What Employees Are Already Learning About Generative AI Threats  

Companies are already offering employees security training in the face of generative AI threats, said StrikeReady chief product officer Anurag Gurtu. In particular, companies are doubling down on training around data classification and access management, teaching employees about the sensitivity of the data used by AI systems and the importance of strict access controls.
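
To make the data classification and access control idea concrete, here is a minimal sketch of a label-based gate that compares a user's clearance with a document's sensitivity label before any text is handed to a generative AI tool. Everything here (the label scheme, is_cleared and send_to_genai) is hypothetical and for illustration only; it is not drawn from any product mentioned in this article.

```python
# Minimal sketch of a classification-based gate for generative AI prompts.
# All names below are illustrative assumptions, not any vendor's API.

SENSITIVITY_LABELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def is_cleared(user_clearance: str, data_label: str) -> bool:
    """Allow a prompt only if the user's clearance covers the data's label."""
    return SENSITIVITY_LABELS[user_clearance] >= SENSITIVITY_LABELS[data_label]

def send_to_genai(prompt: str, data_label: str, user_clearance: str) -> str:
    if not is_cleared(user_clearance, data_label):
        # Block and surface the violation instead of silently sending data out.
        raise PermissionError(
            f"Data labeled '{data_label}' exceeds clearance '{user_clearance}'"
        )
    # Placeholder for the actual call to an approved generative AI service.
    return f"[sent to model] {prompt}"

# Example: an employee with 'internal' clearance can submit 'internal' data,
# but the same call with data_label='confidential' would raise PermissionError.
print(send_to_genai("Summarize this memo", "internal", "internal"))
```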

Many training programs now include modules on the specific risks associated with generative AI, such as data poisoning and model manipulation, Gurtu continued. He also pointed out that companies are using simulated cyberattacks involving AI to train employees on rapid response and decision-making in scenarios where AI systems might be compromised.

What it comes down to is creating a workplace culture where cybersecurity is everyone's responsibility, he said. This in turn can lead to more vigilant behavior and proactive reporting of suspicious activities.

“As AI systems become more integrated into business processes, it's crucial to train employees on the ethical implications of AI, particularly in handling and processing data,” he said.

“Cybersecurity landscapes evolve rapidly. Ongoing education and keeping abreast of the latest AI developments and threats is essential.”

Related Article: Here's Where to Start With Your Cybersecurity Program

Rethinking Cybersecurity

Christopher Gustafson, director of cloud services at Oshkosh Corporation, said companies including his own are rethinking their cybersecurity training to fortify defenses against the risk of security breaches engendered by generative AI.

Gustafson echoed several of the points Gurtu raised, advising organizations to look specifically at:

1. Data Sensitivity and Access Control

The expansive data needs of generative AI necessitate stricter data sensitivity awareness and access control training, he said. Employees need to be educated on the types of data that generative AI systems access and the potential risks involved. Enhanced training focuses on rigorous data categorization, access limitations, and the importance of following strict protocols when managing sensitive information.

2. Advanced Threat Simulation

With the complexity of threats posed by generative AI, cybersecurity training needs to include advanced threat simulations. These simulations are tailored to mimic potential AI-related breaches, providing employees with hands-on experience in identifying and responding to sophisticated attacks, Gustafson explained. This approach helps build a more resilient and responsive workforce.
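
As a small illustration of what such an exercise might report back to trainers, the sketch below tallies click, report and ignore rates from a simulated phishing campaign. The record format and function name are assumptions made for this example, not any vendor's tooling.

```python
# Sketch of scoring a simulated phishing exercise, assuming results arrive
# as (employee, action) records; the field names are illustrative.
from collections import Counter

def score_simulation(results: list[tuple[str, str]]) -> dict[str, float]:
    """Summarize click vs. report vs. ignore rates from a phishing drill."""
    actions = Counter(action for _, action in results)
    total = len(results) or 1  # avoid division by zero on an empty campaign
    return {
        "click_rate": actions["clicked"] / total,
        "report_rate": actions["reported"] / total,
        "ignore_rate": actions["ignored"] / total,
    }

results = [("alice", "reported"), ("bob", "clicked"),
           ("carol", "ignored"), ("dave", "reported")]
print(score_simulation(results))
# {'click_rate': 0.25, 'report_rate': 0.5, 'ignore_rate': 0.25}
```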

3. AI Ethics

An emerging aspect of training involves AI ethics and its intersection with cybersecurity. Because generative AI can be used unethically, he said, employees should be trained to recognize and report unethical AI usage and to understand the broader implications of AI manipulation on data security and privacy.

Related Article: AWS's Diya Wynn: Embed Responsible AI Into How We Work

4. Human Error

Human error is a significant factor in data breaches. Modern cybersecurity training therefore places a greater emphasis on behavioral aspects. This includes training employees to be vigilant about phishing attacks, social engineering tactics and other methods that might exploit human vulnerabilities, especially in the context of large-scale data exposure.
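
For a sense of the cues this kind of training emphasizes, the sketch below flags two classic phishing indicators: a sender domain that doesn't match the organization it claims to represent, and urgency language. It is a deliberately simplified teaching aid with an assumed keyword list, not a real detector; AI-written lures can evade rules this simple.

```python
# Illustrative red-flag checker for phishing awareness training.
# The keyword list and checks are simplifications for teaching purposes.
import re

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "password"}

def phishing_red_flags(sender: str, claimed_org_domain: str, body: str) -> list[str]:
    flags = []
    # Flag a sender domain that doesn't match the claimed organization.
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain != claimed_org_domain.lower():
        flags.append(f"sender domain '{domain}' != claimed '{claimed_org_domain}'")
    # Flag pressure language commonly used in social engineering.
    words = set(re.findall(r"[a-z]+", body.lower()))
    hits = words & URGENCY_WORDS
    if hits:
        flags.append("urgency language: " + ", ".join(sorted(hits)))
    return flags

print(phishing_red_flags("it-support@examp1e.com", "example.com",
                         "Your password will be suspended. Verify immediately."))
```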

5. Continuous Learning

The rapidly evolving nature of generative AI technology necessitates continuous learning programs. Regular updates on the latest AI security trends and threats ensure employees' knowledge remains current and effective against emerging vulnerabilities.

“As companies integrate generative AI into their operations, it's crucial to adapt cybersecurity training to meet these new challenges,” Gustafson said. “By focusing on data sensitivity, advanced threat simulations, AI ethics, human error and continuous learning, organizations can enhance their defenses against the unique threats posed by this transformative technology.”

Related Article: Are You Giving Employees Guidelines on Generative AI Use? You Should Be

Train on Generative AI Risk Before Generative AI Use

Portal26 CEO Arti Arora Raman said it is not so much that training policies need to change; it is that real training needs to happen in the first place.

She cited her company's State of Generative AI survey to support her argument: it found that 60% of companies provide less than five hours of training to employees a year.

That is where the problem lies, she said. She recommended companies start by teaching employees what generative AI is, how it works and what the risks are, before teaching them how to extract the most value from generative AI tools. The next step is for employees to understand and acknowledge their organization's governance policies, which are attuned to the risks the company faces.

About the Author
David Barry

David is a Europe-based journalist with 35 years of experience, the last 15 of them spent following the development of workplace technologies, from the early days of document management through enterprise content management and content services. Now, with the development of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communications and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and general AI.

Main image: Kasia Derenda | Unsplash