A group of triangle lights.
Editorial

3 Steps to Securely Leverage AI

3 minute read
Illia Sotnikov avatar
By
SAVED
Which cybersecurity model should companies follow for AI?

Artificial intelligence (AI) is now a key part of many essential business processes and daily consumer use cases. Its operations and outputs are becoming business critical, often having access to sensitive and regulated data. However, AI technology is quite different from the familiar IT stack enterprises are used to securing. And with its rapid emergence and business reliance, many organizations are now struggling to fully understand their AI inner workings and dependencies.

These factors make AI a new and attractive target for bad actors. Security teams are racing to discover ways to ensure the security of data accessed or generated by AI-powered tools. Until then, let’s explore how AI technology can be secured using the CIA triad model: confidentiality, integrity and availability.

1. Data Confidentiality

Like it or not, your organization’s users are almost certainly using third-party AI tools and plug-ins. While many know AI best practices, the perceived confidentiality of a quick answer or friendly chat with any AI tool can lead to misplaced trust.

Third-Party AI

Businesses can mitigate the issue by taking a proactive approach to employee training. Update your organization’s security training with AI messaging, and train all employees to maintain compliance and security while using AI. Include guidelines and educate your team about the dangers of sharing sensitive information, preventing identification by anonymizing the information and exercising skepticism with the advice given. Remind your employees that these tools are exactly that — they are not yet capable of real intelligence but are sophisticated data analysis tools.

Internal AI

It is important to recognize that even internal AI systems, like Microsoft 365 Copilot, don’t guarantee data confidentiality. An internally deployed AI model typically has access to private and confidential data, and you must ensure it does not become a vulnerability. For example, unauthorized users could gain access to sensitive information. This is because new documents generated by Copilot do not inherit any sensitivity labels from the source documents. Furthermore, Copilot relies on the permissions assigned in Microsoft 365. If users have been granted inappropriate access to content, then sensitive information can quickly spiral out of control.

To address these challenges, look for ways to leverage familiar security controls. Start by maintaining a least-privilege approach to data access rights. Implementing automated data discovery and classification is vital to ensure accurate and timely labeling of newly generated content, so proper security controls can be applied to maintain confidentiality.

Related Article: Generative AI Is Changing Work. Your Cybersecurity Training Should Change With It

2. Data Integrity

We don’t know how exactly AI models come to specific conclusions. Most have seen AI decisions that range from better-than-average advice to their human counterpart to poor judgment and even hilariously wrong responses. And since the AI decision-making process is a black box for us, it will be hard to tell when outputs might have been manipulated to an adversary’s advantage. So what do we do?

Verify Decisions

One strategy for ensuring trust in the integrity of AI systems is to verify their decisions. For example, you can have human auditors examine a sample of AI outputs monthly or after a certain number of transactions. Manual inspection can uncover errors, biases and unexpected outcomes.

Secondary System

In addition to human monitoring, there are significant benefits to adding another AI model to oversee a more scalable but more complex approach. That is, a secondary AI scrutinizes the decisions of the primary AI, searching for irregularities, biases and departures from the norm. A dual-layered approach combines human insight with AI efficiency, providing a balanced mechanism for monitoring AI applications.

Proactive Controls

You should also consider more proactive controls, such as filtering and sanitizing both input and output for the AI models. In particular, it’s important to uncover injection attacks, where threat actors hide malicious code within otherwise legitimate input data from a client to the application. Unlike traditional code, AI language models operate within a natural conversation. This absence of definitive syntax makes security more complex. You must continuously improve filters to identify potentially malicious input through a chatbot on your website or an AI tool implemented by the R&D team. A more complex secondary AI approach may also help establish typical input and output patterns and user behavior and then flag or block deviations from those baselines.

3. Availability

Lastly, you should consider the availability of both AI models and the systems and processes they are enabling. Ask yourself and your IT team:

  • What happens if a system is overloaded by unnecessary or unauthorized requests or maliciously crafted requests that use a substantial amount of computing power?
  • How does this impact the rest of the AI pipeline?
  • What impact will it have on your customers and the business?

Some of these questions can be answered through security controls, such as comprehensive access management and high-availability deployments, in addition to the input filtering we touched on above.

Learning Opportunities

Related Article: Enterprise Security 2.0: How AI Is Changing the Game

In Conclusion

AI industry experts say we are still in the early stages of AI development. This suggests that our understanding of the risks and how to mitigate them is still developing. However, using the CIA triad, your current security knowledge and many existing security controls will create a strong foundation for securing AI-powered systems and processes.

fa-solid fa-hand-paper Learn how you can join our contributor community.

About the Author
Illia Sotnikov

Ilia Sotnikov is a security strategist and VP of user experience at Netwrix, a cybersecurity company based in Frisco, Texas. He has over 20 years of experience in cybersecurity and IT management during his time at Netwrix, Quest Software and Dell. In his current role, Sotnikov is responsible for technical enablement, UX design and product vision across the portfolio. His main areas of expertise are data security and risk management. Connect with Illia Sotnikov:

Main image: By Dario Raijman.
Featured Research