locks attached to a public fence, city visible in the background
Feature

Addressing Cyber Risks of AI Collaboration Tools

5 minute read
Erica Sweeney avatar
By
SAVED
From roofing firms to coffee shops, businesses are rethinking cybersecurity in the age of AI.

ChatGPT, Slack and other artificial intelligence (AI) collaboration tools helped Lava Roofing Maui in Hawaii connect its employees and be more organized and efficient.  But owner and CEO Daniel Roberts didn’t anticipate that these platforms could bring cyber risks. 

“Security wasn’t top-of-mind right away, mainly because roofing and construction typically feel far removed from cyber threats,” Roberts said. 

That changed, however, when the company’s information technology team found unusual log-in attempts on its AI collaboration platforms. New employees also improperly shared sensitive customer details in open channels, Roberts added.

“Thankfully, no serious breaches occurred, but it was a clear wake-up call for us,” Roberts said. The company added multi-factor authentication across the platforms, which reduced unauthorized log-in attempts by 22% in the last year, and started holding quarterly cybersecurity training for employees. 

Lava Roofing’s story is common, as organizations invest more in generative AI strategies, especially for collaboration and productivity. A recent KPMG AI Quarterly Pulse Survey found that 82% of leaders expect risk management to be their biggest challenge with using GenAI. 

Here’s what to know about the cyber risks of AI collaboration platforms and what to do about them. 

What Are the Security Risks in AI? 

Leaking sensitive information is one of the biggest risks with collaboration platforms, said Vahid Behzadan, assistant professor of computer science and data science at the University of New Haven. Many organizations use these tools, along with GenAI, to collect and integrate data from multiple sources, and he said it’s not always clear how the software addresses data confidentiality. 

Integrating multiple tools without controlling who has access to them and the data they contain creates vulnerabilities, as well, Behzadan noted.  

Another issue is a prompt injection attack, where hackers disguise harmful prompts into a GenAI system to leak and spread sensitive data. This may come up with chatbots used to answer HR or other types of questions — the attack “might end up forcing that chatbot to exfiltrate data, execute malicious code or leak protected intellectual property,” Behzadan said.

AI misuse, meaning employees (or possibly a threat actor) use the tools incorrectly, intentionally or unintentionally, also puts organizations at risk, said Mohamed Gebril, associate professor in the Cyber Security Engineering Department at George Mason University. 

Too often, companies use GenAI and collaboration platforms to create efficiencies and boost productivity, but they don’t fully anticipate the cyber risk, Gebril said.

Riley Westbrook, co-owner of Valor Coffee in Alpharetta, Georgia, said he didn’t. 

Westbrook’s company uses Slack, Trello, ChatGPT and other tools for faster communication and data sharing, tracking products, writing training materials and connecting a team that spans two cafes and a coffee-roasting, catering and wholesale operation. 

“The risks became clear after a team member accidentally shared a confidential document in an unsecured Slack channel,” Westbrook said. Another employee used private hiring feedback in a ChatGPT prompt. 

“That’s when we knew we needed guardrails,” Westbrook said. The organization added access controls and software to secure devices and encrypt data. 

Increasing Artificial Intelligence Cybersecurity

Just like any other software, GenAI and AI collaboration platforms come with risks, Behzadan said. But there are several ways to protect your organization. 

Understand the AI Cyber Problem

“You need to be aware of the landscape of all the risks that come with these systems,” Behzadan said, and this is a “huge contributor” to mitigating those risks.

Testing your setup for vulnerabilities is the best way to know how secure it is, Behzadan said. One method is red teaming, which involves using ethical hackers to simulate a cyberattack. You could test your system internally, or hire a consultant or another third party to test it for you, Gebril said. Also, pay attention to and learn from problems that arise, Westbrook said. 

Train Your Team in Cybersecurity

Many data breaches and security issues stem from insider threats, including employee mistakes, Gebril said. That’s why training that’s engaging and interactive is crucial, he said.

Employee training raises awareness about the risks, how to use the software properly and how to handle data, Gebril said. Training should include helping employees recognize social engineering attacks — for instance, phishing scams that use fake messages or websites to trick people into sharing sensitive information or downloading malware. 

“Education is absolutely important, not just for security but also for usability,” Behzadan said. “Ensuring that employees are aware of the capabilities, but also the limitations of these systems, is important.” 

Roberts’ company’s quarterly training emphasizes “proper handling of client information and safe use of tools,” he said.

Valor Coffee’s training focuses on data security best practices, so employees know how to handle sensitive information, Westbrook said. The team is now much more aware of the risks, and they’ve had fewer instances of accidental data sharing since training started, he reported

Implement an AI Cybersecurity Plan

Effective cybersecurity measures vary from company to company. That’s why understanding your individual risk is the first step, Behzadan said. 

For example, Westbrook’s organization uses the cybersecurity platform CrowdStrike Falcon on staff machines, cafe terminals, laptops and other devices to monitor machines in real time and alert someone when something unusual happens, such as an internal misuse, ransomware attempt or backdoor installation, he said.

Learning Opportunities

The company uses the Palantir Artificial Intelligence Platform to monitor what GenAI tools generate. Before any AI-generated messaging goes out, the tool checks for sensitive data and blocks anything that looks risky, Westbrook said.

They’ve also added access controls to ensure that employees have access only to what they need for their role, Westbrook explained. 

“Without this setup, we’d be blind to what AI is exposing and how our systems behave under the surface,” Westbrook said. “These tools catch the stuff you don’t expect and stop it before it spreads.” 

Lava Roofing uses endpoint protection software and AI-driven vulnerability scanners to detect threats before they become problems, Roberts said. “Overall, our team feels more secure, clients are reassured and we’ve prevented potential issues from becoming bigger headaches.”

Keep Monitoring for Cyber Weaknesses

Continuing to monitor your vulnerabilities, cybersecurity plan and employee awareness helps keep your systems secure, Behzadan said. Make changes as new risks come up or as you add new platforms. 

Valor Coffee set up automated systems that monitor activity on Slack and other platforms and flag any suspicious or unauthorized sharing of sensitive data, Westbrook said. Since implementing its cybersecurity measures, the number of security problems has “dropped significantly,” he said.

The bottom line is that companies must be proactive when using GenAI and AI collaboration tools, Westbrook said. They’re useful but expose organizations across industries to new risks. 

“Don’t wait for something to go wrong before taking action,” Westbrook said. “Don’t just rely on the tools themselves to be secure. Make security a part of your routine.” 

Editor's Note: Read more AI security tips:

About the Author
Erica Sweeney

Erica Sweeney has been a journalist for more than 15 years. She worked in local media in Little Rock, Arkansas, where she lives, until 2016, when she became a full-time freelancer. Connect with Erica Sweeney:

Main image: Danny B | unsplash
Featured Research