
AI Safety in Internal Comms: Ensuring Data Integrity and Security

By Mary C. Long
The path to generative AI (GenAI) adoption at organizations is ill-defined, but there are ways for internal communicators to tap into its capabilities with more assurance.

AI has permeated every sector of society, but its adoption has been far from orderly. According to Gallagher's 2023/24 State of the Sector report, AI implementation is the Wild West of our era, with few companies following a plan. 

In fact, 71% of those surveyed for the report said their organizations do not provide internal communications professionals with guidance on when, where or how to use AI. This does not bode well for data integrity and security. It’s time to take a step back, survey the land and map out next steps before building critical processes on a shaky foundation.

Confronting Underlying AI Challenges

There’s a long list of challenges to confront when it comes to AI implementation, from system integration and data privacy to employee training and creating baselines for usage. For those reasons, Cheraé Robinson, head of community at Flybridge Capital Partners, said a systematic approach is essential. “One that builds redundancies with safeguards to avoid potential pitfalls, paving the way for scalable implementation,” she told Reworked.

This starts with understanding how AI can help you. 

Laurel Dzneladze, product marketing manager consultant at Microsoft, said communicators are often asked to help their organization by unlocking value through AI. So, the first challenge is “training communicators to recognize AI opportunity,” which she sees as a thought starter for frameworks, summarizing and automation. When harnessed correctly, “AI can surface information you might not have uncovered otherwise,” she said. 

However, as ClearBox Consulting's Suzie Robinson pointed out, many people simply don't know how they feel about AI, which complicates its adoption. She cited the IC Index 2024, which documents those mixed feelings at play. For example, one-third of employees distrust the internal messaging written by AI. 

“Keeping the use of generative AI a secret could be the answer,” Robinson said, “but this opens ethical questions around transparency and, should the secret be discovered, this could cause even more harm.”

Increasing comfort with AI-driven communications requires exposure to the technology. But how can this happen when ChatGPT is frowned upon and companies aren’t ready to invest in more secure generative AI (GenAI) solutions without proof points?


Generative AI Capabilities at the Crawling Stage in Most Companies

Many companies have intelligent AI-powered search capabilities that surface information based on role, people you work with, location, etc., but “most organizations haven't purchased a GenAI license,” observed Dzneladze. “So, while workers may be using external GenAI (ChatGPT) for work purposes, most organizations haven't purchased GenAI at scale.” 

Robinson said at the enterprise level, the most commonly adopted AI-driven capabilities include customer service automation, coding co-pilots for engineering teams and operational workflow automations. 

“We anticipate advancements in automation agents for workflow tasks, deeper technical operations and more complex operations such as risk assessment, data analysis and system design. These emerging capabilities will enhance the efficiency and effectiveness of enterprise operations.”

The full automation of tasks and processes is also on the horizon, along with photo and video creation, audio translation in the speaker's voice and mimicking people's writing styles. ClearBox’s AI Trends in Employee Experience in 2024 showed that many employee experience platforms already include AI features. Adoption of most of these is still fairly low, but some capabilities are already common among IC teams, Robinson said. They’re using GenAI to reduce word count or provide a starting point for an article — in other words, as a support rather than a replacement.

“Around half the products I've assessed include generative AI in this form,” she said. “Some products include generative image capabilities, which are helpful for a quick time saver but require attention to detail.”

A handful of AI-powered scheduling and publishing management apps can also assist with personalization and audience targeting, surfacing content for the right people at the right time and improving the employee experience (EX) in the process.

Intuitively, we know that getting there from here is inevitable, but the path forward can feel frightening. There’s a fine line between personalization and privacy, and in our race to create all of the best things first, we’re increasingly sacrificing confidentiality in new ways. Not only that, we’re also relying on data that has been proven to be less than trustworthy.


Blurring Barriers at Breakneck Speed

Data protection regulations seem to barely keep pace with evolving technology. Data breaches, phishing attempts, bad actors and a host of other data security challenges are in play, a list that seems to expand daily. Companies also have overly eager early adopters to worry about.

In April 2023, Samsung workers made a major error by uploading confidential data to ChatGPT, including the source code for a new program, internal meeting notes and other data relating to their hardware. That information became part of OpenAI’s learning materials and could be used to further train and refine ChatGPT.

As a result, “many software vendors now provide AI capabilities in a ring-fenced manner, claiming that it's entirely secure and data won't leave a client's environment,” said Robinson. But that doesn’t mean it won’t hallucinate, which is a recurring concern for any AI-generated content. “So, if it gets confused or accesses a document that's irrelevant for the employee, then the result could seem authoritative yet be wrong.”
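
In practice, ring-fencing is an access-control problem: the model should only ever be grounded in documents the employee asking the question is already allowed to see. The sketch below illustrates that idea; the Document and Employee types, the group-based permissions and the keyword scoring are illustrative assumptions, not any vendor's actual API.

```python
# Illustrative sketch only: permission-scoped retrieval before generation.
# Document, Employee and the group-permission model are assumptions here,
# not a real vendor API.
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set[str] = field(default_factory=set)


@dataclass
class Employee:
    user_id: str
    groups: set[str] = field(default_factory=set)


def retrieve_for_prompt(query: str, docs: list[Document],
                        user: Employee, top_k: int = 3) -> list[Document]:
    """Ground the model only in documents this employee may already see."""
    permitted = [d for d in docs if d.allowed_groups & user.groups]
    # Naive keyword-overlap relevance; a real system would use embeddings.
    # The point is that the permission filter runs before anything else.
    terms = set(query.lower().split())
    return sorted(
        permitted,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )[:top_k]
```

Filtering by permission first narrows the "irrelevant document" failure Robinson describes, though it does nothing about hallucination itself.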

Data and security issues have long existed with knowledge bases in general. Individuals store data with the wrong permissions, allowing everyone in the company to view documents that should be undiscoverable. But GenAI adds security concerns beyond this, and beyond its hallucinations and inherent biases, said Dzneladze: “prompt injection and data poisoning,” for example. “People creating bad content on purpose to retrain the model or asking questions to make the GenAI act maliciously.”
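
To make prompt injection concrete: when retrieved content is pasted straight into a prompt, anyone who can edit an intranet page can smuggle instructions into the model's context. Below is a deliberately naive screen, a sketch only; the phrase patterns and helper names are invented for illustration, and real defenses are far more layered than a regex list.

```python
import re

# Naive, illustrative prompt-injection screen. The patterns and helpers
# are invented for this sketch; production defenses are far more layered.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous|the above) instructions",
    r"disregard the (system|above) prompt",
    r"you are now",
    r"reveal .*(password|api key|secret)",
]


def looks_injected(passage: str) -> bool:
    """Flag retrieved text that reads like instructions to the model."""
    text = passage.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)


def build_prompt(question: str, passages: list[str]) -> str:
    """Assemble a grounded prompt, dropping suspicious passages."""
    context = "\n---\n".join(p for p in passages if not looks_injected(p))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

Data poisoning is the slower cousin of the same attack: instead of hijacking a single prompt, bad content is planted deliberately so the model, or the index it retrieves from, learns the wrong thing over time.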

Leaks and other complex cybersecurity threats also exist. Robinson cautioned companies to be proactive: manage and detect weaknesses and vulnerabilities in their AI models, and implement defenses against these increasingly prevalent threats.


Dzneladze doesn’t see internal communications prioritizing security in response to AI, though — not yet, at least. 

“Most organizations haven't enabled GenAI, so it's not quite a necessity,” she said. “They should be focused on ensuring their content is stored and tagged in a way that GenAI/AI will be able to find it, and teaching communicators to be stewards of change.”
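
What "stored and tagged in a way that GenAI will be able to find it" can look like in practice is a consistent metadata schema that deterministic filters, and any AI search layered on top of them, can rely on. A minimal sketch follows; the schema and field names are illustrative assumptions, not a specific platform's model.

```python
# Minimal sketch of a tagging discipline for comms content. The schema
# and field names are illustrative, not a specific platform's model.
from dataclasses import dataclass


@dataclass
class CommsItem:
    item_id: str
    title: str
    body: str
    audience: str      # e.g. "all-staff", "engineering"
    topic: str         # e.g. "benefits", "security-policy"
    owner: str         # who keeps this item accurate
    review_date: str   # ISO date; stale items can be excluded from answers


def findable(items: list[CommsItem], audience: str,
             topic: str) -> list[CommsItem]:
    """A deterministic metadata filter an AI search layer could sit on."""
    return [
        i for i in items
        if i.audience in (audience, "all-staff") and i.topic == topic
    ]
```

Content that fails this kind of filter is content no retrieval layer, AI-powered or not, will reliably surface.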

Staying ahead of evolving threats will challenge most companies, but those approaching AI haphazardly will be the next data disaster in the news.

About the Author
Mary C. Long

For over a decade, Mary has been a ghostwriter and captivating content creator for transformative voices, laying the groundwork for AI and other emerging technologies.
