
The Rise of Shadow AI — And What Leaders Should Do About It

By David Gordon
57% of employees admit to hiding AI use at work, exposing companies to major risks. Find out how to govern shadow AI before it undermines your enterprise.

An IT analyst at a large hospital system was juggling backlogged tickets when a co-worker suggested trying Claude to draft faster replies. Within minutes, the analyst was using AI to summarize internal documentation, troubleshoot error messages and write more polished communications to clinicians. It wasn’t part of any sanctioned workflow, but it worked. And no one asked where the extra efficiency came from.

Scenes like this are playing out in finance, retail, government and beyond. The tools are free, fast and not officially approved. This emerging behavior is called shadow AI, and it’s already reshaping how work gets done without leadership even knowing.

According to Microsoft and LinkedIn’s 2024 Work Trend Index, 75% of knowledge workers now use generative AI at work, and 78% of them bring their own tools, a practice commonly dubbed Bring Your Own AI (BYOAI), without organizational guidance.


Shadow AI Defined: Hidden Productivity or Hidden Risk? 

Shadow AI happens when employees turn to AI tools like ChatGPT, Claude or Perplexity without looping in IT. No approval cycle. No documentation. Just a prompt box hidden behind a spreadsheet.

The pattern is familiar. It mirrors the rise of shadow IT, when workers set up Dropbox accounts to move faster or built Slack channels while waiting for an official messaging tool. This time, the tools generate copy, write code and summarize dense strategy documents. Some employees use them to cut hours off routine tasks. Others are experimenting at the edges of what's possible.

The momentum comes from real-world pressure. People need to move quickly, learn on the fly and solve problems without filing support tickets or waiting for vendor training sessions. Official tools often lag behind. AI steps in with speed, flexibility and the kind of frictionless interface that invites use.

No logins. No onboarding. No lag. Just results, or at least the impression of results, which for many is enough.


The Risk of Shadow AI in the Enterprise  


Shadow AI feels like a shortcut. But shortcuts carry baggage. When employees paste client data, source code or internal reports into public AI tools, the organization loses visibility. The content often travels into opaque systems with unclear retention policies and limited auditability. Legal teams can’t verify what was shared. Compliance leads can’t trace where it went. And IT inherits a risk profile built on silence.

The tools themselves introduce new layers of uncertainty. Their outputs often look polished and persuasive. But when those outputs make it into emails, dashboards or production environments, the errors don’t always announce themselves. 

Ronan Murphy, chief data strategy officer at Forcepoint, views shadow AI not just as a visibility issue, but as a direct data risk. “AI, especially GenAI, only becomes useful when it has access to your data,” he explained. “From a CISO’s perspective, shadow AI isn’t just a visibility issue — it’s a data risk issue because it’s unsanctioned and ungoverned. In this new era, regulators are looking at your data, your enterprise wants value from that data and attackers are actively targeting it.”

A global study by KPMG and the University of Melbourne found that 57% of workers admit to hiding their AI use from employers, and 48% have uploaded sensitive company data into public AI platforms, a clear signal of a governance gap and risky data exposure.

The biggest risk may be the false sense of control. Users feel empowered. Managers feel behind. And the whole system keeps moving forward without a record of how decisions were made.

The Wrong Way to Address Shadow AI

Blocking access to generative AI might feel like the responsible move. Security teams can blacklist domains, IT can shut down tools at the firewall and leadership can check the compliance box that says, “we’ve addressed the issue.” But the behavior doesn’t stop. It just moves.

Employees Find Workarounds 

Employees who find value in these tools will work around the controls. Some will use personal devices or mobile hotspots. Others will pivot to lesser-known tools that haven’t made the banned list yet. The generative AI ecosystem is expanding too quickly for centralized IT policies to chase every new variant in real time.

Organizations Face Cultural Costs 

There’s also the cultural cost. Hardline restrictions send a message that experimentation isn’t welcome, even when the goal is efficiency or insight. They also squander a chance to learn how people are actually solving problems in the flow of work.

Leaders Don't Understand the 'Why'

More importantly, the bans don’t address the underlying reason these tools gained traction in the first place. They don’t offer better alternatives, don’t guide safe use and don’t solve the speed vs. safety tension that gave rise to shadow AI. They just prohibit it. And in most cases, prohibition doesn’t change behavior. 

“You can’t manage what you don’t understand,” Murphy said, adding that the battle for cybersecurity is now at the data layer. “Securing against shadow AI starts with knowing what data matters, adapting controls dynamically the moment risk changes and protecting that data through automation and unified policy — across every app, endpoint and AI model.”

A Practical Playbook for Managing Shadow AI

Shadow AI marks a shift in how work gets done. It signals a need for new leadership habits, grounded in visibility, clarity and trust. This moment is about guiding how people solve problems when the usual paths feel too slow or too limited.

To respond effectively, leaders can take the following steps:

1. Acknowledge AI Use

Recognize that employees are already using generative AI in their workflows. Starting from this reality builds credibility and creates space for responsible policy.

2. Define What's Safe and What's Not

Instead of blanket restrictions, create guidelines that identify low-risk use cases and clearly flag high-risk scenarios, such as handling regulated data or generating public-facing content.

3. Offer Approved Tools

Provide a short list of vetted AI platforms that meet your organization's security and compliance standards. Make it easy for people to choose the right option.

4. Make Training Accessible & Clear

Equip teams with short, practical resources that explain how generative AI works, what its risks are and how to use it responsibly. Prioritize real-world examples over abstract warnings.

5. Monitor AI Use With Purpose 

Use tools that provide visibility into how AI is used across the organization. Treat this as a source of insight, not punishment.
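
What purposeful monitoring looks like depends on your stack, but even existing proxy or gateway logs can be a starting point. Below is a minimal sketch, assuming a CSV export with department and domain columns; the column names and domain list are illustrative, not a prescribed schema:

```python
# Minimal sketch: tally requests to known generative AI domains per
# department from a proxy log export. The domain list and CSV columns
# ("department", "domain") are illustrative; adapt to your own pipeline.
import csv
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "claude.ai", "www.perplexity.ai"}

def ai_usage_by_department(log_path: str) -> Counter:
    """Count hits to known AI domains, grouped by department."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in AI_DOMAINS:
                usage[row["department"]] += 1
    return usage

# Surface the teams leaning on AI most, as a prompt for conversation
# and enablement rather than discipline.
for dept, count in ai_usage_by_department("proxy_log.csv").most_common(5):
    print(dept, count)
```

The point of a tally like this is insight: heavy use in one team signals a workflow worth supporting with a sanctioned tool, not a target for punishment.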


6. Assign Clear Ownership

Designate a person or group to oversee AI governance. That might be a digital innovation lead, a cross-functional team or a rotating task force. The point is to avoid ambiguity and drift.

Shadow AI will not wait for permission. But it will respond to leadership that listens, learns and provides a safer, smarter path forward.

Turning Shadow AI Into an Enterprise Advantage

Shadow AI reflects how quickly employees adapt, experiment and push tools to meet the demands of real work. These informal workflows are not security holes. They are signals. 

Turning those signals into an advantage begins with observation. By studying how people already use AI, whether in sales outreach, technical documentation, analytics or customer support, leaders gain a clearer picture of where gaps exist and where new tools could bring value.

Next comes internal capability. Companies can build safe, sanctioned versions of the tools employees already lean on. That might include training open-source models on company data or implementing retrieval-augmented generation (RAG) to answer complex questions with vetted sources. These tools work best when designed around real tasks.
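
To make the RAG idea concrete, here is a minimal sketch of the retrieval step, using TF-IDF similarity as a stand-in for a vetted embedding model. The snippets and prompt template are purely illustrative:

```python
# Minimal RAG retrieval sketch: find the vetted snippets most relevant
# to a question, then ground the model's prompt in them. TF-IDF stands
# in for an embedding model; swap in whichever one your org has vetted.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative "vetted sources"; in practice, snippets from approved
# internal documentation.
documents = [
    "Client data may only be processed in systems approved by security.",
    "Expense reports are due by the fifth business day of each month.",
    "Generative AI output must be reviewed before it reaches customers.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k snippets most similar to the question."""
    matrix = TfidfVectorizer().fit_transform(documents + [question])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [documents[i] for i in scores.argsort()[::-1][:k]]

question = "Can I paste client data into a public AI tool?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only these vetted sources:\n{context}\n\nQuestion: {question}"
print(prompt)  # send to your sanctioned model instead of printing
```

Grounding the prompt in retrieved, approved snippets is what keeps answers tied to vetted content rather than whatever the model absorbed in training.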

Learning and development plays a central role. Prompt engineering, model limitations and responsible use should be part of day-to-day training. Skill building must keep pace with experimentation.

Governance, too, should shift from enforcement to enablement. Focus on outcomes: better decision quality, fewer errors, faster turnaround. Track AI performance metrics and use those insights to guide the next round of design.

Murphy pointed out that AI can also be part of the solution: “Using AI in solutions like data security posture management with data detection and response allows you to bring more intelligent visibility and more automated controls to meet this exact challenge. It’s how organizations can embrace AI without losing control of their data — or their compliance posture.”


Ignoring Shadow AI Leaves You Exposed

Shadow AI shows how people respond when systems fall short. It signals adaptation: quiet, frictionless and persistent. A creative detour. A display of resourcefulness. When teams reach for faster, smarter tools, they reveal what matters most and where support can meet them.

So what can leaders do?

  • Ask your teams where and how they’re already using generative tools. Listen without judgment.
  • Identify low-risk, high-impact tasks that can be safely accelerated with AI.
  • Build or approve internal tools that align with how people actually work.
  • Create guidance that is clear, current and easy to find.
  • Don’t make AI literacy optional. Make it part of everyday learning.

The sooner leaders step into this conversation, the more influence they have over how it unfolds.

About the Author
David Gordon

David Gordon investigates executive leadership for a global investment firm, freelances in tech and media and writes long-form stories that ask more questions than they answer. He’s always chasing the narrative that undoes the easy version of the truth.

Main image: trialartinf | Adobe Stock