Historically, there has been a wall between data professionals and security teams — a divide that has long amplified business risk. Data leaders focused on data quality, accessibility and analytics, while security leaders prioritized defense, compliance and threat mitigation. Several leading data authors have argued that blending data expertise with deep domain knowledge is critical to business success. The same logic applies here: in an AI-driven era, the integration of data and security functions is not just beneficial, it is essential.
To be fair, ISO 2700X and NIST 800-53 have long advocated for a more data-centric approach to security, yet cultural and operational silos persist. This is why, when Sharon Chand, Deloitte’s US cyber defense & resilience leader, suggested we explore the intersection of shadow AI and cybersecurity, I was eager to get her take on data in the age of AI.
Her insights go beyond compliance and technology — they offer guidance that every data and AI leader should consider when navigating today’s evolving data landscape.
Table of Contents
- GenAI vs. Agentic AI: Understanding the Distinct Cybersecurity Risks
- Best Practices for Reducing GenAI and Agentic AI Security Risks
- Shadow AI Risks: Hidden Tools Fueling Cybersecurity Blind Spots
- AI Governance Playbook: What CDOs, CISOs and CIOs Must Do Now
- The Future of AI Risk Management and Cyber Resilience
GenAI vs. Agentic AI: Understanding the Distinct Cybersecurity Risks
"Agentic AI elevates the risk level beyond that which GenAI introduced."
- Sharon Chand
US Cyber Defense & Resilience Leader, Deloitte
While GenAI and agentic AI share a similar foundation, their application — and the risks they introduce — are materially different.
Generative AI Risks
GenAI primarily produces content, insights and recommendations, making it invaluable for business intelligence, but also prone to risk the release of sensitive information. For CDOs managing GenAI in business intelligence, the priority is safeguarding data quality, protecting sensitive assets and ensuring responsible outputs.
Agentic AI Risks
Agentic AI, by contrast, can act autonomously, execute tasks and trigger system changes without direct human intervention. This leap in capability introduces higher-stakes risks. For CIOs overseeing agentic AI initiatives, these risks demand rigorous governance, real-time monitoring and strong guardrails.
Identifying AI Security Risks
In both cases, understanding the distinct risk profiles is essential — because misjudging them can turn AI from a competitive advantage into a liability.
Chand agreed, "Agentic AI elevates the risk level beyond that which GenAI introduced. GenAI can expose organizations to security risks like data exfiltration, prompt injection attacks and the unintentional disclosure of sensitive information. "Since agentic AI has autonomous capabilities, it can introduce additional threats such as automated exploitation of vulnerabilities or propagation of malicious code without human intervention. The main difference is that agentic AI’s autonomy can accelerate the impact and scale of cyber risks compared to GenAI.”
Related Article: AI Risks Grow as Companies Prioritize Speed Over Safety
Best Practices for Reducing GenAI and Agentic AI Security Risks
Managing the risks posed by both GenAI and agentic AI requires more than good intentions — it demands a structured, proactive approach.
Chand emphasized that the fundamentals matter: strong access controls, continuous monitoring and AI-specific threat detection. These measures must be tuned to the unique attack surfaces of AI environments, which differ significantly from traditional IT systems. Importantly, Chand argued that cyber defenses themselves must be AI-enabled — because when it comes to detecting and responding to AI-driven threats, humans simply cannot keep pace.
For this reason, according to Chand, governance and data protection play equally critical roles. By embedding robust governance frameworks, organizations can:
- Establish clear accountability for AI oversight
- Define acceptable use
- Ensure compliance with evolving regulations
Data protection safeguards, from encryption to rigorous access management, help prevent sensitive information from becoming a vulnerability. Equally important, said Chand, is having a well-defined AI roadmap — one that anticipates risk scenarios, outlines incident response protocols and integrates security considerations into every stage of the AI lifecycle.
While training employees to recognize and mitigate AI risks is necessary, it is not sufficient on its own. Guardrails, technical controls and automated oversight systems, Chand claimed, provide the enforcement power that training alone cannot. The most effective approach is layered: combining education with governance, technical defenses and AI-enabled security operations. In this way, leaders can not only reduce their exposure but also turn AI risk management into a competitive differentiator.
Shadow AI Risks: Hidden Tools Fueling Cybersecurity Blind Spots
The rise of shadow AI — employee use of AI tools (like ChatGPT) that the organization doesn't know about — is fueled by both organizational gaps and market forces.
“The rise of shadow AI is driven by a combination of factors, including the speed of sanctioned available AI services within the company, and restrictive policies not in tune with business need,” said Chand. When employees lack clear guidance or feel constrained by outdated or overly strict rules, “they may turn to unvetted solutions to boost productivity.” In other words, shadow AI often emerges not from malice, but from a desire to get work done faster and more effectively.
Compounding the challenge, Chand noted that “as technology vendors increasingly AI enable their SaaS platforms or applications, organizations are often unknowingly adopting AI they are unprepared to manage or control.” Addressing this trend starts with visibility — identifying where shadow AI is already in use — followed by a balanced strategy that both empowers teams with sanctioned AI tools and enforces clear, practical policies.
The goal, Chand argued, is not to stifle innovation but to ensure AI adoption happens in a secure, compliant and business-aligned way.
AI Governance Playbook: What CDOs, CISOs and CIOs Must Do Now
Given the scale and complexity of AI-related risks, CDOs, CISOs and CIOs must act in concert rather than in silos. This means jointly mapping AI use cases across the enterprise, embedding security and data governance into every stage of the AI lifecycle and implementing AI-enabled monitoring to detect and respond to threats.
Equally important is aligning on clear, enforceable policies for AI use — policies that balance innovation with risk mitigation. By collaborating closely, these leaders can ensure that AI adoption is not only powerful and transformative but also secure, compliant and trusted.
“Chief Data Officers (CDOs), Chief Information Security Officers (CISOs) and Chief Information Officers (CIOs) must work together to develop AI-specific security protocols that reduce risk," Chand explained, adding that:
- CDOs are responsible for maintaining data quality and ensuring ethical data use.
- CISOs should deploy security controls, monitor for threats and establish cyber resilience plans tailored to AI environments.
- CIOs play a crucial role with integrating AI governance into the overall IT strategy and maintaining compliance.
"By proactively collaborating," said Chand, "these leaders can help protect the enterprise from AI-related threats and strengthen overall cyber resilience.”
Adding on, Brian Lett, research director at Dresner Advisory Services, said, “Reducing AI-related risk absolutely requires collaboration by a CISO, CDO and CIO. But real problems can arise if executive leadership expects this to happen naturally, smoothly and without conflict. Especially with regard to data, responsibilities that previously weren’t shared easily can overlap and become a source of contention or an office-politics play. That requires both a clear charge to a CISO, CDO and CIO from the CEO, as well as well-defined 'swim lanes' regarding areas of oversight and responsibility.
"However, despite that importance, our latest data show that executive management roles least often express concern about this kind of AI-related organizational policy as potentially hindering their enterprises. That disconnect means many executives may need to change their mindsets and quickly make this a higher priority, or run the risk of ending up with dysfunctional AI policies that put their organizations at a competitive disadvantage.”
Related Article: AI Agents: How CIOs Can Navigate Risks and Seize Opportunities
The Expanding Role of CISOs in Protecting AI and Enterprise Data
According to Chand, "Data is now a primary attack vector, especially as AI systems increasingly rely on large, sensitive datasets.”
This shift makes it imperative for CISOs to treat data not as a back-office concern, but as a frontline security priority. Protecting enterprise data today means more than securing databases — it requires safeguarding the entire AI ecosystem, from training data to deployed models, against threats that could compromise business operations or decision-making.
CISOs "must implement strong data security measures and monitor for anomalies,” said Chand, ensuring that both the data and the AI models it powers remain protected. This involves defending against data poisoning, model theft and manipulation attacks, while embedding AI security into the organization’s broader cyber defense strategy. In doing so, CISOs can position themselves not just as protectors of systems, but as guardians of the trust, reliability and resilience that modern AI-enabled enterprises demand.
The Future of AI Risk Management and Cyber Resilience
In the era of GenAI and agentic AI, business leaders can no longer afford to view data security, AI governance and cyber resilience as separate disciplines. The stakes are too high, and the pace of change is too fast.
As Chand made clear, AI systems amplify both the value and the vulnerability of enterprise data — turning it into a prime target and a potential source of cascading business risk. Whether it’s the stealthy spread of shadow AI, the distinct risk profiles of generative versus agentic capabilities or the evolving role of CISOs as guardians of AI models as well as data, the message is the same: siloed thinking is a liability.
The path forward demands cross-functional leadership, AI-enabled defenses and a willingness to evolve security strategies in step with AI innovation. CIOs, CDOs and CISOs who embrace this challenge — aligning governance, policy and technology — won’t just reduce risk; they’ll create a foundation of trust that enables safe, scalable and strategic AI adoption. In the end, real competitive advantage won’t just come from how well organizations use AI, but how confidently they can do so without compromising security, compliance or business integrity.
Learn how you can join our contributor community.