Artificial intelligence has vaulted from a lab experiment to a line-of-business enabler. Netwrix's 2025 global survey of security professionals finds that 60% of organizations already run AI in production and nearly one-third have rebuilt parts of their defenses to counter AI-enabled attacks.
Those threats span every layer of the stack. Deep-fake audio and video make social-engineering calls sound convincing. Neural networks trained on leaked credentials shorten password-cracking cycles from days to minutes. Adaptive malware rewrites its own code on each execution to dodge signature-based defenses. And when attackers cannot breach a company’s running AI model, they target its lifeblood — training data, machine-to-machine (M2M) tokens and proprietary prompts — because exfiltrating a well-tuned model can be as damaging as stealing source code.
This article explores how AI is already expanding the list of security must-haves for organizations, how to ensure the secure use of AI in IT systems and which areas of the security posture should be strengthened to defend against AI-based threats.
The Role of AI in Identity-Based Threats
Credentials are now one of the easiest ways into most enterprises. Verizon’s 2025 Data Breach Investigations Report shows that credential abuse is present in 22% of all breaches and powers 88% of basic web application attacks; brute-force attempts have more than tripled year-over-year.
Netwrix’s survey paints the same picture: Security incidents are increasingly identity-driven, with attackers bypassing multifactor authentication (MFA), hijacking service accounts and forging synthetic machine identities at scale.
Rapid adoption of AI within organizations is exacerbating this problem. Modern AI stacks depend on vast constellations of robotic process automation (RPA) bots, API keys and model-to-model calls. As a result, non-human identities already outnumber employees in many organizations, expanding the attack surface.
Related Article: Enterprise Security 2.0: How AI Is Changing the Game
How AI Threats Are Changing Regulations and Requirements
Regulators have responded quickly to the security challenges inherent in AI adoption by organizations. The European Union’s AI Act has already banned unacceptable-risk systems and is phasing in transparency and risk-management duties through 2026. Meanwhile, the US NIST AI Risk Management Framework has become the de facto playbook for cataloguing model assets, assessing impact and demonstrating control maturity.
Similarly, risk owners are revising their security standards in response to the growing use of AI. For example, boards, cyber insurers and audit committees increasingly demand proof that AI workloads follow governance regimes at least as stringent as those for payment data or personally identifiable information (PII).
4 Design Mandates for AI Use
Against this backdrop, forward-leaning security programs are converging on four complementary design ideas:
- Zero Trust for AI treats every interaction, human or machine, as untrusted. Service tokens are short-lived, context-aware and scoped to one model or dataset; GPU clusters sit behind micro-segmented controls; and real-time behavioral analytics watch for impossible travel and anomalous requests.
- Lifecycle security extends beyond production to the entire machine learning operations (MLOps) pipeline. Training data is hashed and signed; models are stored in tamper-evident registries; supply-chain scanning covers container images and Python wheels; watermarking or fingerprinting enables rapid take-down if a stolen model shows up in the wild.
- Secure LLM application design embeds guardrails at both ends of the conversation: Inputs pass through prompt-validation filters that strip system-level instructions or malicious tags, while outputs are checked for policy violations and sensitive-data leakage (a minimal code sketch follows this list). Retrieval-augmented generation (RAG) workflows keep proprietary context outside the model, reducing blast radius if the model is compromised.
- Continuous assurance closes the feedback loop. Model telemetry (such as prompts, parameters, token counts and confidence scores) flows into security information and event management (SIEM) tools alongside traditional logs. Detections are mapped to the MITRE ATLAS adversary matrix to track emerging threats. Purple-team exercises regularly simulate prompt injection, model stealing and data poisoning attacks. The results of these exercises inform rule tuning and drive regular updates to controls and detection logic.
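To ground the LLM guardrail idea, here is a minimal Python sketch of input validation and output redaction wrapped around an arbitrary model call. The regex patterns, the redaction labels and the `call_model` callable are illustrative assumptions rather than a production ruleset; a real deployment would pair filters like these with a maintained detection library or a dedicated classifier.

```python
import re

# Phrases that commonly signal prompt-injection attempts (illustrative only).
INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"disregard the system prompt",
    r"you are now (dan|developer mode)",
]

# Simple detectors for sensitive-looking content in model output (illustrative only).
OUTPUT_PATTERNS = {
    "payment_card": r"\b(?:\d[ -]?){13,16}\b",
    "api_key": r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b",
}


def validate_prompt(prompt: str) -> str:
    """Reject prompts that look like injection attempts before they reach the model."""
    lowered = prompt.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("Prompt rejected by input guardrail")
    return prompt


def scan_output(text: str) -> str:
    """Redact sensitive-looking content before it leaves the application."""
    for label, pattern in OUTPUT_PATTERNS.items():
        text = re.sub(pattern, f"[REDACTED:{label}]", text)
    return text


def guarded_completion(prompt: str, call_model) -> str:
    """Wrap any model call (a callable supplied by the application) with guardrails."""
    raw_response = call_model(validate_prompt(prompt))
    return scan_output(raw_response)
```

Because the wrapper is independent of any particular model API, the same guardrails can sit in front of hosted and self-hosted models alike.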
Taken together, these best practices align identity protection with rigorous governance and a security architecture that can evolve as fast as both AI innovation and attacker tradecraft.
From Strategy to Execution: A Checklist of Security Priorities
Before rolling out new controls, it helps to know where the greatest gaps lie. Organizations can use the following practices as a structured improvement roadmap to address identity, data and application risk without stalling innovation:
- Inventory and classify AI assets. Build and maintain a living catalogue of every model, training dataset, pipeline component and third-party dependency. Tag each item with business criticality, regulatory exposure and owner. This detailed inventory facilitates a wide range of processes, from threat modeling to patch tracking to incident scoping.
- Protect training data like crown-jewel IP. Encrypt datasets at rest and in transit, enforce tight role-based access control (RBAC) and capture forensic-grade logs of every read, write or export. Where possible, isolate sensitive data in dedicated training environments, and scrub or synthesize personal information before it ever reaches a model (see the dataset-manifest sketch after this list).
- Micro-segment infrastructure and enforce least privilege. Place GPU clusters, model endpoints and data stores behind granular access controls. Segment AI services from general compute infrastructure and implement Zero Trust policies that require continuous verification for each interaction, including machine-to-machine traffic.
- Embed guardrails in LLM applications. Validate and sanitize user prompts, strip or escape potentially dangerous instructions and rate-limit queries by user, IP and tenant. On the output side, scan responses for sensitive data leakage or unauthorized content, and refuse or redact any result that violates policy.
- Implement AI-aware monitoring. Extend SIEM and extended detection and response (XDR) pipelines to ingest model-specific telemetry, including prompts, parameters, responses, latency and confidence scores. Use this data to baseline normal usage patterns. Feed anomalies into existing alert queues so analysts see AI events in context rather than chasing a separate dashboard.
- Run AI-focused red/blue-team exercises. Task red teams with prompt-injection jailbreaks, model-stealing attacks and data-poisoning campaigns; have blue teams detect and respond. Capture findings in structured post-mortems, and convert them into playbooks, detection rules and architectural fixes.
- Continuously test and update AI security controls. Treat AI defenses as living systems. Regularly re-tune behavioral baselines, rotate credentials and revalidate guardrails in light of new threat intelligence or model updates.
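As one concrete illustration of the training-data guidance above, the sketch below hashes every file in a dataset directory and signs the resulting manifest with an HMAC key. The directory layout, key handling and JSON manifest format are assumptions made for brevity; a production pipeline would typically store the manifest in a tamper-evident registry and prefer asymmetric signing (for example, via a tool such as Sigstore) over a shared secret.

```python
import hashlib
import hmac
import json
from pathlib import Path


def hash_file(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(dataset_dir: Path, signing_key: bytes) -> dict:
    """Record a hash for every file in the dataset and sign the manifest,
    making later tampering with training data detectable."""
    entries = {
        str(path.relative_to(dataset_dir)): hash_file(path)
        for path in sorted(dataset_dir.rglob("*"))
        if path.is_file()
    }
    payload = json.dumps(entries, sort_keys=True).encode()
    signature = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"files": entries, "signature": signature}


def verify_manifest(dataset_dir: Path, manifest: dict, signing_key: bytes) -> bool:
    """Re-hash the dataset and confirm it still matches the signed manifest."""
    current = build_manifest(dataset_dir, signing_key)
    return hmac.compare_digest(current["signature"], manifest["signature"])
```

Verification can run as a gate in the MLOps pipeline, so a training job simply refuses to start if the dataset no longer matches its signed manifest.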
Related Article: The Enterprise Playbook for LLM Red Teaming
Measuring Progress and Sustaining Momentum
Classic indicators such as mean time to detect and respond still matter, but they should be enriched with AI-specific metrics. Examples include the percentage of models with completed threat models, the quarterly closure rate of AI red-team findings and the proportion of SOC alerts that provide explainable context. Tracking these numbers reveals whether new controls are translating into real risk reduction.
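One lightweight way to keep such metrics honest is to compute them directly from the asset inventory and red-team findings rather than from manually maintained spreadsheets. The record shapes below are hypothetical and would map onto whatever catalogue and ticketing systems an organization already uses.

```python
from dataclasses import dataclass


@dataclass
class ModelRecord:
    name: str
    threat_model_done: bool  # a completed threat model is on file


@dataclass
class RedTeamFinding:
    opened_quarter: str  # e.g. "2025-Q3"
    closed: bool


def threat_model_coverage(models: list[ModelRecord]) -> float:
    """Share of catalogued models with a completed threat model."""
    return sum(m.threat_model_done for m in models) / len(models) if models else 0.0


def finding_closure_rate(findings: list[RedTeamFinding], quarter: str) -> float:
    """Share of AI red-team findings opened in a given quarter that are now closed."""
    in_scope = [f for f in findings if f.opened_quarter == quarter]
    return sum(f.closed for f in in_scope) / len(in_scope) if in_scope else 0.0
```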
By weaving AI-specific safeguards into identity, data and application security layers, organizations can begin to move beyond a purely reactive posture. Treating governance as a first-class requirement helps turn today’s defensive scramble into a durable advantage, enabling teams to harness the promise of AI rather than surrendering its benefits to adversaries.