Feature

AI Cyber Threats Are Escalating. Most Companies Are Still Unprepared

4 minute read
By Christina X. Wood
AI is fueling smarter, faster cyberattacks, and most security teams are outmatched. Here's why they’re behind and what it’ll take to catch up.

Research from the Global Cybersecurity Alliance shows that an overwhelming majority of cybersecurity teams are not prepared to defend their organizations' systems against AI-powered cyber threats. This is a red-alert situation: 74% of IT security leaders believe their organization is already under attack from AI-powered cyber threats, and an even larger share (89%) expect this type of attack to be a serious concern by 2026.

Keeping up with cybersecurity is a challenge. But what happened here? What is it about AI-powered cyber threats that is catching so many companies off guard? What is keeping them from getting ready? What will happen if they fail to get it together? And how can they get security in place fast?

The Blind Spot in Traditional Cyber Defenses

Historically, cybersecurity threats fell into two main categories, according to Kris Jackson, director of security engineering and operations at BOK Financial:

  1. Broad, automated attacks — often nicknamed “spray and pray” — that exploited common vulnerabilities indiscriminately
  2. Highly targeted attacks typically launched by advanced persistent threats (APTs) or nation-states

“Generative AI introduces a new class of threat that blends automation with adaptability,” he said. “Unlike traditional automated attacks, AI-powered threats can learn from defenses, exploit novel vulnerabilities and mimic legitimate behavior, all at a fraction of the cost and effort previously required.”

Defending against these traditional attacks is the modus operandi of most security teams, which have systems and playbooks in place to recognize them when they happen. Attacks waged with AI assistance, however, are far harder to spot with those existing methods.

“For instance,” said Jackson, “AI can craft personalized phishing emails or dynamically adapt malware to bypass signature-based detection, enabling less skilled attackers to launch sophisticated, scalable campaigns. This combination of accessibility and evolution sets AI-driven threats apart from conventional cybersecurity concerns.”

Related Article: AI Risks Grow as Companies Prioritize Speed Over Safety

AI Makes Cybercrime Easier, Faster, Smarter

One reason these attacks are so dangerous and increasing so quickly is that AI empowers bad actors who lack the hacking skills once required to commit sophisticated cybercrime.

“One of the greatest threats GenAI poses is its ability to empower non-technical threat actors at scale,” said Keith Palumbo, CEO and co-founder of Auguria. “Cybercriminals lacking technical skills can now extend their capabilities for high-level scams and attacks. They’ve already begun to weaponize AI to create more sophisticated cyber threats beyond their abilities, including advanced phishing campaigns, AI-powered malware and increasingly complex ransomware attacks.”

Even when AI-powered attacks are created by people with lesser skills, they are sophisticated enough to evade traditional detection methods.

“The ability of artificial intelligence to rapidly ingest, process and utilize data gives the cyber threats it can produce a high degree of flexibility,” explained Seth Geftic, VP of product marketing at Huntress. “A traditional malware threat may have a fixed signature, while an AI-produced threat may be able to shift and morph through different patterns, making detection much more difficult.”
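To make the contrast concrete: a fixed malware signature often boils down to matching a known hash or byte pattern. The following toy sketch (illustrative only, not any vendor's detection engine; the sample payload and "signature database" are hypothetical) shows why even a trivial one-byte change to a payload defeats exact-match detection, which is the weakness morphing AI-generated threats exploit:

```python
import hashlib

# Hypothetical "signature database": SHA-256 hashes of known-bad payloads.
KNOWN_BAD_SIGNATURES = {
    hashlib.sha256(b"malicious-payload-v1").hexdigest(),
}

def is_flagged(payload: bytes) -> bool:
    """Signature-based check: flags only exact matches against known hashes."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SIGNATURES

print(is_flagged(b"malicious-payload-v1"))   # the original sample is caught
print(is_flagged(b"malicious-payload-v1!"))  # a one-byte variant slips through
```

A threat that continually rewrites itself never produces the same hash twice, so each variant looks brand new to this kind of check.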

This situation is likely to get worse as more criminals unlock the power of AI.

“For the foreseeable future, it is fair to assume that criminals will continue leveraging AI, finding more ways of accessing and stealing business information and organizations will inevitably seek new tools to prevent these attacks,” predicted Palumbo.

The Real Reasons Security Teams Are Struggling

Organizations aren’t prepared for this type of AI disruption because scale is extremely difficult to account for, explained Geftic. “It’s not that organizations don’t have the technology to protect themselves, but rather that this new AI-driven scale creates a level of pressure on defenses that hasn’t previously been seen.”

Many organizations have been caught unawares. Or, rather, they have not had the time or budget they need to reskill and regroup for this new kind of attack. The situation is not helped by the persistent and significant shortage of professionals who have cybersecurity skills, especially those needed to defend systems against AI-powered threats. “Developing and implementing AI-powered security solutions can be expensive, so many organizations may be dealing with limited resources to fully prepare for these kinds of cyberthreats,” said Akash Mahajan, founder and CEO at Kloudle.

Defending against this threat also requires a new way of thinking about security. Organizations must shift from traditional, signature-based defenses to behavioral-based approaches, said Jackson.

“Signature-based tools rely on recognizing known attack patterns, but AI-driven threats evolve too rapidly for these methods to remain effective," he explained. "Behavioral defenses, powered by machine learning and AI, detect anomalies in user or system activity — such as unusual login patterns — but require new technologies, expertise and processes that many organizations lack. The rapid advancement of AI further outpaces the ability of most security teams to update infrastructure or train staff, leaving gaps that attackers can exploit.”
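As a toy illustration of the behavioral approach Jackson describes (a minimal statistical sketch, not a production detector; the login history shown is hypothetical), a defender can baseline a user's normal login hours and flag a new login that deviates sharply from that baseline:

```python
from statistics import mean, stdev

def is_anomalous(login_hours: list[int], new_hour: int, threshold: float = 3.0) -> bool:
    """Flag a login whose hour-of-day sits far outside the user's historical norm.

    Uses a simple z-score: how many standard deviations the new login hour
    is from the user's average login hour.
    """
    mu, sigma = mean(login_hours), stdev(login_hours)
    if sigma == 0:
        return new_hour != mu
    return abs(new_hour - mu) / sigma > threshold

history = [9, 9, 10, 8, 9, 10, 9, 8]  # hypothetical user: usually logs in around 9 a.m.
print(is_anomalous(history, 9))   # a typical morning login
print(is_anomalous(history, 3))   # a 3 a.m. login stands out
```

Real behavioral systems model many signals at once (location, device, access patterns) and handle wrinkles this sketch ignores, such as hour-of-day wrapping past midnight, but the core idea is the same: detect deviation from learned behavior rather than match a known attack pattern.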

The High Cost of Unchecked AI Threats

The technical consequences of failing to protect against this increased threat are significant.

“Failure to prepare leaves organizations vulnerable to more frequent and effective attacks, resulting in severe consequences like data breaches, service disruptions and financial losses,” said Jackson. “The reduced cost and complexity of AI-powered attacks lower the barrier to entry for cybercriminals, driving a surge in both the number of attackers and the potency of their methods."

For example, he added, previously overlooked vulnerabilities, like minor misconfigurations, can now be exploited at scale. In a financial services context, this could mean stolen customer data, regulatory penalties, operational downtime and eroded trust, amplifying both immediate and long-term damage.

History shows us that losing customer data costs companies more than money. Customers lose faith in the business and become more cautious about where to shop, bank and store information. Once lost, trust is very difficult to regain.

Related Article: Enterprise Security 2.0: How AI Is Changing the Game

Building Smarter Cybersecurity With AI

Just as AI has empowered criminals, it can empower cybersecurity teams attempting to protect their companies.

“When organizations take a considered approach to AI, defenders can be supercharged,” said Palumbo. “We have already seen how applications of AI can reduce the time required for security teams to contain a breach. Defenders must look for every way to reduce the attacker's early-mover advantage, including leveraging AI agents, AI-powered detection and remediation and other cutting-edge defensive and security centric technologies.”

AI-driven tools can respond to these AI-driven threats with behavioral analytics and automated threat hunting that detect and respond to anomalies in real time. But people will have to skill up to manage this AI threat protection, which means companies will need to commit to training their security teams, as well as others in the organization. “Educate staff to recognize AI-generated threats, such as deepfake-based social engineering or sophisticated phishing attempts,” said Jackson.


“In the future,” suggested Palumbo, “a Security Operations Center will be a hybrid of human analysts and AI, teamed together to face a similar attacker pairing.”

About the Author
Christina X. Wood

Christina X. Wood is a working writer and novelist. She has been covering technology since before Bill met Melinda and you met Google. Wood wrote the Family Tech column in Family Circle magazine, the Deal Seeker column at Yahoo! Tech, Implications for PC Magazine and Consumer Watch for PC World. She writes about technology, education, parenting and many other topics. She holds a B.A. in English from the University of California, Berkeley.

Main image: JLJ/peopleimages.com on Adobe Stock