
What Enterprise AI Experts Say Comes Before the Breakthrough

By Michelle Hawley
Enterprise AI has a hype problem. Experts say the real work starts before the model.

Key Takeaways

  • Cleaning messy records can unlock faster, more reliable analysis.
  • Inconsistent definitions, formats and systems make AI outputs harder to trust.
  • Autonomy only works when teams can monitor, test and explain what AI does.
  • Without governance, leadership buy-in and clear use cases, experiments stay stuck.

I’m still hearing all the buzz around technology that promises to compress years of work into mere minutes, or organizations that can mimic hundreds of workers with just one human at the helm.

The reality is a lot less cinematic.

It’s cleaning data across millions of patient records. Standardizing restaurant transaction data before anyone asks a model for help. Organizing donor databases so AI can identify future high-value supporters. Building software layers that let data scientists experiment with quantum before the hardware is fully mature.

In other words: before AI can transform the enterprise, the enterprise has to become usable by AI.

That was the common thread across conversations with analytics and AI leaders from Gilead Sciences, SAS, Boddie-Noell and The Nature Conservancy. Their industries couldn't be more different, but their messages were strikingly the same.


The Best AI Use Cases Are Not Always the Flashiest

At Gilead Sciences, Dr. Alex Asiimwe, executive director of epidemiology, is not using AI to replace scientific judgment or generate regulatory evidence out of thin air. His team is applying it to one of the most time-consuming parts of real-world evidence generation: cleaning messy medical data.

Gilead Sciences headquarters
Sundry Photography | Adobe Stock

“When we are doing analysis for observational studies using these electronic health records and claims, most of the time is spent cleaning the data, really,” Asiimwe said.

That may not sound like the AI glam executives were sold. But in pharma, where studies rely on massive volumes of claims, electronic health records and lab results, shaving time off data preparation can have enormous downstream value.

Lab data is a good example. Even when patient records are digitized, results may appear in inconsistent formats or units. For humans, reconciling that across hundreds of millions of records is slow and painful. For a well-trained AI system, it can become a scalable assistive function.

The key word being assistive. Asiimwe was clear that Gilead is not using AI for every possible analytics function, especially where regulators may not yet be ready to accept AI-generated analysis.

“I am a believer that we should not use AI for the sake of using AI,” he said.

That principle came up again and again: AI works best when aimed at a business problem that already matters.

The Common Thread: Solve the Existing Problem

At Boddie-Noell, the largest Hardee’s franchisee, analytics helped settle a practical operating question: Should restaurants extend breakfast hours? Traditional reporting buried the signal. More granular analytics showed that from 10:30 a.m. to noon, breakfast sales were growing at a 30% to 40% daily rate, without requiring additional payroll.

That insight helped push extended breakfast across Hardee’s.

Hardees in Pulaski
Idawriter | Wikimedia Commons

David Gardner, senior director of analytics at Boddie-Noell, said the lesson was not that data replaces operators. It’s that data can catch what old reports and gut feel miss.

“I got to figure out how we can marry your gut with real world data,” he said.

Related Article: The Architecture of the Agentic Enterprise: Semantics, Governance & Safe Autonomy

AI Can't Fix Your Messy Data Problem

AI does not magically fix bad data.

Gardner learned that years before the current AI boom. When he joined Boddie-Noell nearly a decade ago, he found inconsistent definitions across data sets. The same measure could appear under different names. Different systems did not line up. Trust suffered.

His team spent nine months cleaning and standardizing data, and that foundation still pays off today.

“If you don't, it's the rule: garbage in, garbage out,” Gardner said.

That discipline now shapes how the company brings in third-party data, customer reviews, DoorDash ratings, Google ratings and operational signals. If a data set does not meet the company’s core rules, it does not go in.
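As a rough illustration of that kind of gate (the field names and rules below are invented examples, not Boddie-Noell's actual standards), ingestion can be made conditional on every record passing a set of core checks:

```python
# Illustrative data-quality gate: a third-party dataset is only ingested
# if every record passes the core rules. Fields and rules are hypothetical.

REQUIRED_FIELDS = {"store_id", "date", "rating"}

def violations(record: dict) -> list[str]:
    """Return the rule violations for one record (empty list = clean)."""
    problems = []
    missing = REQUIRED_FIELDS - set(record)
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    rating = record.get("rating")
    if rating is not None and not (1 <= rating <= 5):
        problems.append(f"rating out of range: {rating}")
    return problems

def gate(dataset: list[dict]) -> bool:
    """Accept the dataset only if no record violates a core rule."""
    return all(not violations(r) for r in dataset)

clean = [{"store_id": 1, "date": "2024-01-02", "rating": 4}]
dirty = clean + [{"store_id": 2, "rating": 9}]  # missing date, bad rating
```

The design choice matters more than the code: the gate rejects data before it enters the warehouse, so downstream models and reports never have to compensate for it.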

Good AI Depends on the Environment Around It

John Blackwell, director of strategic analytics at The Nature Conservancy, made a similar point.

His team uses AI for code generation, model building, synthetic data exploration and, increasingly, content creation for donor appeals. But those use cases work because the organization spent years improving its data foundation.

“AI is still not great at just giving it sort of like a mess and being like, deal with this,” Blackwell said. “But when you have things in a clearly structured database, it just makes the end result much better.”


That especially matters at The Nature Conservancy, where the organization is not using AI in the abstract. It’s using it to help fundraising teams identify donor opportunities, improve campaign targeting and model rare but valuable events — like finding donors who may start with modest gifts but eventually have the capacity to contribute at a much higher level.

The Nature Conservancy, The Robert W. Wilson Building, November 10, 2021, Arlington, VA
engineerchange | Wikimedia Commons

Synthetic data could help strengthen models around those rare events, Blackwell said. But even then, the model depends on the quality of the underlying environment.

For enterprise AI leaders, that should give pause. Many organizations try to move straight to generative AI, agentic workflows or predictive automation before they have resolved basic data architecture issues.

Agentic AI Is Coming, But Trust Is Not Automatic

Agentic AI came up in all four interviews, and no one dismissed it. But no one treated autonomy as a free pass either.

In pharma, Asiimwe sees room for agentic AI, but only in specific areas and with clear human oversight.

“When you're dealing with patient information, it is a bit tricky,” he said. “You still need a person to really be there all the time seeing what's happening.”

At The Nature Conservancy, Blackwell sees agentic AI as particularly interesting for data preparation. An agent could continuously test transformations, iterate through different modeling approaches and help prepare data for stronger models. But donor privacy creates hard boundaries.

“We have to be extremely careful with that,” Blackwell said.

That caution is not just about compliance. It is about public trust. Blackwell said the organization has to assume someone may eventually write about how it uses donor data, and that donors should not feel their trust has been diminished.

Gardner’s view of agentic AI was similarly pragmatic. He believes autonomous analytics workflows will arrive, but validation cannot be a one-time exercise.

“You have to validate over and over again,” Gardner said. “We can't just trust it.”
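"Over and over" can be made operational as a recurring check against known-good reference cases, run on every cycle rather than once at sign-off. The sketch below is generic and hypothetical: the stand-in model, the cases and the tolerance are all invented for illustration.

```python
# Illustrative recurring-validation harness: re-run reference cases
# against a component on every cycle instead of trusting a one-time
# sign-off. The "model" here is a stub standing in for an AI workflow.

REFERENCE_CASES = [
    # (input features, known-good expected output)
    ({"units_sold": 100, "price": 2.5}, 250.0),
    ({"units_sold": 40, "price": 3.0}, 120.0),
]

def model(features: dict) -> float:
    """Stand-in for the AI/analytics component being monitored."""
    return features["units_sold"] * features["price"]

def validate(tolerance: float = 1e-6) -> list[str]:
    """Return failure messages; an empty list means this run passed."""
    failures = []
    for features, expected in REFERENCE_CASES:
        got = model(features)
        if abs(got - expected) > tolerance:
            failures.append(f"{features}: expected {expected}, got {got}")
    return failures

# Run on a schedule or on every deployment, not once at launch.
assert validate() == [], "model drifted from reference behavior"
```

The harness itself is trivial; the discipline is wiring it into every deployment and schedule tick so drift is caught automatically rather than discovered by an operator.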

The Skills Gap Is Really a Translation Gap

The future of enterprise AI will not belong only to data scientists, physicists or prompt engineers. It will belong to people who can translate between technical systems and business realities.

Amy Stout, head of quantum product strategy at SAS, sees this clearly in quantum computing. Quantum hardware is not yet mature enough for large-scale business problems, but organizations that wait for full maturity may find themselves years behind.

“If organizations wait until that's here, then they're already behind,” she said.

The challenge is that quantum remains intimidating. Historically, the field has been dominated by physicists. But if quantum computing is going to matter for enterprise workflows, it can’t stay trapped inside physics departments.

IBM Quantum scientist Dr. Maika Takita in lab
IBM

According to Stout, SAS is trying to build a bridge between the quantum and classical worlds so more data scientists and programmers can explore quantum without needing deep physics expertise.

“You don't need to be a physicist to use quantum computers,” she said, “because if that's how the world remains, then there will be way too limited use cases that we actually see come to fruition.”

That same translation problem exists in AI. Asiimwe said he prefers retraining internal employees because they already understand how the company works. Gardner said future analytics talent will need to understand prompting, because careless AI use wastes money and resources. Blackwell’s work shows how AI can help analytics teams move faster, but only when they understand the underlying data, business process and ethical context.

The winners will be those who build AI fluency into the people who already know the business.

Related Article: 10 Top AI Certifications for Pros Without Technical Backgrounds

The Pilot Era Needs an Exit Strategy

Enterprise AI is still stuck in pilot purgatory. Asiimwe put it bluntly: “Everyone is doing pilots.”

Pilots are not bad. In regulated industries, they’re necessary. But pilots without governance, leadership alignment or a path to production become theater.

Asiimwe said leadership caution, contracting delays, legal questions, skills gaps and strict governance can all slow AI pilots before they start. He also argued that leaders need more opportunities to compare notes across organizations, especially in pharma, where companies face similar challenges but often work in silos.

That may be one of the biggest disconnects in enterprise AI today. Workers are experimenting. Vendors are building. Technical teams are testing. But leadership often lacks the hands-on understanding needed to set direction, define guardrails and give teams the cover to move.

The Exit From Pilot Purgatory Is Operational

Meanwhile, the most successful examples are not moonshots. They are focused, explainable and operationally useful.

AI cleans lab data. AI helps restaurant managers start the day with clearer priorities. AI accelerates fundraising models. Quantum tools prepare data scientists for a future that is not fully here yet.

None of that sounds like science fiction. That’s why it matters.

Enterprise AI is struggling because the organizational work is bigger than many leaders expected. And the next breakthrough likely won’t come from a model announcement or a fully autonomous agent. It will come from the company that finally cleans its data, validates its outputs, trains its people and knows exactly where AI belongs.

Frequently Asked Questions

Why do so many AI pilots fail?
An MIT study found that only 5% of generative AI pilots deliver measurable impact on profit and loss. Pilots often fail because they are treated as experiments rather than future operating models. Common blockers include poor data quality, unclear ownership, legal or compliance concerns, weak executive alignment and no defined path from testing to production.

Where should a company start with AI?
The best starting point is usually a repetitive, high-friction business process where better speed or accuracy would create measurable value. Good early use cases often involve data preparation, customer support triage, document review, forecasting, reporting or internal workflow automation.

Why does data quality matter so much for AI?
AI systems depend on the information they are given. If company data is inconsistent, duplicated, incomplete or poorly labeled, AI outputs can become unreliable. Clean data helps models generate more accurate analysis, stronger recommendations and more trustworthy automation.

What AI skills do employees need?
Employees do not all need to become data scientists. But they do need enough AI fluency to understand where AI fits, how to evaluate outputs, how to write effective prompts and when to escalate risks. The most valuable people may be those who can connect technical tools to business context. For organizations looking for outside learning resources, plenty of AI certifications and courses are available online.

How can organizations build trust in AI outputs?
Trust comes from testing, transparency and repeatable governance. Companies should define how AI outputs are reviewed, who is accountable for decisions, what data the system can access and how performance will be monitored over time.

About the Author
Michelle Hawley

Michelle Hawley is an experienced journalist who specializes in reporting on the impact of technology on society. As editorial director at Simpler Media Group, she oversees the day-to-day operations of VKTR, covering the world of enterprise AI and managing a network of contributing writers. She's also the host of CMSWire's CMO Circle and co-host of CMSWire's CX Decoded. With an MFA in creative writing and background in both news and marketing, she offers unique insights on the topics of tech disruption, corporate responsibility, changing AI legislation and more. She currently resides in Pennsylvania with her husband and two dogs.

Main image: Simpler Media Group