DALLAS — Not so deep in the heart of Texas, SAS is celebrating its 50th anniversary at the Gaylord Texan Resort. This year’s Innovate event kicked off with some big questions: Do people still matter? How can we trust AI? How do you balance innovation with human judgement? What comes next?
You know, just some light morning fodder.
Fortunately, SAS leaders had some answers.
Table of Contents
- We Can’t Stop Here
- New Complexities Demand New Capabilities
- The New AI Mandate: We Need Trust
- Agentic AI Moves Trust From the Person to the Process
- The Enterprise AI Stack Is Becoming a Decision Stack
- People Still Matter — Maybe More Than Ever
We Can’t Stop Here
Right now, said Bryan Harris, CTO at SAS, we are in a crisis — one of confidence and human ingenuity. “It’s not a collapse in the belief that AI will matter. In the age of AI, will humans still matter?”
“From the very beginning, human ingenuity has been the engine behind every breakthrough that's changed the world,” said Harris.
He pointed to the printing press, which scaled knowledge. Transportation, which scaled movement. The mobile phone, which scaled connectivity. The internet, which scaled information and commerce. Now, we’ve reached AI, which, according to Harris, will scale human observation and decision-making.
“But we can’t afford to stop here,” said Harris. “Why? Because the biggest problems are still in front of us.” Illness, fraud, misinformation, economic turmoil, climate crises — all problems we won’t solve by having small-minded, misguided debates about AI and the role of humans. “We have to think bigger.”
Related Article: SAS Viya Adds AI Copilots & Agent Tools
New Complexities Demand New Capabilities
One big problem we’re all facing right now? Information overload, said Harris.
“We know that organizations are creating exponentially more data every day with no end in sight. This is the information landscape. However, it can really be overwhelming for the workforce to consume [and] make sense of this information.”
All of this information leads to an overload that fuels confusion, fatigue and opportunity. So what’s the way forward? “The answer has always been, we empower people with technology....”
One area SAS is investing in to close this information gap is digital twins — living replicas of your business. Digital twins allow organizations to test all of their “what if” questions, like exploring scenarios that are rare or haven’t happened yet, or how to optimize for the best outcomes.
SAS is already testing this out in:
- Manufacturing, where they’ve been able to optimize vehicle sizes and routes
- Medical suppliers, where they’ve mapped every room, machine and workflow in a sterilization facility
- Energy and power sectors, where they can prevent power outages and stop wildfires
- Oil and gas rigs, where they can detect unsafe conditions, keep workers safe and prevent costly repairs
As enterprises continue to navigate disruption, said Harris, digital twins empower you to simulate, understand and move first.
The New AI Mandate: We Need Trust
But simulation alone isn’t enough. The bigger issue — and the one SAS kept coming back to — is trust.
AI can generate, recommend, summarize, classify and act. But in enterprise environments, that’s not the same thing as being reliable. A wrong answer in a brainstorming session is annoying. A wrong answer in fraud detection, clinical trials, financial services or health care operations can become expensive and dangerous.
That’s where Harris drew a line between deterministic and non-deterministic AI workflows.
A deterministic system follows predefined rules. Same input, same output. If a loan applicant meets a certain income, debt-to-income and credit score threshold, the loan is approved every time.
A non-deterministic system, like one powered by a large language model, behaves differently. It can interpret context, adapt to new inputs and produce more flexible answers. But that flexibility also means the same input can produce different outputs.
“Because the large language model is probabilistic with no fixed rules or logic, three different runs can produce three different outputs,” said Harris.
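Harris’ loan example can be sketched as a fixed rule. The thresholds below are hypothetical, chosen only for illustration, but they show the property he is describing: the same applicant always yields the same decision, run after run.

```python
# Hypothetical thresholds -- illustrative only, not SAS's actual lending rules.
MIN_INCOME = 50_000
MAX_DTI = 0.36          # maximum debt-to-income ratio
MIN_CREDIT_SCORE = 680

def approve_loan(income: float, dti: float, credit_score: int) -> bool:
    """Deterministic rule: identical inputs always produce the same output."""
    return (income >= MIN_INCOME
            and dti <= MAX_DTI
            and credit_score >= MIN_CREDIT_SCORE)

# Three runs with the same applicant give the same answer every time --
# unlike an LLM-backed decision, which may vary from run to run.
applicant = dict(income=72_000, dti=0.28, credit_score=710)
results = [approve_loan(**applicant) for _ in range(3)]
print(results)  # three identical decisions
```

A probabilistic system offers no such guarantee, which is exactly why high-stakes workflows need the guardrails Harris describes next.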
That’s a problem when AI is helping prioritize anti-money laundering alerts or making recommendations inside high-stakes workflows.
“If the team is working on incorrectly prioritized alerts, the bank is exposed to increased risk, and to overcome this, you must add guardrails to ensure accuracy and prevent compound error at scale.”
In other words: agentic AI may be powerful, but enterprise AI can’t simply be autonomous. It has to be accountable.
Agentic AI Moves Trust From the Person to the Process
For individual users, trust in AI often comes down to review. A worker asks a chatbot to help write code or summarize research. The human checks the output, catches errors and course-corrects.
That model doesn’t work as neatly at enterprise scale.
When AI agents start moving across systems, data sources, departments and workflows, trust can’t rest solely on one person reviewing one output. It has to be built into the process itself.
“Now, when you move to the enterprise, agentic AI is all about breaking down data and organizational silos to automate business workflows, therefore trust shifts from the individual to the accuracy and repeatability of the process itself.”
That is a key enterprise AI distinction: can organizations verify those actions, validate the outcomes and prevent mistakes from scaling along with the automation?
SAS’ approach, Harris said, is built around four phases: design, execute, verify and validate.
In the design phase, the human role shifts from simply doing the work to defining the context. Harris described this as context engineering, where domain experts define the functional requirements, test objectives and technical requirements that guide the AI system.
Then AI agents can execute against that context. But before the output becomes usable, it has to pass through verification and validation. Verification confirms that the work meets technical specifications. Validation confirms that it meets business objectives.
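The verify-then-validate gate can be sketched as two separate checks an agent’s output must pass before release. The checks below are hypothetical stand-ins (the keynote did not detail SAS’ actual criteria), using the anti-money-laundering alert scenario as the example: verification tests a technical specification, validation tests a business objective.

```python
from dataclasses import dataclass

@dataclass
class AgentOutput:
    """Output of a hypothetical agent that prioritizes AML alerts."""
    alert_scores: list  # risk scores, highest-priority first

def verify(out: AgentOutput) -> bool:
    """Verification: does the output meet technical specifications?
    Here: every score must be a valid probability in [0, 1]."""
    return all(0.0 <= s <= 1.0 for s in out.alert_scores)

def validate(out: AgentOutput) -> bool:
    """Validation: does the output meet business objectives?
    Here: the highest-risk alerts must actually be ranked first."""
    return out.alert_scores == sorted(out.alert_scores, reverse=True)

def release(out: AgentOutput) -> bool:
    """Only output that passes both gates reaches the workflow."""
    return verify(out) and validate(out)

print(release(AgentOutput([0.9, 0.7, 0.2])))   # passes both gates
print(release(AgentOutput([0.2, 0.9, 0.7])))   # fails validation
```

The point of separating the two gates is that an output can be technically well-formed yet still wrong for the business, and the process has to catch both failure modes before errors compound at scale.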
The Enterprise AI Stack Is Becoming a Decision Stack
The keynote’s core message was that AI is no longer one thing.
It is not just generative AI. It is not just machine learning. It is not just agents. It is not just synthetic data, computer vision, optimization or governance.
It’s how all of these capabilities come together inside actual business workflows.
SAS is aiming for its Viya platform to be the layer for that convergence — a place where applications, agents and models can operate with governance and industry context built in.
“What makes our approach with [agentic] AI unique? Well, for us, it comes down to intelligent decisioning on SAS Viya,” said Harris.
That phrase — intelligent decisioning — may be the key to SAS’ AI strategy. The company is not trying to win the AI conversation on model size or consumer buzz. It is aiming at the operational layer where enterprises have to make decisions that are accurate, repeatable, explainable and tied to business outcomes.
That may not be as flashy as another chatbot demo. But for the enterprise, it is probably more important.
Related Article: SAS Launches AI Navigator to Govern Enterprise AI
People Still Matter — Maybe More Than Ever
So, will people still matter?
SAS’ answer was yes — but not in a sentimental way. The argument was that AI will become part of the background, the way many other transformative technologies eventually did. What lasts is not the technology itself, but what people do with it.
“Every breakthrough technology follows the same arc. It solves a problem, it reshapes society, and eventually it fades into the background of everyday life,” said Harris.
Today, that technology is AI. Tomorrow, it will be something else.
But the enterprise challenge remains the same: Use the technology to solve real problems, not just create new ones. For SAS, that means AI should scale human observation and decision-making. It should help organizations navigate complexity, test ideas, reduce risk and move with more confidence.
It should not reduce the AI conversation to small debates about whether people still belong in the loop.
If the keynote made one thing clear, it’s that SAS believes the future of enterprise AI won’t be defined by automation alone. It will be defined by trust, governance and whether organizations can use AI to make better decisions when the stakes are highest.