News Analysis

How SAS Is Developing Responsible Agentic AI

By David Barry
It's not enough for AI agents to be autonomous. They also have to be accountable. SAS's Viya platform is trying to make that happen.

As artificial intelligence (AI) continues its rapid evolution, a shift is underway from passive assistants to autonomous agents capable of independent decision-making and execution. This next stage, agentic AI, promises unprecedented efficiency and scalability, but it also introduces pressing questions around ethics, governance and control.

SAS, a company better known for analytics, is stepping into this space. Its latest announcement, an agentic AI framework embedded within its SAS Viya platform, is intended to turn AI agents into trusted business differentiators rather than experimental novelties. Central to the framework are new SAS Intelligent Decisioning capabilities, which help organizations design, deploy and scale AI agents while maintaining a balance between autonomy and human oversight.

Building Autonomous AI Agents With Guardrails

Tech executives are charging ahead on agentic AI with 48% already adopting or fully deploying autonomous agents in some capacity, according to a recent EY survey. Even more telling, half of all tech leaders said they expected more than 50% of AI deployments within their companies to be autonomous in the next 24 months. These stats reveal not just ambition, but urgency.

The survey also highlights optimism across the industry. A striking 81% of respondents said they believed AI will help them achieve their goals within the next year. That optimism is fueling a hiring wave: 84% of tech leaders said they expected to add AI-skilled talent in the next six months, while 70% are focused on training current employees.

This surge in AI confidence underscores the importance of doing it right. Autonomy must be paired with accountability, said Shaila Rana, IEEE senior member and professor of cybersecurity at Purdue University Global.

“Routine, data-driven tasks with minimal consequences are prime candidates for higher automation,” Rana said. “But strategic decisions with significant impact require stronger human involvement.”

“Enterprises need to plan for oversight from the beginning,” agreed David Menninger, executive director of software research at ISG. “We've seen projects delayed by six months or more because governance wasn’t built in from the start.”

To address this, SAS has embedded decision governance into the Viya platform, helping organizations approve, audit and trace AI agent decisions. This built-in structure helps them strike a balance between innovation and compliance: moving fast, but safely.

Responsible AI and Transparent Architecture

SAS’s agentic AI framework is designed to be explainable as well as powerful. The architecture combines machine learning models with rules-based logic and workflow automation, so that AI decisions are both traceable and understandable.

“SAS Intelligent Decisioning has been around for years,” said Menninger. “It provides a robust decisioning mechanism with approval workflows. These workflows offer explainability by design.”

The platform's multi-layered approach lets organizations tailor explanations to different audiences. Non-technical stakeholders get high-level rationales, while specialists can dive into detailed model logic. This contextual transparency makes it easier for groups to work together and helps bridge the gap between data scientists and business leaders.

Human-in-the-loop elements are also core to SAS’s approach. Rather than replacing employees, AI agents are intended to complement them by executing tasks while leaving room for human judgment when needed.

“The most effective approach combines regular performance audits with feedback loops,” Rana said.  “That way, organizations can continuously improve the balance between automation and oversight.”

Furthermore, explainability isn’t an afterthought. SAS includes built-in tools for documenting AI logic, monitoring outputs and flagging anomalies, making it easier to meet both internal audit requirements and external regulatory standards.

SAS’s AI Integration Strategy

For agentic AI to succeed, it can’t operate in isolation. It needs to integrate into the tools, workflows and systems businesses already rely on. 

SAS Viya is designed with modular integration in mind. AI agents can be embedded into decision environments such as CRM systems, ERPs, call centers, supply chain software and more. By analyzing real-time data and automating responses within these systems, SAS agents bring agentic AI directly into existing workflows.

One feature is Viya’s graduated automation. Organizations can start with decision support tools, where the agent makes recommendations, and gradually evolve to full decision automation once staff feel comfortable with the process and have established controls. 
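
Graduated automation of this kind can be sketched as a simple escalation policy. The autonomy levels, confidence threshold and routing logic below are illustrative assumptions for the sake of the example, not SAS Viya's actual API:

```python
from dataclasses import dataclass

# Hypothetical autonomy levels for a phased rollout (not SAS terminology).
RECOMMEND_ONLY = "recommend"      # agent suggests, a human decides
AUTO_WITH_REVIEW = "auto_review"  # agent acts, humans audit afterwards
FULL_AUTO = "full_auto"           # agent acts within approved bounds

@dataclass
class Decision:
    action: str
    confidence: float

def handle(decision: Decision, mode: str, confidence_floor: float = 0.9) -> str:
    """Route a decision according to the current autonomy level."""
    if mode == RECOMMEND_ONLY:
        return f"RECOMMEND: {decision.action} (awaiting human approval)"
    if decision.confidence < confidence_floor:
        # Even in automated modes, low-confidence decisions escalate to a human.
        return f"ESCALATE: {decision.action} (confidence {decision.confidence:.2f})"
    if mode == AUTO_WITH_REVIEW:
        return f"EXECUTE+LOG: {decision.action}"
    return f"EXECUTE: {decision.action}"
```

The point of the pattern is that moving from one level to the next changes only a configuration value, not the agent itself, which mirrors the "start with recommendations, graduate to automation" progression described above.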

“Think of using AI agents like hiring a new employee,” said Menninger. “You wouldn’t let them make mission-critical decisions without training and oversight. AI agents need the same kind of phased integration.”

SAS’s workflow orchestration lets businesses design custom approval hierarchies, define task-level autonomy settings and track decision lineage over time. This level of fine-grained control is important in sectors with stricter regulatory requirements, such as finance, healthcare and government. 
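
Decision lineage of the sort described above amounts to an append-only audit trail recording what was decided, by which agent, and under whose approval. The record structure below is a hypothetical sketch; SAS's actual orchestration interfaces are not shown:

```python
import datetime

# Append-only audit trail of agent decisions (illustrative structure).
lineage = []

def record(agent, task, outcome, approver=None):
    """Log who (or what) decided, what was decided, and who approved it."""
    lineage.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "task": task,
        "outcome": outcome,
        "approver": approver,  # None means the task ran fully autonomously
    })

# A regulated task routed through a human approver, and a routine one that was not.
record("credit-agent", "limit_increase", "approved", approver="risk_officer")
record("credit-agent", "address_update", "applied")
```

Because every entry names an approver (or explicitly records its absence), the trail can answer the audit question regulators in finance, healthcare and government typically ask: who signed off on this decision?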

Cultural integration is another important aspect, Rana said. “Organizations should treat AI as a collaborator, not a replacement,” she said. “This encourages employees to engage with the technology, not fear it.”

SAS supports this through role-based user interfaces, so employees across departments can interact with AI agents in ways that suit their expertise and responsibilities. This human-centered design builds trust and makes employees more likely to adopt the technology.

Mitigating Agentic AI Bias and Risk 

As AI agents take on more responsibility, monitoring for bias and managing risk become more important. SAS Viya includes tools for:

  • Bias detection in training data.
  • Toxicity scoring of outputs.
  • Stress testing agents under edge-case scenarios.
  • Custom alerts for anomalous decisions.
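
As a rough illustration of what such tooling computes, the sketch below implements a simple demographic-parity bias check and a z-score-style anomaly alert. The metrics, thresholds and function names are placeholder assumptions, not SAS Viya's implementations:

```python
# Illustrative checks mirroring the monitoring bullets above.

def demographic_parity_gap(approval_rates: dict) -> float:
    """Bias check: gap between the highest and lowest group approval rates."""
    rates = approval_rates.values()
    return max(rates) - min(rates)

def flag_anomaly(value: float, history: list, tolerance: float = 3.0) -> bool:
    """Custom alert: flag a decision metric far outside its historical range."""
    mean = sum(history) / len(history)
    spread = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5
    return abs(value - mean) > tolerance * spread if spread else value != mean

gap = demographic_parity_gap({"group_a": 0.72, "group_b": 0.58})
if gap > 0.1:  # example fairness threshold
    print(f"bias alert: approval-rate gap {gap:.2f}")
```

Real platforms use far richer fairness metrics and distribution tests, but the shape is the same: compute a metric per group or per decision, compare it against a threshold, and raise an alert for human review.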

Rana advocates for a lifecycle approach to bias management: “You need diverse and representative training data, plus regular independent audits by cross-functional teams,” she said. “And once an issue is found, organizations must have a plan in place to roll back agents or increase oversight.”

Organizations shouldn’t expect software vendors to solve everything out of the box, Menninger said. “Only a small percentage of AI platforms have mature governance capabilities,” he noted. “Most will need to be supplemented by internal processes and human review mechanisms.”

Using Visibility to Mitigate AI Bias

Bias and risk mitigation begins with visibility because users can't manage what they can't see, said Ben Kliger, CEO of Zenity. To deploy AI agents in the workplace effectively, organizations need to understand not just where these agents operate, but how they function and evolve. To stay ahead of potential risks, Kliger suggested: 

  1. Gain visibility into all AI agents: Map out which agents are in use, where they’re deployed and what roles they’re performing. A comprehensive inventory lays the foundation for responsible oversight.
  2. Understand their design and purpose: Context is everything. An AI agent intended to support HR decisions should behave differently from one handling financial operations. Knowing their intended function helps set the right guardrails.
  3. Observe behavior continuously: Ongoing monitoring helps spot unexpected actions or reasoning patterns early. This proactive stance helps identify vulnerabilities before they lead to harm.
  4. Detect and respond to emergent threats: Real-time detection of anomalies — such as unauthorized data access or deviations from predefined goals — is critical. Rapid response mechanisms help protect both data and operations.
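The four steps above can be sketched as a minimal agent registry with behavioral monitoring. All names and fields here are illustrative, not drawn from Zenity or SAS products:

```python
from collections import defaultdict

registry = {}                      # step 1: inventory of deployed agents
behavior_log = defaultdict(list)   # step 3: observed actions per agent

def register(agent_id: str, purpose: str, allowed_actions: set):
    """Steps 1-2: record where an agent runs and what it is meant to do."""
    registry[agent_id] = {"purpose": purpose, "allowed": allowed_actions}

def observe(agent_id: str, action: str) -> bool:
    """Steps 3-4: log behavior and flag actions outside the agent's remit."""
    behavior_log[agent_id].append(action)
    if action not in registry[agent_id]["allowed"]:
        print(f"ALERT: {agent_id} attempted '{action}' outside its purpose")
        return False
    return True

register("hr-screener", "support HR decisions", {"rank_resume", "summarize"})
observe("hr-screener", "rank_resume")     # expected behavior
observe("hr-screener", "transfer_funds")  # emergent threat: flagged
```

The registry answers "which agents exist and why," while the observation hook turns the agent's declared purpose into an enforceable boundary, which is the essence of Kliger's guardrail argument.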

Integrating AI agents into existing workflows shouldn't mean giving up control, Kliger said. The key is to model trust in AI agents the same way we do with human colleagues. When done thoughtfully, AI makes people more productive while preserving transparency and accountability. Organizations should embed security from the start, define clear behavioral boundaries through trust modeling and ensure transparency in decision-making, he said. Maintaining human oversight preserves both control and accountability.

Collaboration, Not Replacement

Ultimately, the goal of agentic AI isn’t to replace people, but to complement them. When integrated thoughtfully, AI agents free employees to focus on creative, strategic and interpersonal tasks while offloading repetitive, data-driven decisions to machines. 

This vision is already influencing hiring and workforce planning. According to the EY survey:

  • 70% of tech companies are investing in training.
  • 68% are hiring new AI-skilled talent.
  • A majority of organizations see AI as a collaborative tool, not a threat.

SAS’s agentic AI framework fits with this by supporting customizable autonomy, embedded governance and human-first integration strategies. It encourages experimentation within guardrails, helping companies explore AI’s potential without giving up control.

“Leadership’s role is to set clear boundaries,” said Menninger. “Once those are in place, AI can scale responsibly and productively across the enterprise.”

Responsible Agentic AI Is Already Here

Agentic AI is not science fiction — it’s already in production. But for long-term value, organizations must prioritize governance, transparency and collaboration from the start.

SAS’s agentic AI capabilities in the Viya platform demonstrate how to responsibly scale AI agents. By building governance into the decision architecture, providing explainability at all levels and integrating with existing tools, SAS hopes to show that it’s possible to deploy AI that is accountable as well as autonomous.


About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management, enterprise content management and content services. Now, with the development of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communications and work and has recently spent a great deal of time exploring the far reaches of AI, generative AI and General AI.
