Agentic AI is emerging as the next frontier in autonomous decision-making, exhibiting enhanced autonomy, adaptability and the ability to pursue complex goals with minimal human supervision.
These AI agents can analyze vast amounts of data, optimize workflows and even anticipate needs, offering unprecedented insights and efficiencies. In healthcare, for example, AI systems like IBM Watson Health are already analyzing medical literature and patient data to provide personalized treatment recommendations.
However, as AI's decision-making capabilities grow more sophisticated, a critical question emerges: Can we trust machines to make critical decisions? The paradigm shift raises important considerations about the role of AI in strategic decision-making, the value of human guidance and the ethical implications of autonomous AI systems.
How AI Is – and Should Be – Used in Decision-Making
Discussions of AI making management decisions raise fundamental management issues, SchellingPoint CEO Michael Taylor told Reworked. Leaders and managers make three types of decisions: unilateral, informed and collaborative, he continued. Strategies, policies, complex problem solving, chartering transformation programs and projects, and similar actions are collaborative decisions, where the senior-most leader hashes it out with other key stakeholders to reach a decision.
AI is being used in these cases to gather stakeholders' opinions, analyze them and summarize what the group thinks as an input to the decision. The resulting analyses are rational and compelling, but also regularly misleading.
“This is due to the stakeholder opinion-gathering and analysis techniques designed in the last century, which aimed to observe groups rather than support their decision-making,” he said. “The analyses are attractive, logical and compelling, but AI used this way can create significant invisible misdirection and is rarely accurate.”
That said, human leadership and AI-driven insights can dovetail with one another, Maryam Ashoori, director of product management for IBM's watsonx.ai, explained. Human leaders bring empathy, creativity and critical thinking, while AI-driven insights offer objective, data-driven information. “By combining the strengths of both, organizations can make better decisions and achieve their goals,” she said.
This will only work, however, if AI-driven decisions are transparent, fair and accountable. Organizations need to invest in the right tools and best practices such as model interpretability and explainability to understand an AI system’s decisions and actions, while open source offerings can provide additional transparency and control, Ashoori said.
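The interpretability and explainability practices Ashoori mentions can be made concrete with a small sketch. This is not IBM's tooling; it assumes scikit-learn and uses permutation importance, one common model-agnostic explanation technique, on synthetic data with hypothetical feature indices.

```python
# Minimal explainability sketch: which inputs actually drive a model's
# decisions? Assumes scikit-learn; the dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=300, n_features=4, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's score drops. A large drop means the model relies on
# that feature -- a simple, model-agnostic form of explanation.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

In practice the same report would be attached to each deployed model, so stakeholders can see which inputs a decision leaned on rather than treating the system as a black box.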
Lack of Transparency and Empathy Hamper AI
Once it becomes clearer how AI makes decisions, we’ll likely delegate more tasks that were once reserved solely for humans, Marco Montes de Oca of EnFi told Reworked. But for now, large language models still operate as a “black (or gray) box.” The lack of transparency into their internal decision-making processes will make people hesitant to hand over consequential decisions, he said.
Machines also malfunction, which underscores the need for human oversight. “In the foreseeable future, humans will continue to make the final calls in high-stakes scenarios. However, there’s no question that AI will grow more capable and may take on increasingly sophisticated forms of leadership — albeit with humans still retaining ultimate responsibility,” he said.
Remember that while AI can mimic empathy, it can never feel it, he said. By analyzing tone, facial expressions and other cues it can approximate empathetic responses, but that's not the same as genuinely experiencing emotion.
True emotional intelligence understands human context, values and cultural nuances — factors that extend beyond pattern recognition. “While AI will likely become more sophisticated in simulating empathy, it’s still operating on programmed logic rather than genuine emotional understanding. In high-stakes or sensitive leadership roles, human empathy remains essential and difficult to replace with an algorithm,” he said.
The ideal scenario is one where humans bring creativity, ethical reasoning and emotional context, while AI excels in crunching data and recognizing patterns at scale, Montes de Oca concluded. Organizations should let AI do what it was designed to do, notably handle tasks that demand speed, accuracy or large-scale data processing, but keep human leaders in the loop for decisions involving ethics, empathy and strategic vision. "This synergy allows organizations to leverage AI’s strengths without losing the uniquely human qualities that guide ethical and empathetic leadership," he said.
What to Do Before Using AI for Decision Support
Right now, AI works best as a decision-support tool rather than an actual leader, Coralogix VP of AI Liran Hason added. Leadership isn’t just about making the “right” call — it’s about empathy, strategy and people management, a point Montes de Oca raised as well. "AI isn’t quite there yet. But in the future? Who knows. AI has already surprised us in ways we didn’t expect, so it’s possible," said Hason.
Until that point when we hand workplace decisions to AI, it comes down to three things: AI observability, guardrails and insights. “Companies need to track what AI is doing, flag potential issues and have real-time visibility into how decisions are made. If AI is making choices that affect people’s lives or businesses, there needs to be transparency,” Hason said.
Ultimately, humans should always have a hand in anything AI does. AI observability tools can help create an understanding of what’s happening in real time, the same way system observability helps engineers track downtime. "The sweet spot is human leadership plus AI-powered insights to make smarter, more informed decisions," Hason concluded.
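The "observability, guardrails and insights" pattern Hason describes can be sketched in a few lines: log every AI decision, and route low-confidence or high-stakes decisions to a human. All names, thresholds and domains here are hypothetical, intended only to show the shape of such a pipeline.

```python
# Sketch of AI observability plus guardrails: every decision is logged
# for real-time visibility, and risky decisions escalate to a human.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("ai-observability")

CONFIDENCE_FLOOR = 0.80             # guardrail: below this, escalate
HIGH_STAKES = {"credit", "hiring"}  # guardrail: always needs a human

@dataclass
class Decision:
    domain: str
    action: str
    confidence: float

def route(decision: Decision) -> str:
    """Log the decision, then decide whether a human must sign off."""
    log.info("decision domain=%s action=%s confidence=%.2f",
             decision.domain, decision.action, decision.confidence)
    if decision.domain in HIGH_STAKES or decision.confidence < CONFIDENCE_FLOOR:
        log.warning("escalating to human review: %s", decision.action)
        return "human_review"
    return "auto_approve"

print(route(Decision("support", "issue refund", 0.95)))     # auto_approve
print(route(Decision("hiring", "reject applicant", 0.99)))  # human_review
```

The design choice mirrors the article's point: the system never blocks AI from producing a recommendation, but transparency (the log) and accountability (the escalation path) are built in before any consequential action is taken.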
AI may ultimately need full authority to make real-time decisions without human input in high-risk, specialized environments, such as deep space missions, Montes de Oca noted. This will inevitably spark debate. However, in broader sectors like business and politics, humans will likely remain in control due to accountability concerns and the difficulty of attributing AI-driven mistakes. Yet, as AI becomes more transparent and reliable, the boundary between human and AI-led decisions may blur.
A "people-first" Human-AI Teaming (HAIT) strategy is crucial to ensure AI enhances human intelligence rather than replacing it, Seeq CTO Dustin Johnson stressed.
Transparency, accountability and ethical use of AI in leadership contexts is the way forward, he said. Organizations will likely update their mission statements to include these tenets.
Businesses need to establish a foundational framework that discloses when and how AI is used in decision-making processes, sets clear guidelines on who is responsible for AI-driven decisions and provides mechanisms for redress in cases of harm. These guardrails will help ensure AI systems are designed and used in ways that align with those values and prevent bias or discrimination.
“By integrating AI as a collaborative partner, organizations can foster creativity, improve decision-making, and reduce job displacement. Ethical considerations must guide AI's role in leadership to maintain trust and responsibility,” he said.
Editor's Note: Read more about how to navigate human-AI collaboration:
- A Digital Workplace Perspective on Where AI Can Enhance Employee Experience — AI can help build digital workplaces where employees thrive. Success lies in balancing tech innovation with an understanding of employee needs and aspirations.
- The AI Agent Explosion: Unexpected Challenges Just Over the Horizon — The coming wave of AI agents raises both potential and challenges in the technical, business and human domains.
- You Have to Know What AI You Have — The first step in creating a strong AI governance strategy is having a clear understanding of where and how AI is being used in your organization.