We’re entering a time when machines can act on our behalf: AI agents that can browse the web, click links and make decisions on their own. The technology opens up a world of opportunity… and a whole lot of complexity.
As organizations race to adopt agentic AI tools, a growing chorus of voices calls for a bigger conversation: What does alignment really mean when AI starts behaving like an agent, not just a tool? Is it enough to tell these systems what to do, or do we need to teach them how to care, cooperate and understand?
Four influential thinkers — Geoffrey Hinton, Emmett Shear, Catalina Herrera and Alan Ranger — are reenvisioning how we answer that question. And while their perspectives vary, they converge in one meaningful way: alignment isn’t just a technical problem. It’s a human one.
Rethinking Human-Agent Relationships
For decades, AI alignment focused on control: building systems with rigid boundaries to prevent harmful outcomes. But as AI agents develop the ability to form sub-goals, make autonomous decisions and operate in open-ended environments, those restrictive approaches are reaching their limits.
Geoffrey Hinton, the “Godfather of AI,” argued that this control-based framework is fundamentally flawed for future AI systems that will surpass human intelligence. “We need AI mothers rather than AI assistants,” he explained. “A mother has all sorts of built-in instincts… to really care about the baby. That’s what we should be working towards.”
This idea, that alignment should emerge from relationship rather than restriction, is echoed by Emmett Shear, former OpenAI CEO and co-founder of Softmax. Instead of forcing systems to follow human-defined rules, Shear proposed teaching AI to want the same things we want, through shared experience and social learning.
“Alignment can be seen as a shared capability instead of a system of control…,” he explained. “It has to be a two-way flow. It has to care about us, and we have to care about it.”
Related Article: ‘Mother AI’ Could Be Humanity’s Last Hope, Says Godfather of AI
Why AI Still Needs a Human Conductor
Even as AI systems grow more capable, human insight is not going out of style. Instead, it may be more important than ever.
Catalina Herrera, field CDO at Dataiku, likened AI to a symphony of data, models and capabilities — but one that still requires a human conductor. AI can generate outcomes, she said, but it’s up to us to shape them into something useful and aligned with real-world needs. “Behind every outcome there is a human touch and a whole lot of creativity.”
To illustrate the point, she shared an example from her own experience: it took her 49 carefully crafted prompts to produce a 47-second AI-generated reggaeton track. The takeaway? Even in an age of automation, craft still matters. It’s not just about using the tools; it’s about guiding them with creativity, context and vision.
Alan Ranger, VP of marketing at Cognigy, echoed this sentiment from the enterprise side. His team uses AI to assist with everything from proposal writing to brand language analysis, but none of it runs unchecked. Final decisions still rest with humans.
That "check" — that human-in-the-loop review — is where judgment comes in. Whether it's refining a prompt or weighing the ethical implications of a recommendation, human leadership and oversight remain essential.
AI Needs More Than Data — It Needs Human Stories
More than just editors or supervisors, humans are the meaning-makers in this new system. As Shear put it, AI models — like humans — learn from stories. The narratives we use to explain our decisions, our values and our intentions help these systems interpret not just what we want, but why we want it.
“The stories we’re all telling ourselves, we’re telling each other about how we relate to these AIs — those are going to determine how these language models… decide: are they part of us or not?”
In other words, AI may be trained on data, but it aligns through context. And that context comes from us.
AI Governance Starts with Shared Responsibility
As agentic AI systems grow in power, they introduce institutional risk. From biased outputs to rogue automation, the stakes are too high to leave governance to a single team or department. AI governance must be proactive and built into the foundation.
Ranger offered a clear example of how enterprises are adapting. “Pretty much every large enterprise we’re dealing with now has an AI council… from the contact center to compliance, to security, to legal, to even the ethics teams.”
These cross-functional teams act as internal checkpoints, ensuring that AI deployments align with company policy, data protection requirements and ethical standards, all before anything goes live.
AI Safety Depends on Built-In Governance Structures
Herrera took it further, calling for embedded governance frameworks that support scale and safety from day one. She framed it as a core requirement, not a nice-to-have.
“You need visibility, you need traceability. You need to ensure that whatever you are producing is part of a framework that is going to allow you to do this with control. You are the creator of the symphony, so own that place, own that position.”
She added that governance isn’t just about preventing harm — it’s also about creating a structure where agents can be deployed effectively and responsibly, with clear metrics, accountability and human-in-the-loop systems.
Whether the arena is enterprise or government policy, the lesson is the same: alignment is not the job of AI engineers alone. It’s a shared responsibility that touches compliance, operations, security, leadership — and ultimately, society.
Related Article: Do's, Don'ts and Must-Haves for Agentic AI
The Future of AI Is a Team Effort
As AI systems take on more autonomy and influence, the question is no longer just how smart they are — it’s how aligned they are. And as these four voices suggest, alignment isn’t something we can bolt on after the fact. It’s something we must build into the foundation with intention.
AI is moving into the realm of teammate, not just tool, and as with any good team, alignment requires shared goals, mutual understanding and accountability on both sides.
The future of agentic AI will be shaped not only by what we build, but by how we choose to live with what we’ve built. That’s a challenge that can’t be delegated.