Every company is excited about AI right now, but excitement doesn’t automatically translate into impact. MIT research shows that the majority of AI pilots never produce measurable results. After spending the past year working closely with teams trying to get value out of AI, I’m convinced the real challenge is not the technology itself, but redesigning work so people and AI can operate together in a sustainable way.
One project inside our organization helped make this clear. We’ve been building and testing an AI-powered conversational interface called Data Agent. It helps teams quickly answer workforce questions — like “how many people work at this site?” or “what’s the turnover rate for this group?” — without meetings, tickets, emails or manual data digging.
The project is still evolving, but building it has given us a practical blueprint for how organizations can move from experimentation to transformation and redefine how work gets done with AI agents.
From Idea to Alignment
Our starting point was simple. When deciding which AI use cases to focus on, we needed to understand where people were spending their time. We mapped every major task across our HR teams, and one theme appeared quickly: virtually every role in HR, especially analysts, spent hours each week running reports and answering questions, while HR business partners (HRBPs) and other strategic roles were stuck in manual data work.
A common pattern emerged. Someone submitted a ticket, an analyst interpreted it, pulled data, asked clarifying questions and delivered an answer, often after multiple iterations. The SLA was two days.
We realized this was an opportunity for agentic AI, reinforced by data showing the size of the problem. That helped align HR, IT and data teams around a shared problem statement and vision. Everyone understood what we were trying to solve and why it mattered.
This is a step I recommend to every organization. Before talking about AI, get clear on where time is going and what outcomes you want to improve.
From Implementation to Integration
With the problem defined, we built a simple first version. We placed a large language model on top of our data to see whether it could answer common questions in natural language. It worked sometimes, but not reliably, and often only for experts who knew how to phrase their prompts. These early tests made one thing clear: context was essential. The model didn't understand the terms employees used every day (e.g., EIC = Early in Career).
AI needs structure to operate well in a business setting, so we created a context layer that defined key terms, hierarchies, teams and relationships. It helped the model interpret questions and helped employees understand results. If someone asked for “new hires in our sales organization this year,” the layer clarified definitions and produced a transparent answer.
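To make the idea concrete, here is a minimal sketch of what a context layer can look like: a glossary of business terms and organizational groupings injected into the model's prompt before the question. The definitions and groupings below are illustrative stand-ins, not our actual schema; only the EIC shorthand comes from our experience.

```python
# Minimal sketch of a context layer: definitions and hierarchies the model
# needs to interpret everyday shorthand. All entries are illustrative.

GLOSSARY = {
    "EIC": "Early in Career (hypothetical definition: first years of tenure)",
    "new hire": "employee whose start date falls in the current fiscal year",
    "span of control": "number of direct reports per people manager",
}

HIERARCHY = {
    "sales organization": ["Field Sales", "Inside Sales", "Sales Operations"],
}

def build_prompt(question: str) -> str:
    """Assemble the model prompt: definitions first, then the question."""
    terms = "\n".join(f"- {k}: {v}" for k, v in GLOSSARY.items())
    orgs = "\n".join(f"- {k}: {', '.join(v)}" for k, v in HIERARCHY.items())
    return (
        "You answer workforce questions. Apply these definitions:\n"
        f"{terms}\n\nOrganizational groupings:\n{orgs}\n\n"
        f"Question: {question}\n"
        "State which definitions you applied so the answer is transparent."
    )

print(build_prompt("How many new hires joined our sales organization this year?"))
```

In practice, a layer like this would draw on governed HR metadata rather than a hard-coded dictionary, which is what lets the same definitions serve both the model and the employee reading the answer.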
Next came prioritization. Instead of supporting every question, we focused on the highest-volume tasks using years of request data. We started with simple, repeatable needs, then moved to more complex metrics like span of control and attrition. This improved accuracy and built trust with early users.
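As a simplified illustration of that prioritization step, imagine counting how often each question type appears in historical tickets and starting from the top of the list. The categories and counts below are invented for illustration, not our actual request data.

```python
# Hypothetical sketch: rank candidate use cases by historical request volume.
from collections import Counter

historical_requests = [
    "headcount", "headcount", "turnover", "headcount",
    "tenure", "span of control", "turnover", "headcount",
]

volume = Counter(historical_requests)

# Highest-volume, simplest questions first; complex metrics come later.
for use_case, count in volume.most_common():
    print(f"{use_case}: {count} requests")
```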
Integration was equally important. If Data Agent required a new tool or workflow, adoption would drop. We embedded it into our primary employee experience platform so it became part of normal processes, and into collaboration tools so employees could use it naturally within their conversations.
Measurement That Drives Momentum
Once employees started using Data Agent, we could see how well it was actually working. Adoption became one of the strongest signals of value. We tracked it closely through our control tower to see whether people found the tool useful and where it was making an impact. We also consistently reminded employees to use Data Agent when data questions came in through other channels.
Recent examples showed the shift. A leader needed to know which employees had the longest tenure in an organization. Previously, that required contacting an analyst, waiting for data and reviewing a spreadsheet. With Data Agent, the answer came back instantly. Another example came as we worked to improve span of control. Every leader can now get their data with a simple query rather than waiting for manual support, and this can happen in real time during meetings.
Moments like these showed real progress. The gains weren't only about saving analyst time; Data Agent changed how people accessed information and how quickly they could act.
In addition to adoption, we tracked accuracy, response time and employee experience. These metrics helped refine the context layer, identify new use cases and determine which ones weren’t valuable enough to keep. Measurement shaped our roadmap and made progress visible.
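A stripped-down sketch of that measurement loop might log each interaction and aggregate adoption, accuracy and response time. The field names and sample values below are assumptions for illustration, not our actual control tower schema.

```python
# Illustrative measurement loop: log interactions, then aggregate the
# adoption, accuracy and response-time signals that shape the roadmap.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Interaction:
    user_id: str
    use_case: str
    response_seconds: float
    answer_correct: bool  # e.g., from user feedback or spot checks

log = [
    Interaction("u1", "headcount", 2.1, True),
    Interaction("u2", "turnover", 3.4, True),
    Interaction("u1", "span of control", 5.0, False),
]

adoption = len({i.user_id for i in log})           # distinct users
accuracy = mean(i.answer_correct for i in log)     # share of correct answers
latency = mean(i.response_seconds for i in log)    # average response time

print(f"distinct users: {adoption}")
print(f"accuracy: {accuracy:.0%}, avg response: {latency:.1f}s")
```

Per-use-case breakdowns of the same signals are what make it possible to spot which use cases aren't valuable enough to keep.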
Role Redesign and Enablement
As AI began handling basic reporting, analytics roles started to change. Analysts who once spent much of their time on routine tasks shifted toward work that required interpretation and judgment. Instead of repeatedly pulling data, they explored trends, evaluated how new skills influenced performance and supported leaders with strategic insights. They also contributed to building, maintaining and operating Data Agent and other AI agents, which became an important reskilling effort.
Early-in-career roles evolved as well. Historically, new analysts learned by manually gathering data. Now they learn by helping build and improve AI use cases. They train the agent to answer questions across the organization, expand coverage to new topics and strengthen the accuracy of its responses.
This evolution only works when organizations design for it. People need clarity on how responsibilities will shift, which skills the new work requires and what training will help them grow. Without that clarity, teams try to juggle both their old work and new responsibilities, or they avoid implementing AI altogether. This creates confusion and slows progress.
Training and enablement were essential as responsibilities changed. We used skill assessments, structured learning paths and platform credentialing to help people build confidence with AI-enabled workflows. Employees also needed to see AI used in real decision-making. When leaders used the tools themselves, it signaled that this was not a temporary experiment but part of how we would operate going forward.
Long-term adoption depends on steady reinforcement. Teams need to know when to use AI, how to evaluate output and how to escalate issues when something does not seem right. Building that muscle takes practice and transparency. These are skills analysts have historically mastered, but now every employee needs them. When role redesign and enablement move together, people adapt faster, trust grows and AI becomes part of everyday work, delivering value to the business.
Transformation as a Living System
Projects like Data Agent don’t reach a finish line. They evolve as the organization changes, as employees learn and as new capabilities emerge.
Organizations that focus on clear use cases, early alignment, practical integration, thoughtful measurement and intentional role redesign avoid the pattern of AI pilots that never scale. They make steady progress, build trust and create space for people to do more meaningful work. Companies that build these habits now will be ready for the next wave of AI innovation and the new possibilities it creates for how work gets done.