Feature

AI Agents Are Part of Your Workforce. Treat Them That Way

By Mark Feffer
AI agents aren't just automation — they need clear task specs, evaluation criteria and human oversight to ensure ethical goal achievement.

A new software engineer recently began work at Goldman Sachs. In and of itself, that may not be news, except the engineer is an AI called “Devin” developed by the San Francisco AI lab Cognition. 

Goldman Chief Information Officer Marco Argenti told CNBC, “We’re going to start augmenting our workforce with Devin, which is going to be like [a] new employee who’s going to start doing stuff on the behalf of our developers.”

The use of AI agents is growing. According to Grand View Research, the global market for AI agents will surge from about $5.4 billion in 2024 to more than $50 billion by 2030. Cognition isn’t alone in positioning its AI as a “team member” rather than a “solution.” Another San Francisco company, Lindy, markets its automation tools with the tagline “your next hire isn’t human.”

Established tech vendors are taking similar tacks. Last year, San Francisco-based HR solutions provider Lattice began creating employee records for digital workers on its platform. “Digital workers will be securely onboarded, trained and assigned goals, performance metrics, appropriate systems access, and even a manager,” wrote CEO Sarah Franklin. “Just as any person would be.” 

The New Kid on the Team

AI agents are currently used for tasks typically assigned to entry-level workers, such as drafting reports, addressing customer service issues or conducting initial interviews and resume screens.

But the fact that AI agents are taking on relatively simple tasks doesn’t mean they can be left unattended, especially today, while the technology is still new. There’s a “significant gap” between the hype of AI and the reality, observed author and automation expert Pascal Bornet. In the near term, at least, humans must step in when agents are hamstrung by nuance or unanticipated developments.

Put another way, AI agents need “supervisors.” According to PwC, new technology will evolve management roles so they include integrating agents and other “digital workers” into a company’s overall workforce strategy. Managers’ mindsets will have to shift, but in return they’ll gain greater flexibility in allocating resources to meet changing business conditions. 

As a part of that shift, managers must change how they regard their workforce. They should look at their teams as blended entities rather than groups of people and separate sets of agents. Also, they should see agents in the context of clear “roles,” and prepare for their arrival just as they’d prepare for a new employee. 

For example, agents require access to data and other content necessary to perform their tasks, and the ability to tap into other systems. Managers must be able to track an agent’s performance and gauge the quality of its work.

Rethinking the Employee’s Role

This requires managers to rethink their expectations for each role. It’s one thing to say recruiters will use AI, but another to define exactly how. And once you’ve defined it, how do you measure the performance of the person and of the machine? Documenting the approach to such questions also helps companies plan the reskilling of current workers, who must adapt how they work: learning to craft effective prompts, for example, and being ready to handle exceptions within an AI-automated process.

Indeed, KPMG advocates for managing agents with the same processes as for human workers in onboarding, learning, upskilling and performance management. The firm even suggests agents be included on organization charts, with a clear view of who is responsible for supervising each one. Many experts believe companies should build “employee files” for each agent so managers can track usage, updates, training and human users.
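An agent “employee file” of the kind KPMG and others describe could start as a simple structured record. Here is a minimal sketch in Python; the field names and the `AgentRecord` class are illustrative assumptions, not a standard schema any vendor prescribes:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AgentRecord:
    """Hypothetical 'employee file' for an AI agent."""
    agent_id: str
    role: str                 # the defined "role" the agent fills
    supervisor: str           # the human responsible for oversight
    model_version: str        # bumped whenever the agent is updated/retrained
    systems_access: list[str] = field(default_factory=list)
    human_users: list[str] = field(default_factory=list)
    onboarded: date = field(default_factory=date.today)
    training_notes: list[str] = field(default_factory=list)

# Example: onboarding a resume-screening agent much as HR would a new hire
screener = AgentRecord(
    agent_id="agent-0042",
    role="Resume screener",
    supervisor="j.doe@example.com",
    model_version="v2.1",
    systems_access=["ATS", "job-descriptions-db"],
)
screener.training_notes.append("Calibrated against hiring-criteria checklist")
```

Keeping such records alongside human personnel files gives managers one place to see who supervises each agent, what systems it can touch and when it was last updated.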

Ultimately, each agent needs a human “manager” from HR, IT or the line of business itself. In addition to tracking performance, these human team members might monitor success rates and turnaround time, as well as user satisfaction scores, engagement rates, task-completion rates and error (and hallucination) frequency.
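The metrics above can be rolled up from an ordinary task log into a scorecard a human supervisor reviews. A hedged sketch, assuming each task is logged as a small dictionary; the log format, field names and `agent_scorecard` function are illustrative, not a real product’s API:

```python
def agent_scorecard(task_log: list[dict]) -> dict:
    """Summarize an agent's task log into supervisor-facing metrics:
    task-completion rate, error (and hallucination) frequency, and
    average turnaround time."""
    total = len(task_log)
    if total == 0:
        return {"tasks": 0}
    completed = sum(1 for t in task_log if t["status"] == "completed")
    errors = sum(1 for t in task_log
                 if t.get("error") or t.get("hallucination"))
    avg_minutes = sum(t["minutes"] for t in task_log) / total
    return {
        "tasks": total,
        "completion_rate": completed / total,
        "error_frequency": errors / total,
        "avg_turnaround_min": round(avg_minutes, 1),
    }

log = [
    {"status": "completed", "minutes": 4.0},
    {"status": "completed", "minutes": 6.0, "hallucination": True},
    {"status": "escalated", "minutes": 12.0},  # handed off to a human
]
scorecard = agent_scorecard(log)
```

A supervisor might review such a scorecard on the same cadence as a human team member’s check-in, escalating when error frequency drifts upward.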


AI Agents Aren’t ‘Set It and Forget It’

AI agents aren’t simply about automation. Each requires a precise description of what it should accomplish, how it will be evaluated and who will be responsible for ensuring it meets those goals and behaves ethically.

Getting the most out of AI agents involves more than simple implementation. It requires an understanding of the agent’s purpose, its operation and the people it will work with, and the ability to see all of that in the context of an overall team. 


About the Author
Mark Feffer

Mark Feffer is the editor of WorkforceAI and an award-winning HR journalist. He has been writing about human resources and technology since 2011 for outlets including TechTarget, HR Magazine, SHRM, Dice Insights, TLNT.com and TalentCulture, as well as Dow Jones, Bloomberg and Staffing Industry Analysts. He likes schnauzers, sailing and Kentucky-distilled beverages.

Main image: Adobe Stock