
All AI-Based Monitoring Isn't Received the Same Way

By Mark Feffer
Employees react very differently to AI-based monitoring tools, depending on whether they feel the tools increase or decrease their autonomy.

AI is changing so quickly that employers and vendors sometimes get so enthusiastic about an idea that they fail to consider all of its consequences.

Vendors keep telling us AI’s essential mission is to help people handle more complex issues in less time — and that’s fine as far as it goes. But as with many efforts in today’s technology world, even simple missions come with unintended consequences. 

Intentions Matter. So Does Context

A case in point is AI tools that monitor productivity and employee behavior. Researchers at Cornell University recently found that the use of these tools can decrease productivity and increase quit rates. In the study, the researchers told some participants they would be monitored by AI during a brainstorming session and told others they would be monitored by a human. The contrast was striking: although both groups received the same feedback, the participants evaluated by AI showed greater resistance, generated fewer ideas and expressed a loss of autonomy. The group monitored by a human showed little impact on its results and much less resistance to the feedback.

In another part of the study, the researchers told participants AI would monitor their work, but only to provide developmental feedback. In that case, participants displayed neither the resistance nor the loss of autonomy that the earlier participants had.

AI adoption therefore isn’t about the tool so much as what you intend to do with it, and how you go about introducing it. For this reason, analysts believe corporations can obtain employee buy-in by emphasizing that AI is being brought in to assist workers as they do their jobs, not to rate or replace them.

Remote monitoring is another area where this comes up. As remote and hybrid work increase, employers need new ways to keep track of what workers are doing, suggests Forbes in an article titled "How to Destroy the Spirit and Competitiveness of a Company With AI." That requires a suite of tools that can track and analyze the hows and whens of employees’ work. Remote monitoring can in theory solve the problem by letting managers “watch” in the background.

Of course, not everyone sees monitoring in the same light. Zoho, an India-based provider of cloud software for businesses, rejects monitoring as a solution. Co-founder and CEO Sridhar Vembu told Forbes the company doesn’t monitor employees or track potential customers, nor does it put cookies on its various web properties.

"The modern principle of ‘that which can’t be measured, can’t be managed’ is fundamentally bad," Vembu said. “Metrics are good for widgets. But this idea that you can apply endless metrics to improve human beings is what destroys the spirit … the spirit of organizations, the spirit of teams. If we keep our employees happy, they’ll keep customers happy. If you want to prevent customer attrition, you have to first address employee attrition.”

Understand How People Will Respond

Whatever technical path you’re pursuing, Cornell’s study found that surveillance tools are particularly problematic when workers perceive them as evaluative. The technology itself is not the issue; the reaction of those being monitored is. When machines track things like physical activity, vocal tone and verbal and written communications, people rightfully feel they lose autonomy.

Those concerns can be addressed, however, experts say. Employees are less leery of monitoring when they’ve been convinced the tools will help them improve their work rather than judge their performance. It comes down to accuracy: workers aren’t convinced that AI monitors understand context well enough to judge their work accurately.

“When artificial intelligence and other advanced technologies are implemented for developmental purposes, people like that they can learn from it and improve their performance,” said one of the Cornell study’s authors, Associate Professor Emily Zitek. “The problem occurs when they feel like an evaluation is happening automatically, straight from the data, and they’re not able to contextualize it in any way.”

The fear of being judged without context aside, the Cornell researchers found that those who were told they were being monitored by AI generated fewer ideas — indicating poorer performance.

“Even though the participants got the same message in both cases that they needed to generate more ideas, they perceived it differently when it came from AI rather than the research assistant,” Zitek said. “The AI surveillance caused them to perform worse in multiple studies.”

“Organizations trying to implement this kind of surveillance need to recognize the pros and cons,” Zitek said. “They should do what they can to make it either more developmental or ensure that people can add contextualization. If people feel like they don’t have autonomy, they’re not going to be happy.”

About the Author
Mark Feffer

Mark Feffer is the editor of WorkforceAI and an award-winning HR journalist. He has been writing about human resources and technology since 2011 for outlets including TechTarget, HR Magazine, SHRM, Dice Insights, TLNT.com and TalentCulture, as well as Dow Jones, Bloomberg and Staffing Industry Analysts. He likes schnauzers, sailing and Kentucky-distilled beverages.

Main image: Chris Yang | Unsplash