If it sounds too good to be true, it probably is. The old saying rings true in most cases, yet when it comes to AI solutions, all bets are off.
After all, the latest generation of LLM technology can do things we wouldn't have imagined five years ago. AI can generate a blog post in as little as 10 seconds, summarize long documents and extract key information almost instantly, saving workers hours of reading.
AI can make a more accurate medical diagnosis than a physician 80% of the time, according to Harvard Health. Multiple peer-reviewed studies and real-world deployments have shown that AI processes medical images significantly faster than human experts (think seconds versus minutes or hours) while maintaining comparable, and sometimes superior, accuracy.
Amazing as all that is, other claims deserve more scrutiny, such as HireVue's (partially recalled) assertion that its algorithmically rated video interviews can predict candidate success more accurately than humans can, and with no bias.
AI Snake Oil or Savior?
While some employers may not view those kinds of claims with suspicion, many job seekers and experts like Princeton University's Arvind Narayanan and Sayash Kapoor do. In the overview of their 2024 book "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference," they write that just because companies use predictive AI, "that doesn't mean it works."
They go on to argue that hiring is one area where AI snake oil is concentrated, a dangerous development given how much hiring decisions affect people's lives and careers.
But there isn't a better way, argue some HR managers and industry experts. They point out that AI interviews, biometric measurements and other assessments (game playing and the like) can happen at any time and place, making them more convenient for everyone. They also contend the process is fairer: competing job seekers are asked the same questions and graded by the same algorithms, making hiring more objective and efficient.
Fast-Track Hiring: Efficiency or Discrimination?
AI-based hiring is also less expensive and more efficient, they say. So are AI video interviews and biometrics good indicators of high-quality workers?
We asked Holger Mueller, vice president and principal consultant at Constellation Research. "Yes, even the banal stuff," he said, adding that he once heard "crazy" interview numbers from a hotel chain. "They seemed mathematically not possible …," he explained.
But after talking with managers, he learned that applicants with visible tattoos and piercings were automatically disqualified. This means that a "no" decision took 2-3 seconds versus the 20-30 minutes an interview would have taken.
Mueller also noted that "body language, gestures are totally different than a written resume…" Video is simply cheaper than observing these things in person, and you can stop watching a video interview at any time. So in one person's view, video interviews are snake oil, while in another's they are a valuable, effective, time-saving hiring mechanism.
When Algorithms Judge: The Human Cost
Research by critics, such as Dr. Joy Buolamwini, author of "Unmasking AI: My Mission to Protect What Is Human in a World of Machines," has found that in some cases, the same kind of AI can reinforce bias, leading to lawsuits and harming job seekers.
Discrimination happens with other brands of AI tech, too. Take, for example, a legal case against HR technology vendor Workday, in which a job applicant claims the company's hiring tools discriminated against him and other job seekers over 40 based on age, rejecting them instantly.
The applicants are pursuing a class-action lawsuit on behalf of all affected applicants since 2020. Workday has argued that it can't be sued under age-based discrimination laws because it's a vendor, not an employer. But U.S. District Court Judge Rita Lin ruled that Workday acts as an employer's agent, allowing the case to proceed.
"The (lawsuit) plausibly alleges that Workday's customers delegate traditional hiring functions, including rejecting applicants, to the algorithmic decision-making tools provided by Workday," wrote Lin.
Legal Landmines: Who's Responsible When AI Goes Wrong?
So if you're an enterprise in the process of hiring, the legal danger comes not only from job seekers who believe they've been wronged by AI but also from vendors who will point fingers at you as the party responsible for the harms their technology caused.
How can we tell if claims like "fully automated hiring" or "100% unbiased AI" deliver as promised? AI ethicist Cathy O'Neil recommends taking a look at the algorithms. "Algorithms automate the status quo, including hidden biases," she said.
Recruiting tools aren’t the only ones worth a closer look. AI is already causing harm, particularly in how it's being built, marketed and used in banking, education, hiring, insurance and criminal justice, according to Narayanan and Kapoor.
Their book explains the crucial differences "between types of AI, why organizations are falling for AI snake oil … why AI isn't an existential risk, and why we should be far more worried about what people will do with AI than about anything AI will do on its own."
Given all of this, what can AI buyers do to tell the difference between AI snake oil and the real deal, the genuinely helpful kind?
Spot the Fakes: Your AI BS Detector
Fake AI ("snake oil") relies on vague claims, doesn't explain its methodology and often uses manual processes behind the scenes, according to Michelle Duval, founder and CEO of Marlee, an Australian maker of collaboration and performance AI for individuals and teams.
Don't Buy Into the GenAI Jargon. "With GenAI, stories are now nearly free to produce at scale, so avoid them," Tmpt.me founder Scott Zimmer told Reworked. Zimmer built his career in corporate management at companies including Capital One, Truist and Verizon.
"Instead, observe: is the AI service leading with promises or demonstrations? Better than demonstrations, don't be afraid to ask: can I try it?" said Zimmer. "It has never been easier to assemble working prototypes of any solution or service, so if you can't try it, you have good reason to be suspicious that the seller is looking for you to fund the build-out, which positions you to take the risk, not them."
Prove it. "Are there demonstrable results? AI must deliver measurable improvements over non-AI solutions," said Duval.
Beware of a Wrapper. Where is the data coming from? If a vendor has no proprietary data, it might just be putting a wrapper, or a prompt, over something like ChatGPT or Claude. Such a wrapper can take as little as a few days to build and, in many cases, is something an enterprise can do on its own.
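To see how thin such a wrapper can be, here is a minimal sketch. The function name, prompt template and `call_model` hook are hypothetical; in a real product, `call_model` would be a client for a hosted LLM API (OpenAI, Anthropic or similar), which is the point: the "product" is little more than a prompt.

```python
def screen_resume(resume_text, call_model):
    """A 'wrapper product' in miniature: one prompt template around a
    general-purpose LLM, with no proprietary data or model of its own.
    `call_model` is whatever hosted-LLM client the builder plugs in."""
    prompt = (
        "You are a recruiting assistant. Rate the following resume "
        "from 1-10 for a software engineering role and explain why.\n\n"
        + resume_text
    )
    return call_model(prompt)

# Stubbed model for demonstration; a real wrapper would swap in an API call.
fake_model = lambda prompt: "7/10: solid Python experience."
print(screen_resume("5 years of Python, some cloud work.", fake_model))
```

A buyer asking "can I try it?" is, in effect, testing whether there is anything behind the template besides someone else's model.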
Get Naked. Ask about data transparency. Does the vendor "explain its training data, biases, and limitations?" offered Duval. Are the values behind the training data transparent? For example, if there is bias, is it acknowledged?
Look Under the Hood. Know the underlying data model. "Ask: how is the underlying model trained," Zimmer advised. "Is the data source unique and proprietary, or generic and public? Is the owner of the data clear, or uncertain (and perhaps controversial someday as we saw with the New York Times suing OpenAI)?"
Feedback Is a Must. Are the AI responses recorded and auditable? "Better than auditable, if the AI delivers an output that doesn't match expectations, can it be corrected?" asked Zimmer. "By whom? You want to know that the people in the feedback loop are people you can trust."
Keep It Fresh. Beware of the echo chamber effect: closed-data vendors use systems that continuously learn from their own outputs. These systems can amplify small initial biases over time, as they keep selecting similar candidates who then become the new training data.
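The amplification mechanism is easy to demonstrate with a toy simulation (entirely illustrative, not any vendor's actual system): a screener starts with a slight learned tilt toward one group of otherwise identical candidates, hires the top scorers, then "retrains" on its own hires, so each round's bias becomes the next round's training data.

```python
def simulate_feedback_loop(rounds=10, pool_per_group=50, hires=10):
    """Toy echo-chamber model. Groups A and B supply identically
    distributed candidate scores, but the screener begins with a small
    learned tilt toward A. Each round it hires the top scorers, then
    retrains by nudging its tilt toward the share of A among its own
    hires, so the initial bias feeds on itself."""
    tilt = 0.55  # learned preference for group A (0.5 would be neutral)
    history = [tilt]
    for _ in range(rounds):
        scores = [i / pool_per_group for i in range(pool_per_group)]
        pool = [("A", s + (tilt - 0.5)) for s in scores] \
             + [("B", s) for s in scores]                 # equal talent
        hired = sorted(pool, key=lambda c: c[1], reverse=True)[:hires]
        share_a = sum(1 for group, _ in hired if group == "A") / hires
        tilt = 0.5 * tilt + 0.5 * share_a  # retrain on own outputs
        history.append(tilt)
    return history

# The tilt climbs every round even though both groups are identical.
print(simulate_feedback_loop())
```

In this sketch, a 55% starting preference grows past 90% within ten rounds, which is why auditors look for fresh, outside training data rather than a model fed its own decisions.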
Editor's Note: Read up on other discussions of AI in HR:
- If We Want AI to Help HR, HR Has to Join the Conversation — Engineers are designing AI systems to address problems that are rooted in the very systems HR understands best.
- Is AI Good or Bad for Recruiters? It's Complicated — AI can be of great support to recruiters, but applicants are now also using the technology to improve their chances. Some recruiters say that’s a problem.
- 2025 Predictions for AI in Work Tech — HR leaders need to understand how AI can unlock new possibilities. Here we explore some of the most promising applications.