Picture this: You’re a job candidate staring down two interview invites. One is from a recruiter you barely know who may be juggling dozens of reqs. The other is from an AI system promising speed, consistency and fair judgment.
Which one would you pick? Which one do you think most people would pick?
Probably the human, right? For all the inadequacies and imperfections of our fellow people, human connection seems like a good bet and a surer way of getting an interview.
New research shows that this once-safe assumption is wrong.
Applicants Say: Give Me AI
A new paper by Brian Jabarian of the University of Chicago’s Booth School of Business and Luca Henkel of Erasmus University Rotterdam is one of the first deep dives into candidate behavior as it relates to AI in the hiring process, and the results were a surprise.
Let’s start with a fact that should make every talent leader sit up: when over 70,000 applicants were offered a choice between a recruiter and an AI interviewer, 78% chose the AI. That single decision upends comfortable myths about “high-touch” recruiting and whether people would reject a robot screening process.
Here’s the punchline: the AI-led route didn’t just feel different. It was different. And not in a bad way, either.
- Offer rates rose 12% (from 8.70% with humans to 9.73% with AI).
- Job starts rose 18% (from 5.63% to 6.62%).
- 30‑day retention rose 17% (from 4.62% to 5.42%).
For a function that spends millions to squeeze out single‑digit improvements, those are big swings.
The Perception Flip
That doesn’t make AI interviewing perfect. Candidates reported that the AI conversation felt less natural.
That’s because it is. Yet overall satisfaction and offer acceptance stayed level with human‑led interviews. The tradeoff made sense to them: procedural fairness over conversational charm. On the positive side, reports of gender‑based discrimination fell by nearly half, from 5.98% under human interviewers to 3.30% with AI.
Recruiters shared the skepticism most people probably feel. Before seeing the results, 36% predicted lower offer rates from AI‑led interviews and 48% predicted worse retention. The data told another story.
AI‑led interviews also produced more comprehensive conversations (42% of the time vs. 39% for humans) and covered more topics per interview (6.78 vs. 5.53). The transcripts showed more high‑signal exchanges and fewer low‑signal cues (fewer backchannels like “uh‑huh,” fewer applicant‑posed detours). Cleaner data in, sharper decisions out.
Recruiters noticed the quality difference in subtle ways, too. They scored AI‑interviewed candidates 2.01 out of 3 on average vs. 1.90 when they ran the interview themselves. Their written comments skewed more positive (31% vs. 24%) and less negative (28% vs. 38%). And when it was time to decide, they re-weighted evidence: interview scores mattered less, while standardized language scores mattered more.
There are two caveats worth noting. The first is that 7% of AI interviews hit technical snags, a risk with any tech-enabled hiring. The second is that 5% of applicants hung up because they refused to speak to an AI. That’s surprisingly low.
An Unexpected Bottleneck
AI is available at any time, and that really matters. The median time to interview fell from 0.51 days (human) to 0.32 days (AI), and candidates moved faster through the first stages of the interview.
But something else surprising happened as well: The time from interview to offer ballooned from 2.58 days (human) to 6.69 days (AI). Why?
Researchers found that recruiters had to review transcripts they didn’t personally conduct. The bottleneck migrated instead of vanishing. The net result: successful candidates reached start dates slower with AI (22 days) than with humans (19 days). In the end, the speed didn’t materialize in the way one might hope.
Leaders love a glossy demo and the promise of automating something that they are currently paying people to do. They also forget queueing theory. If you automate step one and starve step two of capacity, step two becomes your new chokepoint.
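The bottleneck migration above can be illustrated with a toy serial-queue model. This is a minimal sketch, not the study’s methodology: the function names and all the rates (interviews and reviews per day) are hypothetical, chosen only to show how speeding up stage one while stage two’s capacity stays fixed moves the queue rather than eliminating it.

```python
def stage_times(n, rate1, rate2):
    """Toy serial-queue model: n candidates all arrive at day 0.
    Stage 1 (interviews) releases candidate k at day k/rate1;
    stage 2 (review/decision) serves one at a time at rate2 per day."""
    exit1 = [k / rate1 for k in range(1, n + 1)]
    exit2, t = [], 0.0
    for e in exit1:
        # reviewer starts when free AND a finished interview is waiting
        t = max(t, e) + 1 / rate2
        exit2.append(t)
    return exit1, exit2

def avg_wait(exit1, exit2):
    """Mean days between finishing stage 1 and clearing stage 2."""
    return sum(b - a for a, b in zip(exit1, exit2)) / len(exit1)

# Hypothetical rates: human-led interviews are slow (20/day), but the
# decision happens in the room, so sign-off is nearly instant (80/day).
# AI interviews scale to 200/day, but recruiters must now read
# transcripts they didn't conduct, at only 20/day.
h1, h2 = stage_times(100, rate1=20, rate2=80)
a1, a2 = stage_times(100, rate1=200, rate2=20)

print(f"avg interview-to-offer wait, human: {avg_wait(h1, h2):.2f} days")
print(f"avg interview-to-offer wait, AI:    {avg_wait(a1, a2):.2f} days")
```

In this sketch the AI pipeline clears the whole batch at roughly the same time, because the slowest stage sets the pace either way; but the per-candidate wait between interview and decision balloons as the queue re-forms in front of the reviewers, which is the same shape as the study’s jump from 2.58 to 6.69 days.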
This isn’t a simple game of replacing recruiters with AI. Instead, the research suggests you can get the best of both worlds: AI-driven screening and early touches that improve results, plus more people devoted to later-stage recruiting, where the decision is made.
What Smart Organizations Should Do
The best move for organizations isn’t to pick a side, but to rewire how work gets done. Early interviews could be AI by default; that’s where bias creeps in and where structure and scale matter most.
The evidence is clear: AI collects the signal you need and strips away the noise.
But automation creates a new bottleneck. Companies that win won’t let transcripts sit in a review queue. They’ll allocate reviewer capacity, set turnaround SLAs, and arm people with lightweight rubric tools so quality doesn’t collapse under speed.
That also means retraining the recruiter role. Gatekeeping on first screens is a dead-end identity. The real work lives downstream: being data-literate decision partners, coaching candidates, shaping offers, and delivering the high-touch moments that decide whether someone accepts and sticks.
Transparency with candidates matters just as much. Tell them why AI is there (fairness, consistency, speed, and scale) and be explicit that humans still make the final call. Share what happens next and how to prepare. Respect breeds trust, and trust carries into tenure.
None of this works if you don’t instrument the experience.
Track NPS, perceived fairness, and dropout reasons stage by stage. Look closely at the people who refuse AI and where tech fails. Fix what you can, own what you can’t.
Candidates may call AI “less natural.” They still preferred it, finished interviews sooner, accepted offers at the same rate, and stayed longer. Fairness, speed, and clarity outrank chit-chat in earlier stages. Warmth isn’t useless, but it is situational. It seems to be most valuable in later stages: manager screens, team panels, final decisions, pre-start touches.
AI Interviews: Temporary Shift or Something More Permanent?
People didn’t choose AI because they adore robots. They chose AI because it felt like a fair shot. Surprisingly, the outcomes backed that feeling up.
But it might not always be that way. Consumer tastes can be fickle and contextual. It’s hard to imagine that an AI interview would’ve been accepted as much five years ago. It also might not work as well when candidates hold the balance of power in a competitive hiring environment.
This debate isn’t just about hiring mechanics, though. It’s about trust in institutions, fairness in opportunity, and the uncomfortable reality that many candidates feel burned by human gatekeepers. When a machine feels like a safer choice than a recruiter, we’ve got some work to do.
The question for leaders isn’t whether AI interviews are here to stay. It’s whether your recruiting function will evolve fast enough to keep up with the changes that come with them.
Editor's Note: Read more thoughts on the use of AI in recruiting:
- Tips to Tell the AI Snake Oil From the Real Deal — The latest wave of AI can do things we couldn't have imagined even five years ago. But not all AI is the same — some tips to tell the fakes from the real deal.
- Demystifying the AI Black Box in Talent Management — A recent study found that an AI judge deciding someone’s fate in a courtroom would provide a harsher outcome than a human judge 30% of the time.
- Workplace Politics vs. AI — No matter how sophisticated your AI tools are, they can't overcome the toxic power struggles, favoritism and biased decision-making that permeate organizations.