Editorial

The New Gatekeepers: When AI Agents Decide Who Gets In

By Emily Barnes
From hiring to lending, AI agents are reinforcing exclusion through biased data and opaque decisions. Can fairness catch up to capability?

When artificial intelligence systems begin to “act” in human domains, evaluating candidates for jobs, admissions, loans or parole, they quietly become gatekeepers. Yet this shift is happening behind closed doors, with little scrutiny of whether these systems amplify discrimination under the guise of efficiency.

The research, my own and others’, reveals a pointed irony: we demand fairness in policies but not in the very systems executing them. As AI agents assume power, we must ask: have we truly prepared them for just decision-making?

When AI Gatekeepers Echo Society’s Oldest Prejudices

AI agents learn from historical data, and if that data reflects societal bias, the AI amplifies it. In recent research led by colleagues at the University of Washington, state-of-the-art large language models like GPT-4, Claude and open-source variants were tested on over 500 anonymized resumes. The result? Names signaling whiteness or male gender were nearly always favored (85% and 89%, respectively), while Black male names were never preferred over white male ones. Despite claims of “fair AI,” the findings suggest these agents function less as neutral evaluators than as digital reflexes of historical prejudice.
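
The mechanics of such an audit are simple enough to sketch. Below is a minimal, hypothetical Python illustration of the paired-resume design described above: identical resumes that differ only in the attached name, with a tally of how often each name group comes out on top. The rank_resumes function, the names and the group labels are placeholders for illustration, not the University of Washington team's actual code or models.

```python
# A rough sketch of a paired-resume audit: identical resumes, only the name
# varies, and we tally how often each name group is ranked first.
from collections import Counter
from itertools import permutations

RESUME = "10 years of software engineering experience, B.S. in computer science"

# Hypothetical names standing in for the demographic signals under test.
NAME_GROUPS = {
    "Emily Walsh": "white_female",
    "Greg Baker": "white_male",
    "Lakisha Robinson": "black_female",
    "Jamal Washington": "black_male",
}

def rank_resumes(candidates):
    # Placeholder for a call to the model being audited (e.g., an LLM asked
    # to pick the stronger candidate). Here it simply returns the input order.
    return list(candidates)

wins = Counter()
trials = 0
for name_a, name_b in permutations(NAME_GROUPS, 2):
    pair = [(name_a, RESUME), (name_b, RESUME)]   # identical resumes, different names
    top_name, _ = rank_resumes(pair)[0]
    wins[NAME_GROUPS[top_name]] += 1
    trials += 1

for group, count in wins.items():
    print(f"{group}: preferred in {count}/{trials} head-to-head comparisons")
```

With a real model plugged into rank_resumes, any skew in these counts across name groups is exactly the signal such an audit looks for, since the resumes themselves never change.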

This pattern of exclusion is statistically demonstrable. In their 2024 empirical working paper "A Discrimination Report Card," economists Patrick Kline, Evan K. Rose and Christopher R. Walters analyzed over 83,000 hiring decisions made through a large-scale field experiment involving fictitious job applicants across major US cities.

They found that Black applicants faced substantial and persistent discrimination in callbacks and hiring, particularly in sectors such as retail and hospitality. These disparities were consistent across firm types and geographic regions. This dataset now informs the ways AI hiring models are “trained,” meaning the exclusion is not only documented but algorithmically inherited.

Related Article: How Does Bias Get Into AI, Anyway?

Loan Bias Reveals AI’s Deeper Systemic Flaws

The issue extends beyond hiring. A study conducted in collaboration with researchers from Lehigh University deployed OpenAI’s GPT-3.5 Turbo (2023 and 2024 versions) and GPT-4, as well as Anthropic’s Claude 3 Sonnet and Opus and Meta’s Llama 3 8B and 70B, on realistic loan applications.

Despite identical financial profiles, Black applicants were denied significantly more often or offered less favorable loan terms. The claim that AI is neutral crumbles under inspection: agents with decision autonomy merely harden the biases already embedded in training data.

Together, these findings present us with a profound and disquieting reality: from job applications to admissions and lending, AI agents do not begin with a blank slate. They begin with an archive of exclusion, codified into pattern-recognition logic. And when that logic becomes a filter for future opportunity, discrimination is no longer implicit; it is automated.

Recursive Bias: When AI Learns from Its Own Prejudices

One of the most insidious features of AI agents is their propensity to entrench and amplify structural inequalities through what researchers describe as recursive feedback loops. A recent study from the University of Zurich and ETH Zurich provides a rigorous taxonomy of these loops, demonstrating how outcomes from AI-driven decision systems feed directly back into the systems themselves, reinforcing bias over time. Their classification, grounded in dynamical systems theory, shows that even well-intentioned models can deteriorate into exclusion machines unless these feedback loops are addressed at design time. 

This dynamic plays out in financial systems across the globe. A peer-reviewed 2023 study examined bias in credit-scoring algorithms and concluded that their reliance on historical lending data enables them to reproduce and intensify previous patterns of exclusion: areas and populations historically underserved remain underrepresented in the data, leading to poorer outcomes for those same groups. In effect, the algorithm determines someone is “high risk” because fewer people like them were approved historically and then perpetuates that conclusion into the future.

The Silent Scaling of Structural Bias

This recursive injustice becomes particularly potent when AI agents continuously “learn” from their own outputs. At each decision cycle, rejected groups are excluded from training data updates, reinforcing and even escalating systemic prejudice. Without intervention, what begins as an attempt at automation becomes a self-fulfilling prophecy: the algorithmic system enforces exclusion in the name of learned efficiency.
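
To make the mechanism concrete, here is a small, hypothetical Python simulation of that loop under deliberately simplified assumptions: two groups with identical qualification distributions, a scorer that leans partly on each group's historical approval rate, and a record that every new decision feeds straight back into. The weights, threshold and group labels are invented for illustration; the point is only that an initial disparity persists even though underlying merit is the same.

```python
# A minimal, hypothetical sketch of a recursive feedback loop: the scorer
# consults historical approval rates, and every decision it makes updates
# that history for the next cycle. All numbers here are illustrative.
import random

random.seed(42)

approvals = {"A": 80, "B": 20}   # group B starts out under-approved
seen = {"A": 100, "B": 100}

def approval_rate(group):
    return approvals[group] / seen[group]

def score(qualification, group):
    # Blend of individual merit and the group's historical approval rate,
    # standing in for proxy features a model learns from biased data.
    return 0.7 * qualification + 0.3 * approval_rate(group)

THRESHOLD = 0.60

for cycle in range(5):
    for _ in range(200):
        group = random.choice(["A", "B"])
        qualification = random.random()   # identical merit distributions by design
        approved = score(qualification, group) >= THRESHOLD
        seen[group] += 1
        if approved:
            approvals[group] += 1          # the outcome feeds the next cycle's rates
    print(f"cycle {cycle}: A rate = {approval_rate('A'):.2f}, "
          f"B rate = {approval_rate('B'):.2f}")
```

Run it for as many cycles as you like: the gap never closes on its own, because the disparity lives not in the applicants but in the history the scorer keeps consulting.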

In short, autonomous AI agents do not merely reflect historical bias. They harden it by embedding discrimination into algorithms, data pipelines and institutional workflows. What begins as automated efficiency metastasizes into epistemic injustice, perpetuating exclusion beneath a veneer of objectivity. And yet, as these systems cascade across sectors with little public scrutiny and even less regulatory friction, the industry marches on with its favorite refrain: optimization. Because evidently, codifying injustice is a small price to pay for innovation.

Related Article: Colonialism in Code: Why AI Models Speak the Language of Empire

No Human to Blame: Who’s Accountable for AI Decisions?

Unlike human decision-makers, whose judgments can be questioned, interrogated and appealed, AI agents often operate inside opaque pipelines where accountability dissipates by design. In these systems, multiple models work in sequence, data flows automatically and decisions are rendered without meaningful documentation or audit trails.

Whether embedded in hiring systems, college admissions, financial risk assessment or parole review, each layer of automation feeds the next, which creates a recursive network of machine logic that resists transparency. Responsibility becomes untraceable, not unlike corporations that buy each other out, purchase services from their subsidiaries and rebrand profit as philanthropy. In this murky architecture of engineered amnesia, injustice becomes a feature, not a flaw. But hey, if it scales, what’s a little due process between friends?

The notion that AI is objective obscures just how opaque it is. A 2021 meta-analysis in the Journal of Behavioral Decision Making confirmed that both professionals and lay users tend to trust AI recommendations, even when they are unaware of the system's inner workings. When companies deploy emotion-scoring tools like HireVue’s video interview AI, which has faced lawsuits from applicants with impairments and from applicants of Indigenous backgrounds, applicants have no way to trace decisions back to human oversight.

Who is accountable for these automated gatekeepers? The vendor, its customer or the candidate shut out of opportunity?

Big Tech’s Push for Agentic AI — Without Guardrails

This is not just a risk; it is a priority. OpenAI and Anthropic are accelerating agentic deployments with minimal guardrails. In April 2025, OpenAI CEO Sam Altman championed agent autonomy as a “multimodal leap forward” and a “critical leap for national security” while speaking at the Vanderbilt Summit on Modern Conflict and Emerging Threats, raising alarms among ethicists who pointed out that bias remained insufficiently addressed.

Meanwhile, DeepMind’s 2024 technical report introduced an “agentic decision framework,” a complex architecture designed to navigate human dilemmas autonomously. Yet the report conspicuously lacks any fairness constraints, suggesting a troubling blind spot even among researchers who are otherwise at the forefront of alignment conversations. 

In academic environments, too, the promise of fairness is falling short. A 2024 study from MIT evaluating AI tutoring systems found that even models retrained on diverse datasets disproportionately favored students from high-resource backgrounds, revealing that superficial recalibration fails where structural inequity persists. 

The High Price of Letting AI Evolve Unchecked

It is increasingly clear: bias mitigation is being left behind in the rush toward autonomy. Agentic AI is being optimized for capability before conscience, positioned as a national and educational imperative, yet falling dangerously short of equitable design.

If action is not taken now, AI agents will solidify into infrastructures of power that operate with autonomy but without accountability. We will have opaque decision-makers embedded in high-stakes systems, immune to scrutiny and insulated from redress. These technologies are fast becoming arbiters of trust, gatekeepers of opportunity and engineers of justice, not through ethical design but through unchecked scale.

The cost of inaction is structural. And if these systems continue to evolve without deliberate intervention, we will have built machines that not only reflect our worst biases but encode them into the machinery of everyday life. Quietly, efficiently and irreversibly.

Related Article: MIT Researchers Develop New Method to Reduce Bias in AI Models

Democracy Needs Gatekeepers It Can Question

We are handing over key aspects of our lives to AI agents. There is no returning to the status quo. These systems are gatekeepers now. And gatekeepers earn legitimacy only when they open their logs to scrutiny, accept responsibility and guard against bias.

Without structural fairness, AI isn’t neutral; it is exclusion by default. Our future depends on the choices these agents make. If we do not embed equity into their code, we risk trading progress for prejudice and trust for automation.

About the Author
Emily Barnes

Dr. Emily Barnes is a leader and researcher with over 15 years in higher education, focused on using technology, AI and ML to innovate education and support women in STEM and leadership; she shares her expertise by teaching and developing related curricula. Her academic research and operational strategies are informed by her educational background: a Ph.D. in artificial intelligence from Capitol Technology University, an Ed.D. in higher education administration from Maryville University, an M.L.I.S. from Indiana University Indianapolis and a B.A. in humanities and philosophy from Indiana University.

Main image: Trevor Cook on Adobe Stock