Albert, a dedicated undergraduate, was blindsided when his university accused him of using AI to write an essay. His alleged offense? Using common academic phrases like “in contrast” and “in addition to.” With no option but to defend himself before a disciplinary panel, Albert had to prove that his work was, in fact, his own.
His case is not unique. As generative AI use grows, universities are increasingly relying on AI detection tools to curb academic misconduct. However, these systems are not only flawed but are also eroding trust between students and institutions.
According to the Higher Education Policy Institute, 88% of students now use generative AI in their coursework — up from 66% in 2024. And 18% admit to outright cheating, a 13-percentage-point increase since 2023. Meanwhile, AI detection tools, designed to preserve academic integrity, continue to disproportionately flag non-native English speakers and neurodivergent students, raising serious concerns about fairness and bias.
In my study, "Evaluating Methods for Assessing Interpretability of Deep Neural Networks (DNNs)," co-authored with Dr. James Hutson, we explore the unintended consequences of AI-driven decision-making and the ethical imperative for transparency. If AI is to shape education, it must do so in a way that is fair, explainable and aligned with the principles of higher learning.
Why AI Writing Detectors Fall Short
Universities are scrambling to contain AI’s influence on academic integrity, increasingly relying on AI detection tools such as Turnitin’s AI Writing Indicator, ContentDetector.AI or ZeroGPT. However, despite their widespread use, recent research indicates that AI-text classifiers cannot reliably detect generative AI use in schoolwork. These tools frequently misclassify human-written work as AI-generated and can be easily bypassed with minor text modifications. While students increasingly adopt AI as a writing aid, educators remain divided — some treat AI use as an inevitable evolution in learning, while others view it as plagiarism.
The problem extends beyond unreliable detection: AI classifiers disproportionately flag marginalized students, raising serious ethical concerns. False positives can result in emotional distress, academic penalties and long-term consequences, and they fall hardest on non-native English speakers, Black students and neurodivergent learners. In response, some institutions have chosen to disable AI detection tools altogether.
Vanderbilt University, for instance, opted to “disable Turnitin’s AI detection tool for the foreseeable future” due to concerns over false positives and a lack of transparency. While Turnitin claims a 1% false positive rate, the scale of its implementation makes even that figure significant — had the tool been active in 2022, when Vanderbilt submitted 75,000 papers to Turnitin, an estimated 750 students could have been falsely accused of AI plagiarism.
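To make the scale of that "small" error rate concrete, multiply submission volume by the claimed false positive rate:

\[
\underbrace{75{,}000}_{\text{papers submitted}} \times \underbrace{0.01}_{\text{claimed false positive rate}} = 750 \ \text{potential false accusations}
\]

Even a detector that is 99% accurate, applied at the scale of a large university, can generate hundreds of wrongful accusations in a single year.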
The Real-World Impact of AI Plagiarism Detection on Students
Multiple studies have revealed significant ethical concerns surrounding AI detection tools, particularly their tendency to reinforce biases and disproportionately impact marginalized students:
Disproportionate Flagging of Non-Native English Writers
A Stanford University study evaluating seven AI detection tools found that they falsely flagged 61.2% of Test of English as a Foreign Language (TOEFL) essays as AI-generated, raising concerns about fairness in academic evaluation. By comparison, the same detectors were "near perfect" when evaluating essays from US-born students.
Racial Bias in AI Plagiarism Accusations
A Common Sense Media report revealed that Black students are more likely to be accused of AI plagiarism by their teachers. One likely cause is the way AI detectors are trained — relying on homogenized datasets that fail to account for diverse writing styles. Students who use straightforward vocabulary, mechanical structure or less complex sentence variations are at higher risk of false accusations.
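Why would straightforward vocabulary raise a flag at all? Many detectors score text by how statistically predictable it looks to a language model (its "perplexity"), treating highly predictable prose as machine-like. The sketch below is a deliberately crude stand-in for that idea: it uses a toy unigram word-frequency model and invented example sentences, and it is not Turnitin's or any vendor's actual algorithm — only an illustration of the mechanism.

```python
# Toy illustration of why perplexity-style detectors can penalize plain writing.
# Detectors of this kind treat "predictable" text as a sign of AI generation.
# This sketch fakes that idea with a unigram frequency model -- a deliberate
# simplification, not any real product's scoring method.

import math
from collections import Counter

# A tiny "background corpus" standing in for the detector's training data.
CORPUS = (
    "the study shows that the results are important and the method is good "
    "in addition the data shows that the approach is good and the results "
    "are clear in contrast the other method is not good"
).split()

FREQ = Counter(CORPUS)
TOTAL = sum(FREQ.values())

def pseudo_perplexity(text: str) -> float:
    """Exponentiated average negative log-probability per word under the
    unigram model. Lower = more 'predictable' = more likely to be flagged."""
    words = text.lower().split()
    # Laplace smoothing so unseen words don't zero out the probability.
    logp = [math.log((FREQ[w] + 1) / (TOTAL + len(FREQ))) for w in words]
    return math.exp(-sum(logp) / len(logp))

plain = "the results are clear and the method is good"
varied = "our serendipitous findings overturn entrenched disciplinary orthodoxies"

print(f"plain  : {pseudo_perplexity(plain):7.1f}")   # low score -> 'AI-like'
print(f"varied : {pseudo_perplexity(varied):7.1f}")  # high score -> 'human'
```

A student who writes in plain, textbook phrasing lands on the "predictable" side of that score through no fault of their own — which is exactly the failure mode the Stanford and Common Sense Media findings describe.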
Heightened Risk for Neurodivergent Students
AI detection tools disproportionately flag neurodivergent students whose writing may not conform to conventional academic norms. In one well-documented case, a 24-year-old student at Central Methodist University — whose autism spectrum disorder shapes her writing style — was falsely accused of AI plagiarism and initially received a failing grade.
AI Detectors Are Easily Evaded
While falsely flagging genuine student work, AI detection tools also struggle to catch actual AI-generated content. Studies show that minor text modifications can reduce detection accuracy to 17.4%, allowing intentional misuse to go unnoticed while honest students face unwarranted penalties.
The consequences of these flaws extend beyond academic policy. They cultivate an environment of suspicion and fear. Students now hesitate to use legitimate writing aids like Grammarly, wary of being flagged for misconduct. Instead of promoting learning, AI detection risks penalizing students for writing well while reinforcing systemic biases in education. Institutions must critically evaluate whether these tools are genuinely preserving academic integrity or simply creating new inequities under the pretense of enforcement.
The Cost of Replacing Human Judgment With AI
The real danger isn’t just students misusing AI; it’s institutions misusing AI in ways that compromise fairness and due process. Automated detection systems operate on opaque “black box” algorithms whose inner workings are hidden, making it impossible for students to understand or challenge accusations. In Albert’s case, a simple conversation with his professor before an official hearing could have resolved the issue. Instead, students like him are left defending their academic integrity against black-box accusations.
Additionally, AI’s role in academic support is deeply contradictory. Some institutions, like Cambridge University, have embraced AI as a collaborative learning tool, guiding students in concept exploration and time management. Others view it as an existential threat, clamping down on usage while simultaneously using AI to generate course content, administrative responses and even exam questions. This double standard is causing frustration among both students and faculty.
How Universities Can Rebuild Trust in the Age of AI
Higher education must navigate AI’s impact without sacrificing trust, equity or critical thinking. We propose the following:
- AI Literacy Training: Universities must educate students and faculty on ethical AI use, ensuring clear guidelines on what is acceptable rather than relying on faulty detection tools.
- Due Process & Transparency: Accusations of AI misuse should require human review, with students given a fair chance to defend their work before disciplinary actions.
- AI-Positive Policies: Instead of fighting AI, institutions should integrate it into the curriculum, teaching students how to engage with AI responsibly — just as they are taught to cite sources properly.
- Ethical AI Integration in Assignments: Incorporate AI tools into coursework as a learning opportunity, guiding students on responsible usage, critical evaluation and proper citation of AI-generated content. Shift the focus from punitive measures to skill development by adopting alternative grading methods that emphasize comprehension, critical thinking and academic growth over rigid assessment metrics.
Conclusion: AI as a Tool, Not a Threat
Albert eventually cleared his name, but the experience pushed him to transfer to another university. His story highlights a growing crisis in higher education — not just about AI, but about the erosion of student-institution trust. AI should support learning, not undermine it, and universities must strike a balance between maintaining academic integrity and protecting students from flawed detection tools.
The question is not whether AI belongs in education, but whether universities will use it ethically, transparently and fairly.