On a Tuesday morning in October, Professor Alvarez logs into her institution’s learning management system to prepare for the spring semester. She expects to find an empty course shell and a familiar checklist from the department. Instead, the LMS greets her with a “fully designed” syllabus, a suite of AI-generated assessments and prewritten discussion prompts, none of which she created, approved or even knew were in development. The system cheerfully reports that the content aligns with institutional learning outcomes and is “ready for deployment.” With one click, she could publish the course exactly as it stands. With another, she could spend hours dismantling the automated structure to reclaim control over her own curriculum. The clock is ticking; enrollment opens in three days.
This is not an isolated hypothetical — it reflects a measurable shift in curricular authority. In the study "AI-Driven Biases in Curriculum," empirical analysis revealed that when AI-generated course content is deployed without robust faculty oversight, algorithmic patterns can subtly reframe disciplinary priorities, flattening nuance in favor of standardized, platform-optimized outputs. These findings highlight the stakes: what appears as convenience for institutions can, without governance, become a quiet redirection of academic intent.
In higher education, a silent upheaval is underway. With little warning and even less transparency, artificial intelligence has crept into the core systems that define how students learn and how faculty teach. Learning Management Systems (LMSs), once glorified filing cabinets for syllabi and gradebooks, now promise to design courses, draft feedback, monitor engagement and predict retention. As AI-native platforms begin to dictate the structure and sequencing of learning, the question is who will control that automation — and to what end.
Higher Ed's Familiar Cycle of Disruption
This is not higher education’s first dance with disruptive technology. In the late 1990s and early 2000s, online learning was met with both excitement and skepticism. Faculty pushed back against commercial courseware, worried that pedagogy would be flattened into modular content. Institutions responded not by investing in faculty development, but by hiring instructional designers to “translate” course objectives into platform-friendly templates. The LMS became a gatekeeper, not a partner.
Fast forward to the early 2010s, when MOOCs (Massive Open Online Courses) threatened to decenter the role of faculty altogether. The panic subsided, but the model persisted, standardizing content delivery at scale, often divorced from institutional mission or academic nuance.
The current wave of AI-powered LMS tools, whether native or augmented (like Canvas AI, Blackboard with GPT or D2L with Creator+), marks a return to this logic: scale over substance, automation over academic judgment.
The difference? This time, the machines can write.
Related Article: What It’s Really Like to Teach College in the Age of AI
Research Warns of De-professionalized Teaching
The integration of generative AI into LMS platforms is not merely a technical shift but a pedagogical one. According to a recent study, the infusion of AI into teaching platforms was once experimental; now faculty are increasingly bypassed as systems automate syllabus generation, assessments and learning objectives.
Institutions report deploying AI tools in over 65% of new course shells on select campuses, yet faculty training uptake remains low; one study identified teachers’ lack of preparedness, and their uncertainty about how students would use AI tools, as the biggest challenge around AI in teaching and learning.
In another study exploring the impact of AI on teaching and research, 75% of respondents said their institutions do not offer incentives to encourage faculty to use AI.
The consequence of this gap is structural failure with far-reaching implications. In the rush to embrace efficiency, institutions are delegating curricular authority to algorithms trained on engagement metrics, not academic standards. Recent research found that AI-generated course materials embedded in LMS platforms often reproduce generic outcomes from Bloom’s Taxonomy, missing the nuance required for meaningful learning. Faculty reviewers described these outputs as “mechanistic” and “lacking cognitive depth,” raising alarms about the de-professionalization of curriculum design.
The Lure — and Risk — of Automated Teaching
The allure of AI-powered LMS platforms is easy to understand. In a sector constrained by shrinking budgets, enrollment cliffs and staff shortages, automation feels like salvation. Why spend hours building rubrics or rewriting outcomes when Canvas AI can do it in 60 seconds? Why wait for faculty committees to vet curriculum when Anthology GPT can generate aligned materials at scale?
But in this new order, platforms dictate the pace, and faculty must follow or fall behind. Research on instructor perceptions of AI tools reveals a growing tension: professors report being “nudged” toward AI-generated course templates and assessments calibrated for platform analytics, even when those templates conflict with disciplinary best practices.
Student Perspectives on AI in Education
From the student perspective, the growing reliance on generative AI tools within course delivery is becoming increasingly visible and sometimes concerning. A widely publicized incident at Northeastern University illustrates this tension.
According to NewsNation, a student formally requested a tuition refund after discovering that her professor had used ChatGPT to generate lecture content. She identified numerous irregularities indicative of generative AI use, including misspelled words, unedited AI prompts and distorted images, such as figures with extra limbs. The faculty member later acknowledged using ChatGPT and committed to greater transparency.
Other students interviewed for the same report voiced broader concerns about the erosion of faculty engagement. Several described experiences where instructors used AI-generated grading and feedback in lieu of direct instructional interaction. One student even reported transferring institutions after encountering repeated instances of what was perceived as faculty abdication of academic responsibilities to automated systems. These accounts reflect a growing disillusionment among students who expect meaningful faculty presence and pedagogical integrity in higher education.
Personalized Learning or Impersonal Teaching?
The irony is not lost on students or faculty: while institutions police student use of generative AI to enforce academic integrity, as they are obligated to do, they simultaneously adopt the same tools behind the scenes, often without faculty consent or student awareness.
Studies suggest that overuse of generative AI may impair higher-order thinking. Research from the MIT Media Lab shows that users who trust generative AI tools too readily demonstrate reduced critical thinking skills compared to those who engage in manual problem solving.
In this environment, the promise of “personalized learning” quickly becomes a euphemism for impersonal teaching. Faculty are expected to monitor behavior data, adjust to algorithmic nudges and adopt content created by systems they had no role in configuring. This is not pedagogical enhancement; it is pedagogical displacement.
The Governance Gap in AI Curriculum
The problem is not AI itself. When thoughtfully applied, AI can enhance feedback loops, support struggling students and reduce faculty administrative burden. The core challenge lies in governance: without ethical safeguards, bias is quietly embedded into the curriculum.
Generative AI does not simply process neutral information; it learns from datasets shaped by historical, cultural and commercial priorities — biases that, once embedded in a learning management system, can influence what knowledge is emphasized, whose perspectives are centered and which disciplines are devalued. Left unexamined, these automated choices can redirect academic focus without the awareness or consent of faculty.
Research demonstrates that AI-generated course outlines disproportionately replicate dominant cultural narratives and underrepresent marginalized perspectives, especially in humanities and social sciences. In the study, AI course generators consistently favored high-enrollment, commercially viable topics over niche or critical inquiry areas, reinforcing market-driven priorities rather than mission-driven ones. Such skew not only shapes learning outcomes but can gradually shift an institution’s academic identity.
Embracing Ethics in AI Course Design
Ethical stewardship demands more than technical deployment; it requires transparent standards for authorship, credit and accountability, paired with ongoing audits of both content and curricular impact. Yet few institutions have formal policies guiding how AI-generated curriculum should be reviewed, approved or attributed.
According to a 2024 survey of campus CTOs and CIOs, 54% of CTOs say their institution has not adopted policies or guidelines in any of these critical areas. This governance gap allows AI systems to make curricular decisions that would never pass through formal academic channels if made by humans.
The path forward must integrate bias detection into every stage of AI adoption, establish review protocols that safeguard disciplinary integrity and embed human oversight into platform design. Without these guardrails, higher education risks ceding not only pedagogical agency but also the curricular compass itself — allowing algorithms to steer academic priorities toward whatever the data favors, rather than what the academy values.
Related Article: Higher Education’s AI Dilemma: Powerful Tools, Dangerous Tradeoffs
Choose the Future — Or Have It Chosen for You
The AI-powered LMS is here to stay. But the way institutions integrate, govern and resist its overreach will determine whether it becomes a tool of empowerment or erasure. Faculty must reclaim their role not just as content creators, but as pedagogical architects. Institutions must stop treating curriculum as code to be deployed and start treating it as a practice to be stewarded.
Automation without intention is abdication.
If higher education fails to build guardrails (transparent policies, shared governance mechanisms and requirements for human oversight), it will wake up to a reality where learning is scalable, trackable and utterly lifeless.
The platform is hungry. Do not let it eat the academy.