Editorial

What Every President & Provost Should Demand Before Buying 'AI-Enabled' Anything

By Emily Barnes
Before approving AI tools, college leaders must ask: what records do these systems create — and can we govern them?

Presidents and provosts are increasingly asked to approve AI-enabled systems that promise improved retention, personalized learning and operational efficiency. These proposals often arrive with urgency: peer institutions are adopting similar tools, vendors claim demonstrable gains and delay is framed as institutional risk.

Empirical research suggests the opposite. When AI-driven analytics and predictive systems are adopted without robust governance, institutions create new forms of academic judgment, new classes of student records and new inequities, often without transparency or recourse for students.

AI in higher education is not simply a tool. It is decision-shaping infrastructure, and under US law, infrastructure that produces student-identifiable records is governed by FERPA.


Start With Records, Not Features

The most important question senior leaders can ask is not "What does this system do?" but "What records does this system create?"

FERPA governs any record directly related to a student and maintained by an educational institution or its agents. Empirical studies of learning analytics demonstrate that modern AI systems routinely generate derived records — risk scores, engagement classifications, persistence predictions and intervention recommendations — that go beyond descriptive reporting and materially influence academic outcomes.

Once these outputs are retained or retrievable, they meet FERPA’s definition of education records regardless of whether they resemble grades or transcripts.

Research further shows that these inferred records often become more influential than the underlying raw data, particularly in advising and retention contexts.
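The records question above can be reduced to a screening rule that a procurement or compliance team might apply to each output a vendor system produces. The sketch below is illustrative only: the field names and the two-part test are a simplification of FERPA's definition (directly related to a student, and maintained by the institution or its agent), not legal advice.

```python
from dataclasses import dataclass

@dataclass
class AIOutput:
    """One output a vendor system produces. Illustrative fields only."""
    name: str
    student_identifiable: bool  # directly related to an identifiable student?
    retained: bool              # stored or retrievable after the session ends?

def is_likely_education_record(out: AIOutput) -> bool:
    """Screening heuristic: an output that is student-identifiable AND
    retained or retrievable likely meets FERPA's education-record test,
    regardless of whether it resembles a grade or transcript."""
    return out.student_identifiable and out.retained

outputs = [
    AIOutput("risk_score", student_identifiable=True, retained=True),
    AIOutput("anonymous_course_summary", student_identifiable=False, retained=True),
    AIOutput("one_off_chat_reply", student_identifiable=True, retained=False),
]

flagged = [o.name for o in outputs if is_likely_education_record(o)]
print(flagged)  # only the retained, student-identifiable output is flagged
```

Note that under this rule a derived output such as a risk score is flagged even though no grade or transcript is involved, which mirrors the point above: it is retention and identifiability, not resemblance to traditional records, that triggers FERPA.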

Demand Clarity on Inference, Not Just Accuracy

AI-enabled systems increasingly rely on predictive modeling to infer student success, risk and likelihood of completion. Large-scale empirical reviews confirm that such models directly shape advisor and instructor behavior, often acting as de facto decision guides.

The concern is not prediction per se, but delegated academic judgment without governance.

Recent studies show that predictive systems embed normative assumptions about what constitutes “success” and “risk,” frequently reproducing structural inequities related to race, socioeconomic status and first-generation status.

Presidents and provosts should require clear answers to:

  • What inferences does the system generate?
  • Who approved the model logic and training data?
  • Can students inspect, contest or correct those inferences?

Absent such mechanisms, institutions are not augmenting judgment. They are outsourcing it.

Related Article: When AI Is FERPA-Compliant — and When It Is Not

Treat Vendor Access as a Legal Delegation

FERPA permits disclosure of education records to vendors only under narrow conditions, most commonly the school-official exception, which requires direct institutional control, a legitimate educational interest and limits on redisclosure.

Empirical governance research demonstrates that institutions frequently lose effective control once analytics vendors retain data for benchmarking, continuous model tuning or cross-institutional comparison. These practices transform vendors into quasi-registrars without registrar-level accountability.

The US Department of Education’s Student Privacy Policy Office has emphasized that written agreements must specify purpose limitation, access controls, retention periods and destruction requirements. Empirical studies indicate that most learning-analytics contracts fail to meet these standards in practice.

Do Not Accept 'De-Identified' Without Contextual Proof

De-identification is frequently offered as a compliance safeguard. Under FERPA, however, data remains personally identifiable if a student’s identity can be reasonably inferred from context.

Recent empirical work demonstrates that in institutional settings (e.g., small programs, specialized majors, cohort-based dashboards), re-identification is often trivial even when direct identifiers are removed.

Dashboards that allow filtering, drill-down or role-based segmentation routinely defeat de-identification claims. FERPA evaluates identifiability in practice, not in theory.
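The small-cohort failure mode described above is easy to demonstrate. In the sketch below, direct identifiers have been stripped from dashboard rows, yet a routine drill-down filter isolates a single student; the data, field names and `drill_down` helper are all invented for illustration.

```python
# "De-identified" dashboard rows: names and IDs removed, but quasi-
# identifiers (major, cohort year, first-generation status) remain.
rows = [
    {"major": "Classics", "year": 2025, "first_gen": True,  "risk": "high"},
    {"major": "Classics", "year": 2025, "first_gen": False, "risk": "low"},
    {"major": "Biology",  "year": 2025, "first_gen": True,  "risk": "low"},
]

def drill_down(data, **filters):
    """Mimics dashboard filtering / role-based segmentation."""
    return [r for r in data if all(r[k] == v for k, v in filters.items())]

# In a small program, one filter combination isolates a single row.
# Anyone who knows the program's lone first-generation Classics major
# can now read that student's risk label off the "anonymous" dashboard.
match = drill_down(rows, major="Classics", first_gen=True)
print(len(match), match[0]["risk"])
```

This is why identifiability must be evaluated against the actual filter and drill-down capabilities of the deployed dashboard, not against the exported schema.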

Accreditation Was Not Designed for Algorithmic Judgment

Accreditation frameworks presume human oversight, documented academic decision-making and reviewable processes. Empirical research confirms that AI systems increasingly influence assessment and progression decisions before institutions examine their effects. 

When AI-generated judgments cannot be documented, explained or reviewed by faculty, institutions risk misalignment with accreditation expectations, regardless of performance metrics.

What Responsible Leadership Looks Like

High-quality research converges on a clear conclusion: effective AI governance in higher education is not a technical problem; it is a leadership responsibility.

Presidents and provosts should insist on:

  • Explicit classification of AI outputs as education records where applicable
  • Faculty-governed oversight of inference logic
  • Enforceable vendor controls on retention and secondary use
  • Mechanisms for student inspection and challenge
  • Periodic audits as systems evolve

These requirements slow adoption, but they prevent institutional harm.

Related Article: AI Student Success Tools Raise Fresh FERPA Questions


The Line Leaders Must Hold

AI will continue to advance faster than policy. That reality does not absolve institutions of responsibility. It heightens it.

Empirical evidence is clear: analytics systems reshape academic decision-making long before their effects are fully understood. FERPA exists precisely to ensure that student rights are preserved during such transitions.

The question leaders must ask before approving any AI-enabled system is not "Will this improve outcomes?" but "Can we govern the records and judgments this system creates?"

If the answer is no, the institution is not buying innovation. It is buying unmanaged risk.

AI Capability × FERPA Exposure Risk Matrix

How to read the matrix: 

  • FERPA Exposure reflects whether the capability creates, infers or redistributes education records
  • Institution Risk reflects governance, accreditation and litigation implications
  • Required Action indicates what leadership must do before approval


| AI Capability | What the System Does | FERPA Exposure | Institutional Risk | Why FERPA Is Triggered | Required Governance Action |
| --- | --- | --- | --- | --- | --- |
| Stateless AI Assistance | One-time explanations; no memory | Low | Minimal | No retained student-identifiable record | Approve with no-retention confirmation |
| AI Drafting Support | Drafts feedback or content | Low–Moderate | Manageable | Becomes a record once stored | Require LMS-only storage + faculty review |
| AI Summarization of Student Work | Condenses assignments | Moderate | Moderate | Derived academic content retained | Treat as education record |
| Persistent AI Tutor | Remembers student history | High | Significant | Continuous logging + inference | Pause until FERPA framework exists |
| Predictive Analytics / Risk Scoring | Predicts success or failure | High | Severe | Inferred judgments = records | Provost + counsel sign-off required |
| Student Success Dashboards | Aggregates and redistributes data | High | Severe | Redisclosure across roles | Role-based access + audit logging |
| Behavioral / Sentiment Analysis | Infers engagement or attitude | High | Severe | Behavioral inference tied to student | Faculty governance + appeal process |
| Academic Integrity Detection | Flags cheating or AI misuse | High | Severe | Disciplinary records | Due-process + formal policy required |
| Automated Remediation Pathways | Assigns learning paths | High | Severe | Alters academic trajectory | Accreditor-ready documentation |
| AI-Assisted Grading | Generates grades or scores | High | Severe | Core education records | Faculty approval + override |
| Vendor Analytics Dashboards | Vendor hosts insights | High | Severe | Vendor becomes record custodian | Contract renegotiation or reject |
| Model Training on Student Data | Improves AI using student data | Prohibited | Critical | Secondary use violates FERPA | Do not approve |


About the Author
Emily Barnes

Dr. Emily Barnes is a leader and researcher with over 15 years in higher education, focused on using technology, AI and ML to innovate education and to support women in STEM and leadership; she also teaches and develops curricula in these areas. Her academic research and operational strategies are informed by her educational background: a Ph.D. in artificial intelligence from Capitol Technology University, an Ed.D. in higher education administration from Maryville University, an M.L.I.S. from Indiana University Indianapolis and a B.A. in humanities and philosophy from Indiana University.

Main image: Brian Jackson | Adobe Stock