Artificial intelligence (AI) is reshaping industries, with video-based machine learning emerging as one of its most powerful yet controversial applications. These systems analyze human behavior through recorded footage, offering revolutionary capabilities for education, security and beyond. However, they also raise urgent ethical questions about privacy, consent and the potential for misuse.
As these technologies increasingly find their way into higher education, institutions must examine and proactively address the implications for students, faculty and the institutions themselves, evaluating each tool critically and implementing safeguards before student privacy is put at risk.
AI’s Window Into Human Behavior
Video-based AI systems employ advanced technologies like computer vision, natural language processing and reinforcement learning to analyze and adapt to human behavior. These tools are increasingly used in higher education to enhance student experiences and improve outcomes, but their implementation raises ethical questions.
A recent study on socially situated AI offers a glimpse of the potential for ethical applications of video-based AI. In an eight-month experiment, an AI agent interacted with 236,000 users on a photo-sharing platform, learning to ask context-aware questions that improved its ability to identify visual concepts. The agent’s performance increased by 112% and outperformed traditional methods by 25.6%, demonstrating how AI can enhance learning through interaction rather than surveillance.
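The study's actual system is complex, but the core pattern, learning by asking rather than by watching, can be sketched in a few lines. Everything below (the class, the question templates, the reward logic) is a hypothetical illustration of an epsilon-greedy feedback loop, not the researchers' implementation.

```python
import random
from collections import defaultdict

class InteractiveLearner:
    """Toy epsilon-greedy agent that learns which questions elicit
    useful answers. A hypothetical sketch, not the study's system."""

    def __init__(self, templates, epsilon=0.1):
        self.templates = templates
        self.epsilon = epsilon
        self.value = defaultdict(float)  # running usefulness estimate
        self.count = defaultdict(int)    # times each template was asked

    def choose_question(self):
        # Explore occasionally; otherwise exploit the best-known template.
        if random.random() < self.epsilon:
            return random.choice(self.templates)
        return max(self.templates, key=lambda t: self.value[t])

    def record_feedback(self, template, reward):
        # Incremental mean: value += (reward - value) / n
        self.count[template] += 1
        self.value[template] += (reward - self.value[template]) / self.count[template]

# Simulated usage: pretend location questions get the most informative replies.
agent = InteractiveLearner(["What is this?", "Where was this taken?", "Who took it?"])
for _ in range(1000):
    q = agent.choose_question()
    reward = 1.0 if q == "Where was this taken?" else 0.2
    agent.record_feedback(q, reward + random.uniform(-0.1, 0.1))
print(max(agent.value, key=agent.value.get))  # converges on the useful question
```

The point is the direction of the data flow: the agent improves because users choose to answer it, not because it silently records them.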
In higher education, adaptive AI models inspired by this approach could revolutionize student engagement. Imagine systems capable of observing students’ comprehension during lectures or assignments and offering tailored feedback to support their learning.
However, the same capabilities that enhance educational outcomes can blur ethical lines. For instance, when proctoring tools collect extensive behavioral data, students often have little understanding of how it’s stored, analyzed or shared. This lack of transparency not only compromises privacy but erodes trust in academic institutions.
The Reality of AI and Recorded Data
AI systems require extensive data to train their algorithms, much of which is derived from recorded content. This reliance on surveillance data poses profound ethical and privacy questions, particularly in educational contexts where students may not fully understand the extent of monitoring.
In a 2023 case study, researchers Mark Swartz and Kelly McElroy highlighted critical concerns about the pervasive use of AI-driven surveillance technologies in universities. The study follows the narrative of a hypothetical university student subjected to constant monitoring through tools like Proctorio, an automated proctoring software. Proctorio analyzes facial expressions, eye movements and ambient sounds during online exams to detect potential cheating. While promoted as a safeguard for academic integrity, such invasive practices often leave students feeling scrutinized, powerless and distrusted.
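To see concretely why that opacity matters, consider a deliberately simplified sketch of how such a tool might fuse behavioral signals into a single flag. The signal names, weights and threshold below are invented for illustration; they do not describe Proctorio's actual model.

```python
# Hypothetical fusion of proctoring signals into one "suspicion" score.
# All signal names, weights and the threshold are invented for illustration.

SIGNAL_WEIGHTS = {
    "gaze_off_screen_seconds": 0.5,
    "face_not_detected_events": 2.0,
    "ambient_voice_events": 1.5,
    "window_focus_changes": 1.0,
}
FLAG_THRESHOLD = 10.0  # a cutoff the vendor picks; students never see it

def suspicion_score(signals: dict) -> float:
    """Weighted sum of behavioral signals for one exam session."""
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * value
               for name, value in signals.items())

session = {
    "gaze_off_screen_seconds": 12,  # looked away: thinking, or cheating?
    "face_not_detected_events": 1,  # leaned out of frame once
    "ambient_voice_events": 2,      # a roommate talking nearby
    "window_focus_changes": 0,
}
score = suspicion_score(session)
print(f"score={score:.1f}, flagged={score >= FLAG_THRESHOLD}")
```

Even in this toy version, ordinary behavior (glancing away, a noisy room) accumulates into a flag, and the student has no visibility into the weights or the cutoff that produced it.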
Additionally, universities frequently enforce rigid policies for online engagement and exam proctoring, leaving little room for accommodations. When students advocate for privacy or request considerations based on religious or cultural beliefs, they are often met with an ultimatum: comply, or be denied admission or withdrawn from online courses. This approach creates an inaccessible, intolerant and discriminatory learning environment that undermines the inclusivity and equity education should uphold.
Data collected by these systems can inadvertently reveal sensitive information — such as mental health struggles or socioeconomic status — through behavioral patterns. Furthermore, educational institutions often lack the infrastructure to secure this data, making it vulnerable to breaches and misuse.
Swartz and McElroy criticized "technosaviorism," a reliance on technological solutions that overlooks their ethical ramifications. Universities must evaluate these tools critically and implement safeguards that protect student privacy, ensuring that students are informed participants in decisions about their data. Transparency, accountability and robust privacy protections are essential for maintaining trust and upholding academic integrity.
Deepfakes, Bias and AI: The Ethical Minefield
Deepfakes, profiling and algorithmic bias present serious risks, particularly in higher education, where these technologies are increasingly used.
A 2023 study highlighting these dangers engaged university students in analyzing deepfakes — AI-generated media that appear authentic but are entirely fabricated. While students acknowledged the creative potential of deepfakes, they also identified their risks.
Deepfakes can be weaponized for disinformation, non-consensual explicit content and impersonation, which can undermine trust and irreparably harm reputations. For instance, some AI-generated videos have depicted public figures engaging in controversial or criminal acts, blurring the line between fiction and reality, and eroding trust in authentic information.
In higher education, the risks of deepfakes are compounded by algorithmic bias. AI systems trained on skewed datasets often misidentify or disadvantage individuals from underrepresented groups. For example, facial recognition software used in campus security systems has disproportionately misclassified people of color, reinforcing systemic inequities. When combined with deepfake technology, profiling can amplify misinformation and discrimination, creating a landscape where institutional trust and fairness are severely compromised.
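One concrete way for institutions to surface this kind of disparity is an error-rate audit across demographic groups, in the spirit of an equalized-odds check. The counts below are fabricated solely to demonstrate the calculation.

```python
# Hypothetical audit: compare false-positive rates of a face-matching
# system across two groups. The counts are fabricated for illustration.

results = {
    "group_a": {"false_positives": 12, "true_negatives": 988},
    "group_b": {"false_positives": 47, "true_negatives": 953},
}

def false_positive_rate(fp: int, tn: int) -> float:
    return fp / (fp + tn)

rates = {g: false_positive_rate(r["false_positives"], r["true_negatives"])
         for g, r in results.items()}
for group, rate in rates.items():
    print(f"{group}: FPR = {rate:.3f}")

# A ratio well above 1 means the system errs far more often against one group.
print(f"disparity ratio: {max(rates.values()) / min(rates.values()):.1f}x")
```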
Balancing the Scales of Innovation and Ethics
Video-based AI holds immense potential to enhance education, streamline operations and improve lives. However, these benefits must be weighed against the ethical challenges of privacy violations, misuse and systemic bias.
Addressing these issues requires a multi-faceted approach:
- Transparency and Consent: Educational institutions and technology providers must ensure clear, informed consent processes for data collection and usage. Students should understand exactly how their data is collected, stored and used, fostering trust in these systems (a sketch of what an auditable consent record might look like follows this list).
- Accountability and Regulation: Policymakers need to enforce robust frameworks, such as the European Union’s AI Act, to hold companies accountable for safeguarding sensitive data. Institutions should demand transparency and ethical compliance from technology providers.
- Inclusive and Fair Algorithms: Developers must address biases in training datasets and design algorithms that promote equity and representation. This requires intentional efforts to diversify datasets and ensure that AI tools work fairly across different demographics.
- Critical Education: Universities should incorporate media literacy and AI ethics into curricula, equipping students with the tools to navigate complex ethical landscapes. Educating students on the potential risks and benefits of AI ensures they can participate responsibly.
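Returning to the first item above: consent is far easier to audit when it is captured as structured data rather than a checkbox buried in a terms-of-service page. The record shape below is a hypothetical sketch; every field name is illustrative and drawn from no real system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Hypothetical per-student consent record; all field names are illustrative."""
    student_id: str
    purpose: str                 # e.g., "exam proctoring video analysis"
    data_categories: list[str]   # exactly what is collected
    retention_days: int          # how long it may be kept
    granted_at: datetime
    revoked_at: datetime | None = None  # revocation should always be possible

    @property
    def active(self) -> bool:
        return self.revoked_at is None

record = ConsentRecord(
    student_id="s-0042",
    purpose="exam proctoring video analysis",
    data_categories=["webcam video", "microphone audio", "screen activity"],
    retention_days=30,
    granted_at=datetime.now(timezone.utc),
)
print(record.active)  # True until the student revokes consent
```

A record like this gives both students and institutions something concrete to point to when questions about storage, analysis or sharing arise.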
Educational institutions must proactively address these risks. Embedding discussions of deepfakes, profiling and AI ethics directly into coursework, rather than treating them as an afterthought, can equip students with the critical thinking skills needed to navigate and challenge these technologies responsibly.
By fostering ethical awareness and encouraging informed dialogue, universities can help ensure that these powerful tools are used for innovation rather than harm, aligning education with the broader goal of promoting equity and integrity in an AI-driven world.