Editorial

Practicing AI Ethics in Education

By Nick Jackson
How can educators approach AI-based learning in ethical ways?

As we continue to explore the transformative potential of generative AI in education, it's crucial to address the ethical considerations that accompany this powerful technology.

Much like any tool, AI can be used for good or ill, with many gray areas in between. There is little doubt among those who understand the disruptive influence of AI that the choices we make today could shape the educational landscape for years to come.

But what exactly do we mean by ethics in the context of generative AI?

As with any other consideration of ethics, this involves both universal principles and personal perspectives. Objective ethics suggests that certain moral rules, like fairness and harm reduction, apply to everyone and can guide AI development. Subjective ethics, however, emphasizes that ethical decisions can vary based on individual or institutional beliefs and cultural contexts. Combining these views helps us evaluate AI by considering both overarching ethical standards and the specific contexts in which the technology is used. This balanced approach ensures we address both the universal and diverse impacts of AI.

Hence, it is vital that those in education considering using AI thoughtfully navigate the potential benefits and risks of this technology, ensuring its use aligns with core values of fairness, equity and respect. Yet, these issues need further unpacking to appreciate the effects on students.

Real-World AI Ethics: Preparing Students for an Evolving Landscape

It is possible to give a simple explanation of generative AI: large models trained on vast amounts of existing data that generate text, images or other media based on the patterns they have learned. However, that does not tell the full story of how the technology actually functions. Here's why this deeper knowledge matters:

  • Beyond the magic: Generative AI might seem like magic, but with a core understanding, teachers can help students move past marveling at the results and toward critical analysis of the output.
  • Impurities in the training data: Knowing how the models are trained and where their data comes from reveals inherent limitations and potential for bias, and leads educators to guide students to treat output with skepticism.
  • The hallucinations and vagaries: With an awareness of AI's behavioral tendencies, the significance of clear prompts and techniques for keeping it on topic, its strengths can be leveraged while its weaknesses are mitigated.
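To make the first two points concrete for students, a deliberately tiny sketch can help. The following toy bigram "language model" (a classroom illustration, not how production LLMs work; the two-sentence corpus is invented) generates text by sampling which word followed which in its training data, so any skew in that data shows up directly in its output:

```python
import random
from collections import defaultdict

def train(corpus):
    """Record, for each word, the words that follow it in the corpus."""
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=5):
    """Generate text by repeatedly sampling a recorded follower word."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# The training data is deliberately skewed: "nurse" is followed by
# "she" and "doctor" by "he", so generations reproduce that skew.
corpus = "the nurse said she was tired . the doctor said he was busy ."
model = train(corpus)
print(generate(model, "nurse", length=2))
```

Running the generator a few times in class shows that the model can only echo the statistics of what it was fed, which opens a natural discussion of why biased training data produces biased output.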

Generative AI in the Classroom: Potential for Teaching

Generative AI isn't just an educational tool; far from it. In various guises, this technology is reshaping industries, processes and workflows in countries throughout the world. Arguably, this leads to a redefinition of the skills students will need to thrive in contemporary workforces. Yet such issues raise critical ethical questions for educators:

  • Future-proofing skills: With AI automating routine tasks, how can education systems ensure students develop invaluable critical thinking, creativity and complex problem-solving skills?
  • Bridging the digital divide: As AI becomes more pervasive and reliance on the technology increases, the potential for unequal access to it needs to be addressed. How can we ensure that all students, regardless of their socioeconomic background or geographical location, have the opportunity to learn about and be assisted by AI?
  • Data privacy: The development and use of generative AI technology is expensive, and it relies on vast amounts of data. Given these factors, major companies with commercial motives often sit behind the large language models (LLMs) on which many of the tools are based. Thus, issues of data sharing and privacy are a concern. How can we protect the elements of student data that need to be kept private?
  • Algorithmic bias: The vast data sets used to train generative AI models can inadvertently contain societal biases. These biases can then be reflected and even amplified in output, leading to discriminatory or unfair results. How can education recognize where and how these biases occur and educate students on them?
  • Truth and integrity in the digital age: AI can generate convincing misinformation, hallucinate and misunderstand nuance with ease. It is vital to appreciate that the tools have neither a conscience nor social cues. What do education systems need in order to teach students how to evaluate sources critically, discern fact from fiction and see value in the responsible use of AI?
  • Respecting creative works: Questions and debates continue over the ethics of using existing materials in the data sets that train LLMs. Similarly, issues of copyright and originality arise with students' AI-generated work. How can educators ensure that their own and their students' use of AI respects intellectual property rights and encourages original thought and creation?

Student Voice Matters: Empowering Ethical Decision Makers

One of the biggest mistakes that can be made, and one which has been made many times before, is to see students as passive bystanders in decision-making around AI. The reality is that this technology pervades many aspects of their lives. Students encounter generative AI tools on diverse platforms, such as social media, which may influence how they view the technology and the ways they use it. Students should be given a voice in shaping how this technology is used within their educational journey and beyond. This can be done through:

  • Open and honest dialogue: Fostering open discussions about AI ethics where educators actively create safe spaces for students to share their concerns, hopes and ideas. This dialogue can lead to a more nuanced understanding of the technology and its potential impact for both educators and students. It also promotes a culture of co-learning and co-development.
  • Student-led initiatives: Empowering students to create their own guidelines and provide examples to educators and their peers on their use of AI fosters a sense of ownership and agency. When students are involved in shaping ethical standards, they're more likely to adhere to them.
  • Fostering ethical AI literacy through active engagement: Involving students in hands-on activities like analyzing real-world case studies, participating in ethics simulations, conducting research, using AI in creative problem-solving activities and expressing their views through creative mediums can be used to foster ethical AI literacy. These experiences empower students to engage critically with AI ethics, equipping them to better discern bias, evaluate AI-generated content and advocate for fair and responsible use of the technology. The aim is to develop students' awareness and understanding of the need to become responsible creators and users of this transformative technology.
  • Understanding data and surveillance: AI systems collect vast amounts of student data. Educators need to involve students in discussions about data and teach how personal data is used, why it's important and ways to protect privacy.

In Conclusion

As those involved navigate the integration of generative AI in education, it is imperative to address the ethical implications comprehensively. Balancing universal ethical principles with individual and cultural contexts allows for a nuanced approach to AI use in classrooms. Educators must thoughtfully consider the benefits and risks, ensuring AI is used in ways that uphold fairness, equity and respect. Engaging students in this ethical discourse empowers them to become informed and responsible users and creators, ready to tackle the challenges and harness the opportunities presented by this technology. By fostering critical thinking, protecting data privacy and addressing biases, students can be prepared for a future where AI plays a pivotal role, while understanding the significance of ethical integrity.


About the Author
Nick Jackson

Nick Jackson is the leader of digital technologies at Scotch College in Adelaide, Australia and founder of Now Future Learning, which helps educational institutions and businesses integrate and use generative AI. Jackson is also the co-author of the book “The Next Word: AI & Teachers.” He holds a Ph.D. and two master's-level degrees.

Main image: By Victoria Heath.