Nick Jackson — one of the 2024 VKTR Contributors of the Year.
Interview

Embracing the AI Era in Education

By Chris Ehrlich
11 minute read
How should educators view AI?

Editor's note: This article is part of our series that celebrates the 2024 VKTR Contributors of the Year.

Nick Jackson, the leader of digital technologies at Scotch College Adelaide, is a hands-on voice on AI in education. As one of the 2024 VKTR Contributors of the Year, Jackson regularly covers AI implementation, upskilling, ethics and more. In this Q&A, Jackson discusses how educational institutions can approach AI for both instruction and learning.


About Nick Jackson

  • Leader of digital technologies at Scotch College Adelaide
  • Founder of Now Future Learning
  • Co-author of the book "The Next Word Now: AI & Teachers"
  • Holds an M.A. in education from Edge Hill University, an M.Sc. in multimedia and e-learning from Huddersfield University and a Ph.D. in the philosophy of education from Monash University

Why are you personally interested in AI?

I've always had an interest in technology of all kinds. Ever since I was a child, when home computers and game consoles such as the ZX Spectrum came about, I was one of those kids buying and playing games like Manic Miner and Paper Boy. But mostly, my interest has always been in using technology, experimenting and playing with computers, finding out what they do. I would not really say I am much of a programmer. It's not something I choose to do on a regular basis, but I have always had this interest in how technology works with humans, how it affects us and how it infuses into our lives.

So prior to generative AI launching itself on us, I was teaching students about AI, mostly in terms of autonomous vehicles and rule-based chatbots, looking at the idea of voice control, at how AI could manipulate workflows and automation, and at how robots could come together to affect certain occupations or vocations. I was already fascinated with the idea that AI could be some sort of synthetic or simulated intelligence that humans worked with. I was trying to make students aware of that, and I was intrigued to see how that could pan out. But of course, that's nothing compared to what we've seen in the sphere of generative AI. So it comes from that deep-rooted interest in technology and people, in how those two things come together, and in the issues, advantages, disadvantages and complexities that creates, alongside an interest in the technology itself and what it does.

What are your general activities in AI?

They are multi-faceted but all are rooted in education. I'm deeply interested in how generative AI can be used to improve education and how it disrupts education. I'm interested in how teachers and students can use this technology, how it affects their practice, their learning. Therefore, I invest time in helping teachers and students work with AI to realize its potential, its power and its reach. I do presentations, deliver workshops, develop training and education materials.

I also write books, articles, blog posts and social media commentary on platforms such as LinkedIn, where I am a voice of provocation to some degree and a voice of discussion. I use these platforms to synthesize a lot of the information and thinking that is out there on generative AI, trying to give perspectives on how I see this technology affecting education. I try to give people information and viewpoints, but also ideas for moving toward better technology integration and for improving and changing education systems, right down to the grassroots level of practice.

Similarly, I focus on the learners' perspectives, because my philosophy is based on student agency and a belief that students should take a more significant role in determining their own pathways in education. So my general activities in AI center on the ecosystem around integrating AI into education, whether as a spokesperson working online or face-to-face, or working directly in a school on a day-to-day basis, across all year levels.

Bracing for AI Changing Education and Society

What’s your primary philosophy on AI in education?

My primary philosophy on AI in education is that this is the biggest disruption we have seen, definitely in my lifetime, maybe since formal education began. It has the potential to significantly change the way we teach and the way people learn, and to empower all sorts of different groups within education in terms of what they can do and their approaches to learning. I also believe it will go much broader than education itself and into various aspects of our lives, for example in areas of well-being. Hence, the knock-on effects of living in an AI-infused world will affect human beings and education in ways we have not even begun to consider at this stage. I believe what we are currently witnessing is the reality that, in many respects, most education systems currently in place are not fit for purpose: AI is disrupting elements such as current assessment methods and the systems and structures we have in place in schools.

We have to ask deep questions about what we value in education, what we are willing to change or allow to disappear completely, where AI needs to be infused and where AI needs to be resisted for the sake of core skills and knowledge human beings need to develop. We need to consider where we should be working with or without AI in a lot of different ways. That requires societies around the world to ask very deep, philosophical questions. And they need to be asked quickly, before we get ourselves into a situation like the one we have with social media and mobile phones, where we are struggling to cope with the ways they are affecting young people's lives. We cannot fall back on archaic methods, such as banning and trying to backtrack on current ways of working, as we are seeing now, because we have allowed AI to be used and sold to young people without trying to understand its impact.

Having said this, I'm very much an optimist. I believe this technology can really be a catalyst for change, and that change was needed long before AI came along. My philosophy is that we need to be on the front foot, be clear on what we are doing, why we are doing it and who should be involved, and ensure we include students and young people as well as educators in decision-making.

Related Article: Navigating the New Landscape of Generative AI in Education

Enabling Teaching and Learning With AI

Which educational functions, processes and workflows should AI be improving? How could AI technology improve education?

AI has the potential to enhance a wide range of educational functions and processes. In teaching and learning, AI should be deployed to enable more learner-centered and adaptive instruction, looking at effective ways to provide automated or semi-automated assessment and real-time feedback, as well as at how and when teachers play a key role in both of these areas. It should be streamlining administrative tasks, such as enrollment, timetabling, attendance tracking and communication with the community.

For educators, AI should be used to support professional development by curating resources, tracking and analyzing performance, and making professional development and training bespoke. AI tools should be utilized to improve curriculum design through content creation, mapping and identifying gaps.

In terms of student well-being, research and experimentation should be going on to see how AI can monitor mental health, support inclusive education and provide career guidance. Additionally, generative AI continually demonstrates how effective it is with data. Therefore, the technology should be used to enhance data management and security, and in data analysis to forecast and offer insights through data modeling, making schools more efficient and responsive to student needs.

Related Article: Beyond the Textbook: AI’s Overhaul of Teaching and Learning

Accounting for AI Issues in Education

How could AI adversely impact education?

AI could adversely impact education in several ways if it is not thoughtfully implemented and research is not carried out. There is a danger that the use of generative AI will add to the digital addictions and disconnects some young people are currently struggling with in respect to mobile phones and social media, which can only hinder their ability to be successful learners. There are also risks that AI will be used in ways that significantly reduce the need for young people to think critically and solve problems. There is little doubt that biases in AI algorithms can perpetuate inequities. These can lead to all manner of issues developing in education, for example unfair outcomes in assessment and feedback, or misrepresentations in the resources created.

With increased reliance on technology and the sophistication with which humans can and will engage with AI tools, there may be a loss of human connection in learning environments. This may negatively affect student engagement, social development and emotional well-being. It may also create disconnects between teachers and students and challenge the building and sustaining of relationships that are fundamental in learning environments.

Additionally, data privacy and security risks arise when sensitive student information is collected and stored by AI systems. Overuse or misuse of AI for automated grading may overlook nuances in student work that require human judgment. Finally, access gaps due to the unequal availability of AI technologies may widen the digital divide, disadvantaging schools and students in certain socioeconomic situations.

Related Article: Practicing AI Ethics in Education

Using AI for Human-Centered Growth

What do you believe is the relationship between an educator, student and AI?

I believe the relationship between an educator, student and AI should be a collaborative partnership where each plays a series of distinct yet interconnected roles. Educators will increasingly take on the role of guides and mentors, using AI to enhance their teaching and differentiate instruction, while offering a layer of nuanced human interaction and an understanding of social situations, the learner and the environments they are working in. Educators should be gaining insights into student performance while providing the human connection and ethical oversight that AI cannot and should not be allowed to replicate.

Students should be active learners who engage with AI for personalized and customized learning materials, immediate formative feedback to assist with development, and independent exploration, with educators helping them reflect and grow beyond AI's capabilities and in areas where AI is not relevant. Conversely, students need to be seeking their own ways of disconnecting from technologies, including AI, and understanding the significance of nature and of human relationships.

AI should primarily serve as an assistant, augmenting teaching and learning through adaptive resources, automated tasks and data-driven insights. AI technologies should be used to create a balanced and enriched learning environment, offering enhanced efficiency and customization, but the essential human elements of empathy, creativity and thinking need to remain central in education.

Related Article: Human Collaboration Still Matters in the Age of Artificial Intelligence

Spreading AI Institution-Wide

What should an educational institution do to support AI adoption for teachers, staff and students?

First and foremost, institutions need to be proactive and supportive of AI adoption by teachers, staff and students. To support AI adoption, educational institutions should provide ongoing professional development to build confidence and competence in using AI tools, alongside clear policies on responsible use, ethics and data privacy. There needs to be consideration for equitable access to AI resources, infrastructure demands and technical support. Integrating AI literacy into the curriculum will help students understand AI's practical applications and ethical implications.

Ongoing experimentation, research and flexibility are essential. Institutions should run pilot programs, gather feedback and refine strategies, while maintaining a culture of innovation through supportive leadership. Ethical considerations, such as addressing bias and promoting fairness, must remain a priority.

Engaging students, parents and the wider community through transparent communication and information sessions will foster trust, while robust data security measures protect privacy. Continuous evaluation and adaptation of AI practices will ensure AI adoption meets evolving educational needs effectively and responsibly. Having a sustained approach and working in all areas described should lead to appropriate cultures being fostered.

Related Article: AI Minimal Use Policies Prepare Students for an AI Future

Learning From Early AI Adopters in Education

What are educational institutions doing well in AI adoption?

Unfortunately, not many educational institutions are actually doing much well in AI adoption. Bans still exist in some areas, or access is limited through firewalls. The few that are proactive in AI adoption are focusing largely on exploring how they can enhance teaching and learning. They are integrating AI tools that adapt to individual student needs, such as custom chatbots, or using general-purpose chatbots in defined ways. There are some examples of tools for providing tailored content and real-time feedback.

Some institutions are beginning to implement AI-driven learning analytics to identify learning gaps, predict student outcomes and inform data-driven decision-making, but this seems to be in its infancy. There are some good examples of staff, and sometimes students, being provided with professional learning to support upskilling and confidence building in the use of generative AI in education. Likewise, some institutions are fostering ethical awareness by developing guidelines for responsible AI use and addressing concerns around privacy, bias and fairness.

Related Article: 5 AI Case Studies in Education

Forming Multi-Disciplinary Teams for AI in Education

What sort of talent should educational organizations be looking for to support AI development, adoption and usage?

The demands that successful and sustained generative AI integration puts on educational organizations mean they need a diverse range of talent. Ideally, the support needed at various levels includes AI developers and expertise in data science and data analytics to help develop bespoke and highly contextualized solutions for the organizations they are deployed in. Some of this work may involve employing machine learning (ML) specialists to refine AI models.

There is a need for teachers with AI skills and confidence, as well as wider digital literacy skills, and for educators who can act as coaches to assist in integrating AI into pedagogical practice, creating meaningful and long-term professional development. IT specialists and cybersecurity experts are crucial for maintaining infrastructure and protecting student and organizational data. Curriculum designers and instructional technologists are needed to help create AI-enhanced learning experiences, while ethics specialists can ensure AI is deployed and used responsibly and equitably across organizations. Change managers and project managers can facilitate smooth AI adoption. Finally, educational researchers are vital to evaluate AI's impact on teaching and learning, supporting continuous improvement.


Related Article: How Companies Can Prepare for an AI-Augmented Workforce

Looking to More AI Features in Education

What do you see as the growth opportunities in AI for education in the next year and beyond?

It is clear that recent developments in more agentic AI models, where tools show they can understand more about the world around a user and interact with various elements in different environments, offer many possibilities for education. Features such as voice interaction with chatbots and generative AI tools that access the internet, offer output in different formats and integrate with existing, universally used tools all appear set for growth in the next year. There seems to be a trend toward more diverse applications of generative AI in bespoke areas, and I can see that continuing in a similar way to the spread of Web 2.0 startup tech culture.

Jackson’s Interests Outside of AI

What do you like to do outside of work?

I have a keen interest in music. I like to play around with developing my own electronic music and practicing DJ skills. However, I also go to see a considerable amount of live music, bands and DJs. A lot of my free time is spent around sport. I am a lifelong Leeds United supporter, and although I am on the other side of the world, I watch every game, sometimes at ridiculous times of the night. I like to don the lycra and cycle in the warm climate in Adelaide and regularly visit the gym. This allows me to enjoy eating and drinking. I also enjoy reading, travelling and being with family and friends. Lastly, I like walking the dog on the beach, listening to and watching the waves.

Check out some of Jackson's articles from 2024:

About the Author
Chris Ehrlich

Chris Ehrlich is the former editor in chief and a co-founder of VKTR. He's an award-winning journalist with over 20 years in content, covering AI, business and B2B technologies. His versatile reporting has appeared in over 20 media outlets. He's an author and holds a B.A. in English and political science from Denison University.

Main image: By Victoria Mathis.