Editorial

Ensuring No One in Education Gets Left Behind by AI

By Emily Barnes
How can higher education make AI more accessible?
I recently had the opportunity to oversee the planning and implementation of enterprise AI at a higher education institution. Both personally and professionally, and especially as a chief AI officer, I was thrilled to take on this project. Few institutions are investing in this level of transformation, so it was a chance to lead a significant innovation. The primary motivation for the investment was to enhance enrollment operations and improve the student experience throughout that process, from transcript evaluation to expanded services. While these systems are innovative and extremely beneficial for the business of higher education, and indeed greatly needed, they also risk leaving behind those without adequate resources or access. This experience revealed a critical truth: optimizing AI infrastructure and capacity must prioritize ethical considerations to ensure equitable access, actively prevent bias in AI systems and guarantee these technologies preserve the rights of a diverse and inclusive range of individuals and communities.

Equitable Access in AI Deployment

We must ensure AI technologies are inclusive and extend across diverse populations. In the context of higher education, access to AI-driven tools can dramatically affect student success and institutional efficiency. For example, AI-driven enrollment systems can streamline administrative tasks, cutting processing times by as much as 50% and freeing staff to focus on more personalized student support. AI-powered advising systems can analyze student data to identify those at risk of dropping out and deliver timely interventions, potentially raising retention rates by as much as 15%.

Historically, technological advancements have disproportionately favored those with access to resources, exacerbating existing inequalities. AI investments and benefits are predominantly concentrated in high-income countries, leaving low- and middle-income nations behind, according to a 2020 study by the Stanford Institute for Human-Centered AI. For instance, while North America and Europe received 75% of global AI funding, Africa and Latin America combined accounted for less than 5% of such investments, the Stanford study shows. Addressing this disparity requires building an AI infrastructure that is accessible to underserved communities.

In the United States, rural areas often lack the high-speed internet access that AI implementations take for granted, a barrier that sidelines many community organizations, including workforce development programs and educational institutions. These organizations directly shape the development of human capital and can bridge socio-economic gaps, yet they remain unable to leverage advancing technologies. Approximately 14.5 million Americans in rural areas still lack broadband at the FCC's benchmark speeds, severely limiting their ability to benefit from AI-driven educational solutions, according to a 2021 FCC report. Collaborative efforts among governments, private-sector entities and nonprofit organizations can significantly move the needle. For example, initiatives like Google's AI for Social Good have used AI to improve reading fluency among children in India, which has lasting effects on educational outcomes and future opportunities.

Actively Preventing Bias in AI Systems

AI systems in higher education are only as good as the data they are trained on. If those data sets are biased, the resulting AI applications will perpetuate and even amplify those biases. For example, AI-driven admissions processes and automated grading systems have inadvertently favored certain groups of students over others, reinforcing existing inequalities. Avoiding bias requires a multi-faceted approach, starting with diversifying training data sets so they better represent the full spectrum of students who enroll in higher education. Universities can partner with organizations that advocate for underrepresented groups to gather more inclusive data, ensuring that AI tools, like automated grading systems and personalized learning platforms, work fairly for all students.
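One concrete way to act on this is a pre-deployment audit that compares how a model treats different student groups. The Python sketch below is purely illustrative: the column names, sample data and the 0.8 threshold (the familiar "four-fifths rule" heuristic) are assumptions for demonstration, not a description of any particular institution's admissions system.

```python
# Hypothetical bias audit for an AI-assisted admissions model.
# Column names, sample data and the 0.8 threshold are illustrative assumptions.
import pandas as pd

# Simulated model decisions (1 = admit, 0 = deny) tagged by applicant group.
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "admit": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group: the share of applicants the model admits.
rates = decisions.groupby("group")["admit"].mean()

# Disparate-impact ratio: lowest selection rate divided by the highest.
# The "four-fifths rule" treats a ratio below 0.8 as a potential red flag.
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: review training data and model features.")
```

A check like this does not remove bias on its own, but it gives review committees a concrete number to interrogate before an AI-assisted process ever touches real applicants.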

One effective way to reduce the risk of biased outcomes is transparency in AI development. This increasingly critical issue is poised to draw significant attention from higher education leaders as accrediting bodies begin to address the challenges of effectively governing AI within their institutions. Institutions should strive to provide open data sets and algorithms, enabling broader scrutiny and accountability. For example, AI developers such as OpenAI have released some of their models, code and research openly, inviting researchers and developers to critique and improve the work. This collaborative approach not only enhances the robustness of AI systems but also fosters trust among users. Ultimately, a concerted effort to prevent bias in AI systems will promote fairness and justice in their applications.

Governance of AI and Preserving Human Rights

The optimization of AI infrastructure and capacity in higher education must be accompanied by robust ethical frameworks that can prevent misuse and ensure that AI technologies align with societal values and human rights. The European Union's General Data Protection Regulation (GDPR) and the EU’s recent “AI Act” serve as benchmarks for creating comprehensive regulatory frameworks that address privacy, accountability and transparency in AI applications. These regulations protect individual rights and set standards for data handling, consent and security.

Fostering ethical responsibility involves training AI professionals, faculty, students and consumers of all kinds in ethics. That can include encouraging ethical deliberation and establishing oversight, such as ethics review boards. Companies like Microsoft have established AI ethics committees to oversee the ethical implications of their AI projects, setting a precedent for responsible AI governance. Universities are implementing committees focused solely on the ethical use of AI technologies and on fair, equitable ways to incorporate AI into the curriculum. Committees like these should be in place across industries to evaluate AI initiatives for potential ethical issues, guide the development of fair and transparent algorithms and ensure the deployment of AI technologies does not infringe on human rights. Additionally, integrating ethics training into the professional development of AI engineers and data scientists ensures ethical considerations are ingrained in their approach to technology design and implementation.

In Conclusion

Optimizing AI infrastructure and capacity is not merely a technical challenge. It is an ethical imperative that demands a commitment to equity, bias prevention and ethical governance. By prioritizing these considerations, stakeholders in higher education can ensure AI technologies are developed and deployed in ways that benefit all people, rather than exacerbating existing inequalities. As AI continues to evolve, the integration of ethical principles into its optimization will be crucial in shaping a future where technology serves as a force for good, advancing societal progress and equity. The commitment to ethical AI ensures AI advancements are aligned with the values of fairness, justice and respect for human dignity, paving the way for a more inclusive and equitable digital future.


About the Author
Emily Barnes

Dr. Emily Barnes is a leader and researcher with over 15 years in higher education, focused on using technology, AI and machine learning to drive innovation in education and to support women in STEM and leadership, sharing her expertise through teaching and curriculum development. Her academic research and operational strategies are informed by her educational background: a Ph.D. in artificial intelligence from Capitol Technology University, an Ed.D. in higher education administration from Maryville University, an M.L.I.S. from Indiana University Indianapolis and a B.A. in humanities and philosophy from Indiana University.

Main image: By Peter Robbins.