What Are Ethicists Saying About AI?

By Solon Teal
Where does ethics fit into AI innovation?

As AI development and proliferation accelerate, questions of ethics and morality have entered the mainstream. Practical concerns such as the future of work, economic stability and personal freedoms now dominate the conversation, contrasting sharply with more philosophical inquiries such as "Is AI alive?"

This article will explore the critical trends and developments in the ethical support, critique and educational implementation of AI, providing insights into how these technologies are shaping our society and the ethical frameworks that guide them.

Techno-Optimists: AI Is Ethical Because It Improves People's Conditions

One of the more powerful narrative arguments for AI development today is "Techno-Optimism." Popularized by Marc Andreessen's "Techno-Optimist Manifesto," this view treats the development of AI as an ethical imperative: "We believe growth is progress — leading to vitality, expansion of life, increasing knowledge, higher well-being," Andreessen says.

As the argument goes, AI can and will resolve the issues humanity faces today. After all, if AI can predict the structure of nearly every known protein, one day it may also help cure cancer and more. One of techno-optimism's greatest strengths is its ability to use the precedent of previous intellectual discoveries as prima facie support for its own predictions about AI.

Effective Altruist and Effective Accelerationist Perspectives Continue to Fuel AI Ethics Discussions

Techno-optimism encompasses several positions. Effective altruists, who hold that innovation should happen only for the good of humanity, entered the mainstream as the core of FTX founder Sam Bankman-Fried's belief system. Their contrarian counterparts, the effective accelerationists, hold that the ethical good is "innovation without limits." Importantly, these groups are not formal institutions but loosely organized movements, active primarily online on specific platforms and sites. Their small numbers, however, have not prevented these ethical perspectives from significantly influencing both thought and real-world decisions.

Critiques of techno-optimism often emphasize its oversimplifications. Societies, for example, do not have consistent ethical priorities. Nevertheless, this framing, especially online, is often the default.

AI Bias Analyses Suggest Power, Not Ethics, Should Be a Key Discussion

Bias has been a foundational critique of modern applied AI for years. For example, Joy Buolamwini's pre-ChatGPT investigative research highlighted significant accuracy disparities in facial recognition across skin tones and genders and set expectations that bias would carry over into future AI adoption.

There are many other examples of output bias, in which decisions made by an AI were foundationally flawed. Among the most recent are Google's problems with an image generation tool that overcorrected in an attempt to mitigate bias. xAI's development of Grok as an explicitly "anti-woke" chatbot was a response to the implicit political bias Elon Musk perceived in ChatGPT. "Bias," whether the charge is accurate or not, is likely to become the "fake news" of future AI systems.

Data is a foundational determinant of potential bias. Frontier model developers are increasingly secretive about the data, often including personal information, used to train their models. The legality, if not the ethics, of these training data sets is currently being tested through discovery in multiple separate lawsuits. Some known data sources include "The Pile" and the heavily used LAION-5B, the largest image-text data set in the world.

See more: What Are Philosophers Saying About AI?

LAION-5B and AI Powers vs. AI Ethics

The Knowing Machines Research Project released a visual investigation of the open LAION-5B data set. The analysis revealed concerns around data integrity, such as an over-reliance on a small number of North American raters. The research also highlighted a broader story: "Omissions and biases and blind spots from these stacked-up models and training sets shape all of the resulting new models and new training sets." Small acts in one part of the process can have significant, unforeseen downstream ramifications.
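To make that compounding concrete, here is a minimal, purely illustrative Python sketch. It is not based on the Knowing Machines data; the labels, keep rates and generation counts are invented assumptions. It simulates curators who drop one kind of content slightly more often than another, then rebuilds each new "training set" at the skewed proportions, mimicking data sets built on top of earlier filtered data sets.

```python
import random

random.seed(0)

def make_dataset(n, minority_share):
    """Build a toy corpus of items tagged 'minority' or 'majority' (hypothetical labels)."""
    return ["minority" if random.random() < minority_share else "majority"
            for _ in range(n)]

def curate(items, minority_keep_rate=0.9, majority_keep_rate=1.0):
    """Simulate raters who drop 'minority' items slightly more often (assumed rates)."""
    kept = []
    for item in items:
        rate = minority_keep_rate if item == "minority" else majority_keep_rate
        if random.random() < rate:
            kept.append(item)
    return kept

share = 0.20  # assumed starting share of 'minority' content
for generation in range(1, 6):
    data = make_dataset(100_000, share)   # new corpus built at the current share
    data = curate(data)                   # one round of slightly biased filtering
    share = data.count("minority") / len(data)
    print(f"generation {generation}: minority share ~ {share:.3f}")
```

In this toy run, each generation's small filtering skew feeds the next, so the share erodes steadily, from 20% to roughly 13% after five rounds, even though no single filtering step looks drastic.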

Professor Kate Crawford of USC, co-founder of the Knowing Machines Project and one of Time's 100 most influential people in AI, recently stated that she sees a critical opportunity to move "beyond discussion of ethics to discussions of power." After all, systems that have the appearance of thought now exist, and not only as an ethical thought experiment. If more intelligent AI arises, who has the power to determine cultural ethics?

Consistent Ethics for AI in Education Are a Work in Progress

Unlike companies, which have clear and specific deliverables, education is inherently process-oriented, meaning that the methods often dictate the outcomes. For instance, students who frequently use AI may lose their ability to write effectively without it. Conversely, those who do not engage with AI may find themselves at a disadvantage in the future job market. Balancing the practical needs of students' future careers with the principles of educational pedagogy is central to ongoing ethical discussions in academic settings.

  • Emory and Brown universities are making AI ethics a key focus of their research hiring and institutional emphasis (link)
  • Is it ethical for teachers to use AI to save time on grading? (link)
  • Decisions on AI usage are generally occurring on a teacher-by-teacher basis (link)
  • Teaching style, gender and background often shape a teacher's decision to use AI in the classroom (link)
  • The late Daniel Dennett of Tufts commented last year that one of his greatest concerns about AI is humanity's loss of mutual trust (link)

Overall, ethical application of AI in education appears to happen case by case, oriented around practical, balanced engagement with the tools. Optimism paired with contextual knowledge of bias provides a strong basis for educational use.

Ethical Models and AI Models Will Continue to Evolve

Ethical debates surrounding AI, historically rooted in theory, now confront tangible issues daily, from Google cloud contracts to insurance claims. As the implications of AI stretch beyond theoretical benefits, such as solving humanity's maladies and bringing about utopias, they increasingly influence practical and corporate realities.

The adage "might makes right" serves as a cautionary backdrop in these discussions. If AI, or its controllers, hold disproportionate power, ethical debates risk becoming superficial. Thus, ethical considerations must be dynamically integrated into AI development and application. Critics and cheerleaders alike must continually revise their ethical frameworks to keep pace with technological advancements, so principled action guides AI's evolution.

See more: How Might Socrates Have Used AI Chatbots?

About the Author
Solon Teal

Solon Teal is a product operations executive with a dynamic career spanning venture capital, startup innovation and design. He's a seasoned operator, serial entrepreneur, consultant on digital well-being for teenagers and an AI researcher focusing on tool metacognition and practical theory. Teal began his career at Google, working cross-functionally and cross-vertically, and has worked with companies from inception to growth stage. He holds an M.B.A. and an M.S. in design innovation and strategy from the Northwestern University Kellogg School of Management and a B.A. in history and government from Claremont McKenna College.

Main image: By Possessed Photography.