Editorial

Tech's Ethical Test: Building AI That's Fair for All

By Cha'Von Clarke-Joell
Is your AI ethically programmed? Understand the stakes of data ethics in our digital world.

As companies race to build cutting-edge AI systems, they often forget to consider the ethical side of how they handle data. This isn't something we can fix later. It's a key factor in whether AI will help or hurt the people it impacts.

I keep seeing organizations pour everything into building advanced algorithms while treating data governance as something to deal with later. But how we handle data, and whether we do it ethically, shapes every decision an AI system makes. When these systems affect who gets access to healthcare, who is approved for a loan or how the justice system treats people, ethics can't be an optional "nice to have" feature.

In "Bias in Healthcare AI Systems: A Longitudinal Study," the MIT Media Lab proved what many AI ethicists have been saying: biased medical data creates biased AI systems, making healthcare disparities worse. This is why we need to think differently about data maturity. Yes, of course, technical excellence matters, but it's not enough on its own.

The Problem With Technical Checkboxes

When organizations think about data maturity, they often focus on technical points such as accuracy, completeness, consistency and currency. While these aspects are essential, they overlook a critical element: each data point represents real people with unique lives, aspirations and challenges. Reducing human experiences to mere technical specifications risks creating systems that may appear flawless but ultimately fail the very individuals they aim to serve.
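
To make the point concrete, here is a minimal sketch of what those technical checkboxes typically look like in code (Python with pandas; the 'updated_at' column is a hypothetical example):

```python
import pandas as pd

def technical_quality_report(df: pd.DataFrame) -> dict:
    """Classic 'checkbox' data-quality checks: completeness,
    consistency and currency. Passing them says nothing about
    whether the data fairly represents the people behind it."""
    report = {
        # Completeness: share of non-null values in each column
        "completeness": df.notna().mean().to_dict(),
        # Consistency: exact duplicate records
        "duplicate_rows": int(df.duplicated().sum()),
    }
    # Currency: age of the newest record (assumes a hypothetical
    # 'updated_at' timestamp column)
    if "updated_at" in df.columns:
        latest = pd.to_datetime(df["updated_at"]).max()
        report["days_since_update"] = (pd.Timestamp.now() - latest).days
    # Accuracy is notably absent: it needs ground truth to verify,
    # which is rarely available
    return report
```

A dataset can pass every check in a report like this and still systematically underrepresent entire communities.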

We are seeing this with businesses that bill themselves as "digitally focused" or "100% virtual" but lack adequate human oversight. On a recent trip to speak at a conference, a number of us arrived at a hotel that claimed to be fully virtual, meaning we could only communicate with the hotel team digitally. With no other options in the area, we waited, frustrated, for almost five hours to check in, until we finally reached a human being on the phone who drove over to open the doors so we could settle into our rooms.

Experiences like this suggest that many businesses are not yet ready to go fully virtual, and that a "bridge" is needed to close the readiness gap and address the impact of technical systems on individuals. The data helped the hotel prepare five rooms, but the lack of human oversight turned the stay into an upsetting experience for me and the other guests.

The reality is that technically "clean" data can still reinforce systemic biases if it fails to reflect human complexity and cultural difference.

Getting the Basics Right

Effective data management goes beyond organization; it's about responsibility. We must understand the origins of our data and how it evolves to identify potential biases and maintain transparency.

Tracking changes is not merely about documentation; it's about ensuring accountability. These technical tools are effective only when anchored by strong ethical principles.
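
As one illustration, here is a minimal sketch of what a lineage entry might capture; the field names are hypothetical, but the principle is that every change to the data carries a who, a what and a why:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """One auditable entry: where the data came from, what was done
    to it, who did it and why it was acceptable."""
    source: str          # e.g., an upstream dataset or export name
    transformation: str  # what changed
    performed_by: str    # the person or service accountable
    rationale: str       # why the change was ethically acceptable
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Usage: append a record every time the dataset changes hands or shape.
log = [
    LineageRecord(
        source="claims_export_2024",
        transformation="dropped rows missing a postcode",
        performed_by="data-eng@example.org",
        rationale="postcode is required for service-area matching",
    )
]
print(json.dumps([asdict(r) for r in log], indent=2))
```

The "rationale" field is the ethical anchor: it requires whoever changes the data to state why the change was acceptable, not merely that it happened.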

Real Governance, Not Box-Checking

Data governance has to be more than meeting technical standards or following compliance rules. Of course, those things matter, but they're just the start. Organizations that are getting this right build ethical governance into their work from day one. It is not an afterthought or last-minute addition.

Look at fairness metrics in AI systems. We have tools to measure statistical bias, but can they tell us what "fair" really means for different communities? We must always consider ethics and how systems affect real people.
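
As an example, here is a minimal sketch of one widely used statistical check, the demographic parity gap, applied to hypothetical loan decisions:

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str,
                           outcome_col: str) -> float:
    """Difference between the highest and lowest positive-outcome
    rates across groups. A value of zero means statistically equal
    rates, which is not the same thing as fairness in every
    community's eyes."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical loan decisions: 1 = approved, 0 = denied
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
print(demographic_parity_gap(decisions, "group", "approved"))  # ~0.33
```

The metric can flag a disparity, but deciding whether that disparity is unjust still takes human judgment about context and community.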

Organizations can put dedicated AI ethics boards in place to oversee development and deployment, ensuring that ethical considerations are built in by design. Transparent practices like these give stakeholders the chance to understand, and build trust in, decision-making processes that involve AI, which is especially critical in highly regulated industries.

Space for Different Communities

An often-overlooked aspect of data maturity is cultural understanding. Data systems frequently fail because they do not grasp how various communities perceive and use technology. For example, the World Bank's financial inclusion efforts faltered when they overlooked critical informal financial networks, misunderstanding how diverse communities manage their finances.

To build data practices that work for everyone, we will need to:

  • Consider how different communities actually use and think about technology
  • Understand that privacy means different things to different people
  • Think about the history and cultural nuances that affect how data represents communities
  • Collaborate with diverse communities to really understand what they need

Grounding discussions of cultural nuance in data usage and privacy in real-world scenarios is a practical way to support individuals. Some communities, for example, have unique approaches to data sharing, and acknowledging those practices can lead to more effective AI implementations.

The Rules Are Changing

Privacy laws like the GDPR and CCPA set baseline protections, but new questions about AI-generated content and creative oversight are changing how we need to think about managing data.

The US Copyright Office recently made this clear by declaring that AI-assisted work needs real human creativity to qualify for copyright protection. Meaningful human input, not reliance on AI agents, is now the official standard, and how we develop AI-generated content matters. The Office noted, "Extending protection to material whose expressive elements are determined by a machine, however, would undermine rather than further the constitutional goals of copyright."

We're also seeing more focus on reviewing algorithms for ethical issues, not just technical ones; ethical consideration is a key requirement of the EU's AI Act. Progressive organizations aren't waiting for new rules. They're already building ethical practices into how they handle data.

Recent Developments Highlighting Ethical Concerns

Recent events further underscore the importance of ethical data practices. The UK's Department for Work and Pensions (DWP) faced scrutiny over its use of AI to process correspondence from benefit claimants, with critics raising concerns about transparency and the handling of sensitive personal data and emphasizing the need for clear ethical guidelines in AI deployment.

The emergence of AI models like DeepSeek has likewise sparked global debate over national security and data privacy. DeepSeek's data collection practices prompted investigations by privacy watchdogs, underscoring the importance of ethical data governance in AI development.

Moving Forward

Building mature data practices means changing how we think about AI development. The answer is not simply more technical tools or processes; it is making strategic, ethical thinking part of everything we do with data and recognizing that ethical responsibility matters as much as technical excellence.

Ethically mature data practices will be essential as AI grows more powerful. We need to build systems that actually help people rather than harm them, and organizations will need to take this seriously, putting policies and governance in place that protect their greatest resource: their people.

About the Author
Cha'Von Clarke-Joell

Cha'Von Clarke-Joell is an AI ethicist, strategist and founder of CKC Cares Ventures Ltd. She also serves as Co-Founder and Chief Disruption Officer at The TLC Group.
