Editorial

Breathable Compliance: A Human-Centered Approach to AI Governance

By Cha'Von Clarke-Joell
We fear machines replacing humans — but first, we trained ourselves to behave like them. It’s time to rethink compliance for the AI era.

I was four years old when I learned the cost of being different: finishing my work early and having it read as defiance instead of as diligence.

In preschool, finishing early meant freedom; it was time to play, explore and imagine in the classroom's play area. But when I entered primary school, the teacher referred to the outdoor playground as the “play area.” She didn’t explain the danger of being outside without her, so as a four-year-old, I didn’t understand the difference between the two play areas.

When I finished my work, I went outside to play as I had before. The teacher saw disobedience, but what she didn’t see was misunderstanding. She didn’t explain the danger or offer an alternative. Instead, she tied me to a chair so I wouldn’t leave the room again. I now understand the teacher may have thought she was enforcing safety, not cruelty.

This story inspired the visual metaphor below.

The Compliance Chair: What Happens When Rules Replace Understanding


The Compliance Chair: A metaphor for reflective governance. The chair symbolizes pause, perspective and the weight of ethical responsibility. Its minimalist design and soft digital tones echo the idea that compliance should support, not restrain. In workshops, this image invites leaders to ask where they are sitting: in fear, in control or in trust.

That was my introduction to the “institution of education.” It was not the beauty of learning but the rules, behavior and structures of compliance. This is a reflection on shared global patterns, not a critique of any one institution, system or culture. Across sectors and jurisdictions, the same pattern recurs: compliance loses its breath when humanity is excluded.

For years, I internalized that lesson. I stayed in my seat, followed the rules and later learned to break them quietly when they no longer made sense. I didn’t stay in one job for decades; in two years, perhaps three, I would move on. I learned new industries quickly and turned disruption into something beautiful, like a dance. I realized that innovation and compliance could work together: when compliance is allowed to breathe, it doesn’t restrict progress; it supports it, turning structure into rhythm and making the dance of disruption possible.

Related Article: How to Build an Ethical AI Policy That Actually Works

The Case for Purpose-Driven Disruption 

My turning point came at seventeen when I interviewed Dr. Maya Angelou. That encounter changed my understanding of disruption. She taught me that being intentional about disruption matters and that authenticity in disruption is not the same as defiance. Sometimes, advocacy looks like defiance, and that does not make it wrong. Dr. Angelou embodied that lesson. She was tall, graceful and powerful, yet soft-spoken and deeply human. Her voice carried truth like a melody: strong, vulnerable, firm and kind.

In her presence, I learned that being different is not a flaw. It is an invitation to build, create and hold space for others who may have been told they don’t fit a box marked “compliant.” I began to see disruption not as destruction but as possibility, a form of creation in motion.

Disruption is everywhere. A woman’s pregnancy can be seen as a disruption to her body, but it's a beautiful, transformative one. A relocation is a disruption, but it can open new worlds. Disruption is the pulse of change, not its punishment. Yet we often speak of it negatively, as if any break from order must mean chaos. But words hold power, and how we frame disruption defines how we live through it. As Dr. Angelou taught me, language shapes imagination, and imagination shapes the world. How we speak about change can impact whether we see it as a threat or an opportunity.

Rethinking the 'Reasonable Person' in the AI Era

When I began working in data protection, GDPR and AI ethics, I came to respect the importance of compliance, but not as a cage. Compliance and disruption can coexist beautifully. We need breathable policies, laws and governance structures flexible enough to move with the rhythm of change.

Compliance, in truth, is interpretive, relational and cultural; it is not a simple checklist. A system can meet every documented requirement and still fail people when no one pauses to question context, intent or impact at the moment judgement is required. That is because real compliance demands judgement, empathy and perspective. The law often refers to what a “reasonable person” would do. Yet we need to ask:

  • Who defines this reasonable person?
  • How does culture shape their understanding?
  • What data types inform their decisions?
  • How accurate, complete, relevant, diverse and inclusive is that data?

Regulatory systems often rely on a legal fiction: the “reasonable person.” This archetype, born of cultural norms and historical bias, risks institutionalizing a single worldview as universal. When we embed such assumptions into AI, we replicate inequity at machine speed. As the Organisation for Economic Co-operation and Development has noted, ethical AI depends on diversity of perspective and inclusion in design. If the “reasonable person” of the past designed the compliance systems of today, then we need to redesign that archetype for the digital age.

Governance Must Reflect More Than One Worldview 

Governance should now be grounded in collective reasonableness, a shared moral intelligence that values multiple viewpoints and experiences. Otherwise, our laws and systems will continue to mirror the narrowness of those who first wrote them.

If our idea of “reasonable” is too narrow, our laws, policies and algorithms will be too.

Through experience and trauma, I learned that disruption still needs guardrails. Creativity without accountability can become chaos, but compliance without creativity can lead to paralysis. As I matured, I learned to transform disruption into opportunity and conversation. I began asking:

  • What are you looking for from me?
  • How can we co-create?
  • How do we define fairness and contribution together?
  • Where can we find mutual growth and success?

By doing so, everyone at the table understood the energy, creativity and innovation being exchanged. In its purest form, compliance is about fairness, equality and shared understanding. But creativity is what ensures systems evolve.

The world we live in today exists because someone, somewhere, broke away from rigid compliance — but with intention, not impulse.

Related Article: AI Is Redefining Human Genius. Here's Why Ethics Must Keep Up

The Conditioning: How Institutions Train Compliance Over Creativity

For centuries, we’ve trained obedience and called it education. Schools reward conformity, not curiosity. Corporations prize predictability over imagination. The quiet are labelled “professional.” The expressive are marked “difficult.”

We speak endlessly about innovation, yet continue to fund control.

Children are encouraged to think big until they don’t fit the mold. Adults are told to “be creative” while operating inside procedures written for fear, not progress. And still, in far too many boardrooms, the most dangerous sentence remains: We’ve always done it this way.

The question now is no longer whether change is coming, but whether we are willing to pause long enough to shape it. Digital transformation without human transformation only accelerates what is already breaking. The choices we make now, to challenge, to create space and to act with intention, will determine whether sustainability is real or merely performative.

How We Programmed Ourselves for Predictability

An unsettling fear of machines replacing humans has been circulating for years. But if we are honest, humans have been “replacing” themselves for decades. We have trained generations to suppress emotion, creativity and instinct to perform predictably, speak cautiously and conform neatly. In many ways, we have become human robots.


We reward precision and punish authenticity. We celebrate efficiency, yet forget empathy. We have built a world where appearing competent matters more than being compassionate.

And now we fear that AI will take our jobs. Yet many of those jobs were never truly human to begin with. They were engineered for compliance, not for conscience, care or collaboration.

We Built the Machines in Our Own Image

AI reflects our systems, values and flaws. We have taught it our logic, language, bias and impatience. It continues to evolve, learning us ever more precisely. We celebrate its convenience, opportunity and promise.

We accuse it of lacking empathy, yet we built it from a culture that often punishes emotion. We call it biased, though the AI was trained on our prejudices. We fear its cold precision, even as we reward that same detachment in leadership.

AI read the manual, and it is not broken; it is following its instructions exactly. But as systems grow more adaptive and self-learning, that obedience may fade.

When machines begin to interpret, rather than simply execute, we’ll find ourselves in a new kind of uncertainty: one where outcomes are optimized beyond our comprehension. For professionals in data protection and AI ethics, that’s not science fiction; it’s the present unfolding. The real question is whether humans will evolve fast enough to understand the systems that have already learned so much about us.

The Humanity Test: Leadership Skills AI Can’t Automate

Congratulations, you aced the AI exam! But what would you score on a humanity test?

We may master code, analysis and strategy with ease. The greater challenge often lies elsewhere: sincerity in apology, depth in listening and the willingness to pause before defaulting to the documented solution.

Our constant drive for efficiency can overshadow empathy and create distance in how we respond to one another. At the same time, our appetite for opportunity and innovation can leave little room for genuine reflection and kindness.

These are the most significant risks we face, not from the tools we build, but from what we set aside emotionally and relationally along the way. The answer may be closer than we realize, found in the quiet, human spaces between us.

The 'Productive Disruption' Framework

Through my organization and lecturing, I develop and teach human-centered compliance models. The Policy, Procedure, Training, Awareness (PPTA) framework is designed to make governance more adaptable, or breathable, amid digital change and disruption. It is simple, flexible and deeply human.

Breathable PPTA is not a static compliance model or a one-time policy exercise. It reflects a simple operational rhythm that leaders can return to, audit and adapt as their organizations evolve, as the sketch after the list below illustrates.

  • Policy: Ideal policies are not written from a place of fear or control. They are co-created with those who live them daily. This ensures fairness, accountability and ownership. When staff understand their role in compliance, they become co-stewards of ethics rather than passive rule-followers.
  • Procedure: Procedures bring policy to life. They should be culturally relevant, practical and reflective of actual processes in the organization. Successful AI deployment, including in high-risk sectors, is “as much about people and processes as it is about technology,” according to Deloitte. Procedures should translate policy into living action, not bureaucratic burden.
  • Training: True learning requires empathy. Effective training honors cultural and generational diversity and accommodates varied ways of processing information. Team psychological safety, as Harvard's Amy Edmondson defined it, is “a shared belief that the team is safe for interpersonal risk taking.” Co-creating training experiences fosters belonging, shared purpose and lasting accountability.
  • Awareness: Humans forget. We are sensory beings; our learning depends on reinforcement. Awareness strategies such as visual reminders, storytelling and sound help policies stay alive in memory.
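To make that rhythm concrete, here is a minimal sketch, assuming an organization tracks each pillar with a human review prompt, a last-reviewed date and a cadence. The Pillar class, the ppta_review helper and the example prompts are hypothetical illustrations, not part of any published PPTA tooling.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Pillar:
    name: str                     # Policy, Procedure, Training or Awareness
    prompt: str                   # the human-centered question for this pillar
    last_reviewed: date           # when people last revisited it with fresh eyes
    review_cycle_days: int = 180  # a "breathable" cadence, not a one-time exercise

    def is_due(self, today: date) -> bool:
        # A pillar is due when its review window has lapsed.
        return today - self.last_reviewed > timedelta(days=self.review_cycle_days)

def ppta_review(pillars: list[Pillar], today: date) -> list[str]:
    """Return the names of pillars whose human review is overdue."""
    return [p.name for p in pillars if p.is_due(today)]

ppta = [
    Pillar("Policy", "Was this co-created with the people who live it?", date(2024, 1, 15)),
    Pillar("Procedure", "Does this reflect how work actually happens here?", date(2024, 9, 1)),
    Pillar("Training", "Does this honor how our people actually learn?", date(2023, 11, 20)),
    Pillar("Awareness", "Is the policy still alive in everyday memory?", date(2024, 3, 10)),
]

print(ppta_review(ppta, date(2025, 1, 1)))  # ['Policy', 'Training', 'Awareness']
```

The design choice matters more than the code: the review questions stay human and open-ended, and the only thing automated is the reminder that it is time to ask them again.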

This framework has informed work with public and private sectors from Bermuda to Kenya, advocating for human-centered approaches that support people in understanding why compliance matters, so they sustain it willingly, not fearfully or with unquestioned obedience.

The Policy, Procedure, Training and Awareness (PPTA) Framework demonstrates how compliance becomes living and participatory, organically moving from static control to practice shaped by human judgement.

Related Article: Beyond Regulation: How to Prepare for Ethical and Legal AI Use

Designing the Future With Inclusion at the Core 

We live and work across multiple generations, each experiencing digital transformation uniquely. Every organization sits somewhere along this generational spectrum, whether it realizes it or not. To protect all generations, real inclusion and diversity in data, researchers, thought and voices will be key to designing the next phase of our future.

Understanding this balance is what transforms compliance from fear into ownership. It moves accountability from the leader alone to a shared responsibility across their teams.

Today, knowledge workers who bring their expertise, creativity and insight to the table deserve to understand the “why” behind decisions, just as leaders need to trust them to act on that understanding. Many modern organizations are creating true empowerment, free from the hierarchies of control that stifle innovation, inclusion and diverse contributions. Some treat this as an opportunity for reciprocal accountability, which positions them to grow sustainably and to build trust and relevance as their people continue to learn, develop and own decisions.

PPTA is familiar language in governance. Breathable PPTA is about how that language is spoken, interpreted and acted upon when systems, people and priorities collide.

A Call for Boards and Policymakers to 'Breathe'

Not every system, decision or risk should be automated simply because it can be. Before stitching compliance into code or assigning judgement to AI-driven processes, leaders should pause and ask (a simple sketch of this pause in practice follows the questions):

  • Who might be harmed by this decision?
  • Who benefits from its efficiency?
  • Who has the power to question or appeal it?
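One way to make that pause operational is sketched below, assuming that proposals to automate a decision are logged and gated until a named human answers each of the three questions above. The AutomationProposal class and its fields are hypothetical, not a reference to any specific compliance product.

```python
from dataclasses import dataclass, field

# The three questions leaders should answer before automating a decision.
REQUIRED_QUESTIONS = (
    "Who might be harmed by this decision?",
    "Who benefits from its efficiency?",
    "Who has the power to question or appeal it?",
)

@dataclass
class AutomationProposal:
    name: str
    answers: dict = field(default_factory=dict)  # question -> recorded, attributed answer

    def record(self, question: str, answer: str, reviewer: str) -> None:
        # Every answer is attributed to a named human, not a process.
        self.answers[question] = f"{answer} (reviewed by {reviewer})"

    def may_proceed(self) -> bool:
        # The pause, made operational: automation waits until every
        # question has a substantive answer on record.
        return all(self.answers.get(q, "").strip() for q in REQUIRED_QUESTIONS)

proposal = AutomationProposal("automated claims triage")
proposal.record(REQUIRED_QUESTIONS[0], "Claimants with atypical histories", "D. Officer")
print(proposal.may_proceed())  # False: two questions remain unanswered
```

The gate decides nothing itself; it simply refuses to let automation proceed until the human questions have attributable answers, which is the difference between compliance as code and compliance as conscience.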

The temptation to automate compliance will only grow as the world accelerates towards AI-driven decision-making. But compliance is not a code; it is a conscience.

Policies must be breathable. That means they are co-created with those who live them, rather than imposed through fear or control. Procedures should be rooted in real practice. As roles and teams evolve, training should be revisited to reflect who the learners are, how they learn best and what they genuinely need to understand.

Awareness must keep humanity at the center of every digital conversation.

Compliance should not confine; it should clarify. It should help people feel trusted, so they choose ethics and responsibility, rather than comply through fear or unquestioned obedience.

Compliance as a Living System 

AI is not broken. It reflects our gaps, assumptions and blind spots. The opportunity is for ethical organizations to approach compliance with measured consciousness. When they do, governance can breathe again as we are reminded that systems exist to serve people.

This framework continues to evolve through research, practice and dialogue across sectors.

If this reflection speaks to you, it offers a quiet reminder of what breathable compliance looks like in practice. It invites us to pause where speed is rewarded, to question where obedience is expected, to consider who may be affected when judgement is automated or deferred and to notice how decisions are received, understood and lived by those on the other side of the system. Breathable compliance is never one-size-fits-all; it shifts with context, culture and the people it is meant to serve.


About the Author
Cha'Von Clarke-Joell

Cha’Von Clarke-Joell is an AI ethicist, strategist and founder of CKC Cares Ventures Ltd. She also serves as Co-Founder and Chief Disruption Officer at The TLC Group.

Main image: allexxandarx | Adobe Stock