The Gist
- Using AI demands an ethical framework to build trust with both customers and employees. If customers are going to trust your brand and employees are going to trust that you have their interests in mind, your organization needs a transparent AI framework. This should come from the top.
- Generative AI should be a tool to create content, not the content itself. While generative AI can produce a great deal of content, organizations risk eroding trust or fatiguing customers with subpar content if they rely on it too heavily.
- North America lags behind Europe in addressing ethical concerns around AI. The EU is moving ahead with AI regulation, while regulation in the U.S. remains fragmented.
BARCELONA — Several years ago, there was a wave of market disruption as many organizations moved their tech stacks from on-premises infrastructure to the cloud.
Now, a similar disruption is taking place in the AI space. AI’s shift is poised to have an impact as great as, or greater than, the move to the cloud once did.
It’s also easier for organizations to make the move to AI. “If I’m already in the cloud, then bringing in an AI component isn’t that big of a step,” said Bertrand Maugain, CEO at Ibexa, in an interview with CMSWire during his company's Ibexa Summit conference in Barcelona last month.
One of the main advantages of AI is how it makes technology easier to access for non-technical people. AI is also eliminating a bottleneck on experimentation: projects that used to take weeks or months to prototype can now take days.
“AI allows us to get more value from unstructured or messy data,” Scott Brinker, editor of Chief Martec, said in an interview with CMSWire.
CMSWire heard the tales of AI from the EU perspective at last month’s Ibexa Summit. Dozens of conversations with EU digital customer experience practitioners and thought leaders made one thing clear: AI is a useful tool that organizations need to explore, but it must be applied carefully and thoughtfully to maintain customer trust.
Table of Contents
- Using AI While Maintaining Customer Trust
- Distinguish Generative and Non-Generative AI Use Cases
- Regulation Efforts Beyond GDPR
- Authenticity for Customers and Employees Alike
- How to Develop a Culture of AI Ethics and Trust
- Conclusion: Nuanced, Privacy-First Approach to AI Implementation
- Core Questions Around AI Ethics and Trust
Using AI While Maintaining Customer Trust
But how can one develop a sense of trust in how AI tools are used? Panel members at the 2025 Ibexa Summit AI roundtable discussion “How to Balance Productivity and Authenticity?” tackled this question and more. These experts — Brinker, Margaret Ann Dowling, founder of Create and Translate.org, Bente Sollid, CEO of Digital Hverdag, and Janus Boye, lead at Boye & Company — offered the following solutions:
- Transparency: Be open and transparent about how your AI system works, where it draws data from and its decision-making process.
- Accountability: Establish clear lines of responsibility for the AI system.
- Ethical guidelines: Establish clear guidelines around AI usage, particularly around privacy and safety.
- User control: Give users the ability to override AI decisions where appropriate.
- Monitoring: Establish ongoing monitoring benchmarks for the AI system’s performance and impact (a minimal audit-log sketch follows this list).
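One concrete form that transparency, accountability and monitoring can take in day-to-day practice is an auditable record of each AI decision. Below is a minimal sketch in Python; the record fields, the model name and the log path are illustrative assumptions, not a prescribed standard.

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class AIDecisionRecord:
    """One auditable entry per AI decision, supporting transparency and review."""
    timestamp: float
    model: str            # which system produced the output
    input_summary: str    # what the decision was based on
    output: str           # what the AI decided or generated
    human_override: bool  # whether a person overrode the decision


def log_decision(record: AIDecisionRecord, path: str = "ai_audit.jsonl") -> None:
    """Append the record to a JSON Lines audit log for ongoing monitoring."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


log_decision(AIDecisionRecord(
    timestamp=time.time(),
    model="product-recommender-v2",  # hypothetical internal model name
    input_summary="customer browsing history, last 30 days",
    output="recommended SKU 1042",
    human_override=False,
))
```

A trail like this gives reviewers something concrete to benchmark against, and it makes the user-control step verifiable: every override is on the record.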
Efforts to establish this and other frameworks around AI are underway in North America. In the most recent CMSWire State of Digital Customer Experience survey, 29% of organizations said they don’t yet have a framework in place but plan to implement one. While this number is nearly half what it was in 2023, there’s still more work to do. Following the steps listed above is a good place to start.
Related Article: Why Transparency Is Vital When Brands Use AI
Distinguish Generative and Non-Generative AI Use Cases
While AI is a relatively broad term, organizations should take care to differentiate between generative and non-generative AI use cases. Thinking of AI as a way to reduce repetitive tasks allows us to view it as a tool workers can use to improve their lives.
Generative AI, however, isn’t as neat a package. While it can reduce production times, there’s a very real danger in striving for quantity over quality, particularly in the more creative fields. Generative AI is fine for content production when the output is relatively non-creative: product and image descriptions, for example.
EU panel members put it plainly: true, highly creative content can and should come from humans.
There’s also the human consumption element to consider. How will consumers and potential customers feel about efforts to get their attention with generative AI?
In her session at the 2025 Ibexa Summit entitled "Trust, Authenticity & the Future of Content in the Age of AI," Dowling discussed this very thing. “We underestimate the emotional aspect that content is for the consumer,” said Dowling. Too much artificially generated content can lead to alienation.
Regulation Efforts Beyond GDPR
Although the EU’s GDPR data privacy law is still relatively young, having come into effect less than seven years ago, it has become part of the European work culture. Compliance and regulation are top of mind as the EU seeks to address the rise of AI usage across the workforce.
In August 2024, the Artificial Intelligence Act came into force across the EU, establishing a comprehensive legal framework for AI. While similar laws have been proposed in the United States, regulation there remains fragmented, with Colorado and California leading state-level efforts.
Related Article: European Artificial Intelligence Act Comes Into Force
Authenticity for Customers and Employees Alike
Authenticity around AI affects both customers and employees. Customers want to know that the information they’re consuming is real and trustworthy. For employees, authenticity means creative work that reflects their true selves. This pertains especially to content creators, who want to feel emotionally invested in their creative journey. Using generative AI to produce large volumes of similar-sounding content defeats the purpose of using AI to eliminate or reduce repetitive tasks.
If your organization is committed to using generative AI, you might be better off using personalized GPT models. A recent study showed that output from a personalized LLM, one that took into account the author’s unique tone, voice and style, felt more authentic than copy produced without any personalization.
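As a rough illustration of what that kind of personalization can mean in practice, the sketch below conditions a model on a documented style profile. It is a minimal sketch assuming the OpenAI Python SDK; the STYLE_PROFILE text, the model choice and the draft_in_author_voice helper are hypothetical stand-ins, not the study's actual method.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical style profile distilled from an author's past writing.
STYLE_PROFILE = (
    "Voice: warm, direct, lightly humorous. "
    "Sentence length: short to medium. "
    "Avoid: jargon, superlatives, passive voice."
)


def draft_in_author_voice(brief: str) -> str:
    """Generate copy that follows the documented tone, voice and style."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": f"Write in this author's voice:\n{STYLE_PROFILE}"},
            {"role": "user", "content": brief},
        ],
    )
    return response.choices[0].message.content


print(draft_in_author_voice("A 50-word product note about our new CMS release."))
```

The design point is simply that the personalization lives in a reusable profile rather than in each individual prompt, so every piece of copy starts from the same voice.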
Editor's note: Check out this video interview from the Ibexa Summit, done by Felipe Jaramillio of Aplyca with Scott Brinker, editor of Chief Martec.
How to Develop a Culture of AI Ethics and Trust
According to CMSWire research, the push to adopt generative AI is coming from both the C-suite and the line level. In the report Unlocking the Potential of Artificial Intelligence in Digital Customer Experience by CMSWire and VKTR INSIGHTS, 41% of CX professionals say adopting generative AI is a top-down initiative, while 22% say employees use it at their own discretion with no organizational guidance.
That discussions are happening at the executive level is a good start, but executives should take more initiative in guiding AI implementation to contain any potential breaches of trust.
“Discussions of AI belong in the boardroom,” said Dowling. “And the tone starts from the top.” Part of establishing trust around AI is the visible practice of using AI responsibly.
Conclusion: Nuanced, Privacy-First Approach to AI Implementation
Organizations are only just beginning to test the boundaries of what AI can do for them. But as Brinker noted in his keynote at the 2025 Ibexa Summit, one of the quintessential problems of the 21st century is that while AI and LLMs change and develop rapidly, organizations … don’t.
And yet, future-facing companies know that even if AI doesn’t have applications to their business now, it will in the future. And if they don’t develop an understanding of AI’s possibilities — as well as a framework to use the technology responsibly — their competition will.
When generative AI entered the public consciousness on Nov. 30, 2022, with the debut of OpenAI's ChatGPT, there was much anticipation and hype around its possibilities. While the hype is dying down, the possibilities remain. But pursuing them involves decisions that, if not made thoughtfully, can irreversibly break consumer trust.
Thanks in no small part to the data privacy culture that has evolved since the implementation of GDPR, organizations in the European Union are taking a nuanced, privacy-first approach to AI implementation.
Core Questions Around AI Ethics and Trust
Editor's note: Key questions surrounding ethical AI implementation and trust-building:
Why is an ethical framework crucial for AI adoption?
Without clear AI ethics and governance, businesses risk eroding trust with both customers and employees, leading to reduced engagement and potential regulatory scrutiny.
How should companies balance AI automation with authenticity?
Generative AI should enhance human creativity, not replace it. Organizations must set guidelines to ensure AI-generated content aligns with brand voice and quality standards.
What lessons can North America learn from Europe’s AI regulations?
The EU has taken a proactive approach with the Artificial Intelligence Act, while the U.S. remains fragmented in its AI governance. Companies operating globally must prepare for compliance challenges.
How can executives foster trust in AI within their organizations?
AI adoption needs leadership support. Transparency, accountability, and governance should start from the boardroom to ensure AI implementation aligns with ethical business practices.
What risks come with over-reliance on generative AI?
AI-generated content can lead to customer fatigue and loss of engagement if not used thoughtfully. A personalized AI approach can help retain authenticity.