The Gist
- Global focus. The UK's CMA pushes for ethical AI and foundation model principles, affecting AI use worldwide.
- Transparency push. Report stresses need for marketers and customer service to be transparent in AI applications.
- U.S. catch-up. While the U.S. lags in AI regulation, it’s a growing focus for American policymakers.
While Google faces a major antitrust case brought by the US Department of Justice, UK regulators across the pond want to prevent anticompetitive practices involving artificial intelligence and foundation models (FMs).
The Competition and Markets Authority (CMA) — Britain’s antitrust regulating body — published a report this week following its initial review of foundation models and proposed principles to protect consumers and promote healthy competition.
What Are Foundation Models?
Foundation models are large machine learning models trained on vast amounts of data. And why should marketers and customer experience professionals in North America care?
For starters, it's another step by a government regulator to foster responsible use of AI. Ultimately, that trickles down to marketers and customer experience leaders, who should be transparent about the AI behind their content and campaigns and in AI-driven customer interactions like conversational chatbots and customer support. And, soon, they could be compelled to do so.
Remember: the United States may be behind Britain and the European Union in developing AI regulations and standards, but the topic is certainly on the White House's mind.
“The speed at which AI is becoming part of everyday life for people and businesses is dramatic,” Sarah Cardell, CEO of the CMA, said in a statement. “There is real potential for this technology to turbo charge productivity and make millions of everyday tasks easier — but we can’t take a positive future for granted. There remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy.”
CMA’s Call to Action and the Impact of Foundation Models
As for the impact of AI foundation models, the CMA wants principles that guide responsible, accountable and ethical use of AI while ensuring transparency for consumers.
According to the CMA, the first public foundation model was released by OpenAI in 2018. About 160 foundation models have since been developed and released, including OpenAI’s ChatGPT and Microsoft’s Copilot. These models can transform a range of industries and “how we live and work.” The CMA says businesses will be able to:
- Create new and better products and services
- Create easier access to information
- Help with all kinds of tasks, both creative and administrative
- Create potential scientific and health breakthroughs, often at lower prices
Related Article: AWS Unveils Latest Tools for Developing With Generative AI
Foundation Model Principles: Accountability, Access and More
The CMA's principles on AI foundation models aim to prevent consumers from being fed false information, exposed to AI-enabled fraud or misled by fake consumer reviews.
Here’s what the CMA proposes specifically:
- Accountability: Foundational model developers and deployers are accountable for outputs provided to consumers.
- Access: Ongoing ready access to key inputs, without unnecessary restrictions.
- Diversity: Sustained diversity of business models, including both open and closed.
- Choice: Sufficient choice for businesses so they can decide how to use foundation models.
- Flexibility: Having the flexibility to switch and/or use multiple foundation models according to need.
- Fair dealing: No anti-competitive conduct including anti-competitive self-preferencing, tying or bundling.
- Transparency: Consumers and businesses are given information about the risks and limitations of foundation model-generated content so they can make informed choices.
Over the next few months, the CMA expects to engage stakeholders on these principles before finalizing them. Ultimately, the principles will inform the CMA’s approach to the development and use of AI, including when it assumes new responsibilities under the Digital Markets, Competition and Consumer Bill currently going through Parliament, according to CMA officials.
Related Article: Microsoft, Google, OpenAI Respond to Biden's Call for AI Accountability
Foundation Models' Impact on Marketing, Customer Experience
So the big question remains for marketers and customer experience professionals: What do I need to know now about foundation models, and how do they impact my execution of content, campaigns, customer support and customer experience strategies?
Here are some potential impacts of foundation models, drawn from the CMA report.
Potential Applications of FMs in Consumer-Facing Operations
Foundation models can be applied across a wide range of uses, from content creation to customer support. Marketers can leverage these models to automate content generation, while customer experience professionals can use them to enhance chatbots and support systems. In customer-facing operations, chatbots and AI assistants could boost productivity, efficiency and the overall customer experience.
Foundation models are being utilized in various consumer-facing applications, including search, social media and language services. Their use is anticipated to grow rapidly, with significant value seen in customer service and marketing/sales, according to McKinsey.
Marketing and sales functions could benefit from efficient content creation, personalization and brand advertising, provided there are safeguards against risks like plagiarism or copyright infringement. Embrace the versatility of foundation models, but ensure their application aligns with the brand's voice and customer expectations.
Foundation Models' Impact on Customer Data Management Strategies
According to the CMA report, the extent of customer data feedback effects (the ability of FMs and FM developers to use the data generated by their usage to “learn” and improve performance) at the post-deployment stage could be important for how competition in foundation model-powered search develops.
For example, firms could use data about how a consumer interacts with a chatbot answer engine (e.g. what questions they ask, and how they react to the response) to fine-tune the FM to generate more useful answers.
“Significant data learning effects could increase the risk that firms with access to large volumes of consumer data can gain market power in FM powered search, potentially insulating them from competition,” CMA researchers wrote. “... We will continue to monitor the developments in this area and the potential impact on competition in search services.”
The likely best practice here for customer experience leaders: transparency in how you obtain and use the customer data behind AI chatbots.
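To make that transparency practice concrete, a deployer might only reuse chatbot exchanges for fine-tuning when the user has consented, recording that provenance alongside each example. Here is a minimal Python sketch of that idea; all names (`ChatInteraction`, `build_finetune_dataset`, the sample questions) are hypothetical illustrations, not anything prescribed by the CMA report:

```python
from dataclasses import dataclass


@dataclass
class ChatInteraction:
    """One chatbot exchange, with provenance recorded for later reuse."""
    question: str
    answer: str
    user_consented: bool  # did the user agree to their data being reused?


def build_finetune_dataset(interactions):
    """Keep only consented exchanges, in a simple prompt/completion format."""
    return [
        {"prompt": i.question, "completion": i.answer}
        for i in interactions
        if i.user_consented
    ]


interactions = [
    ChatInteraction("What is your refund policy?", "Refunds within 30 days.", True),
    ChatInteraction("Where is my order?", "It ships tomorrow.", False),
]

# Only the consented exchange survives as fine-tuning data.
dataset = build_finetune_dataset(interactions)
```

The design point is that the consent flag travels with each exchange, so the fine-tuning pipeline can filter on it rather than relying on policy alone.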
Monetization Strategies for Foundation Model Services
The report outlines the different ways firms are monetizing foundation model services, offering insight into potential revenue streams and business models.
When implementing and using this technology, consider various monetization strategies, from subscription models to pay-per-use, and determine which aligns best with your target audience and business goals.
Consumer Protection Concerns With Foundation Models
The report highlights potential harms, such as misleading outputs and hallucinations from foundation models. For marketers, this underscores the importance of ensuring that AI-generated content is accurate and aligns with brand values. For customer experience professionals, it emphasizes the need for transparency and accuracy in AI-driven interactions.
Brands should implement rigorous testing and evaluation processes to minimize errors. Consider watermarking or other methods to disclose AI-generated content, ensuring transparency and building trust.
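One lightweight way to disclose AI-generated content is to attach a plain-language label that travels with the copy itself. The sketch below is a hypothetical helper (the CMA report does not prescribe any disclosure format or wording); the function and label text are illustrative assumptions:

```python
def label_ai_content(text, model_name):
    """Append a plain-language AI disclosure to a piece of generated copy.

    Hypothetical helper: the exact wording and placement of the disclosure
    would be a brand and legal decision, not something mandated here.
    """
    disclosure = f"[Generated with AI assistance ({model_name}); reviewed by our team.]"
    return f"{text}\n\n{disclosure}"


caption = label_ai_content("Meet our new fall collection.", "example-model")
```

Because the disclosure is appended at generation time, it cannot be silently dropped later in the publishing pipeline without an explicit decision to remove it.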
Consumer Understanding and Interactions With Foundation Model-Generated Outputs
The report discusses the importance of consumer understanding and the need for clear disclosures when interacting with FM-generated responses. This is crucial for building trust and ensuring that customers feel informed.
Marketers and customer experience professionals should clearly communicate to customers when they are interacting with AI-generated content, and provide educational resources or FAQs to help them understand the technology's capabilities and limitations. The potential for user manipulation using FMs is a significant concern, so marketers need to be cautious about how AI is used in advertising to ensure ethical practices.
Establish clear ethical guidelines for AI usage in marketing campaigns. Ensure that AI-generated content does not mislead or manipulate users, and always prioritize transparency.
Uncertainties in Consumer Interactions With Foundation Models
There are uncertainties about whether consumers can identify false information provided by an FM application or know they are interacting with an FM-generated output. This can impact trust and overall customer experience.
Regularly gather feedback from users to understand their perceptions and experiences with AI interactions. Use this feedback to refine AI usage policies and improve transparency.
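A simple starting point for that feedback loop is tallying survey answers about whether users realized they were interacting with an AI. The sketch below assumes a hypothetical yes/no/unsure survey question; the function name and data are illustrative, not from the report:

```python
from collections import Counter


def summarize_feedback(responses):
    """Summarize answers to a hypothetical survey question:
    "Did you know you were chatting with an AI assistant?" (yes/no/unsure)

    Returns each answer's share of the total responses.
    """
    counts = Counter(r.strip().lower() for r in responses)
    total = len(responses)
    return {answer: count / total for answer, count in counts.items()}


# Example survey run: half the users knew they were talking to an AI.
summary = summarize_feedback(["yes", "no", "yes", "unsure"])
```

A low "yes" share would signal that disclosures are not landing and that transparency policies need revisiting.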
All About AI Policies and Ethics
As marketers and customer experience professionals craft their AI usage policies, it's essential to prioritize transparency, ethical practices and continuous evaluation. The potential of foundation models is vast, but it's crucial to navigate their implementation with care to ensure a positive and trustworthy customer experience.