Feature

Overcoming AI Bias in CX With Latimer

5 minute read
By Pierre DeBois
Learn how Latimer LLM tackles AI bias in customer experiences, ensuring culturally sensitive AI for improved marketing success.

The Gist

  • AI's crucial role. Personalization is key in marketing, and AI plays a central role in delivering customer experiences.
  • Latimer's mission. Latimer, created by John Pasmore, aims to address bias in AI models, with a focus on cultural elements.
  • Data bias challenge. Many AI models carry data bias from limited or skewed information, which can hinder delivering relevant content and manifest as cultural bias, including AI bias in CX.

Marketing success increasingly relies on technology. Personalization is key, and AI plays a crucial role in delivering customer experiences while addressing AI bias in CX.

Many of today's models carry data bias, built on limited or skewed information. That bias can hinder delivering relevant content and sometimes manifests as cultural bias in datasets.

Latimer, a large language model (LLM) created by John Pasmore, aims to address bias in AI models. Its development offers insight into the challenges of building AI assistants with cultural elements in mind.

What Is Latimer?

Pasmore founded Latimer as a teaching tool that helps people craft better prompts involving cultural norms and history, particularly as a way to learn about African-American and Hispanic cultures.

Addressing Bias in AI Models

The model was created to address bias in AI models. Artificial intelligence enables programmed devices to execute tasks that once required human intelligence. However, AI systems are susceptible to scaling biases: they complete tasks at great speed, yet they can be limited to training data that is never updated with vital information.

Image: Wooden dominoes fall to one side, illustrating how AI systems can scale bias. nuchao on Adobe Stock Photos

A Brief History of Inventor Lewis Latimer

Latimer LLM is named after Lewis Latimer, an African American inventor and technologist best known for refining the carbon filament for the electric light bulb. (He was a chief draftsman at Thomas Edison's lab.) He also held a variety of patents, including an early cooling and disinfecting apparatus that prefigured air conditioning and an improved toilet system for railroad cars. Additionally, his patent expertise helped Alexander Graham Bell complete the patent filing for the telephone.

Related Article: Cultural Intelligence Improves the Customer Experience

How Latimer Works 

Latimer features a prompt entry page and a generative AI assistant interface similar to ChatGPT. It differs in its foundation: the model is built on Meta's Llama 2 and augmented with data from real-world cultural documentation, including historical events, oral traditions, literature and current events related to communities of color.

The Biggest Problem

The main challenge with generative AI chatbots is that their response quality is limited by their training data. Updating that data with reasoned context is crucial to maintaining accuracy and reducing bias.

Retrieval Augmented Generation

To address bias and avoid misinformation, Latimer uses a process called retrieval augmented generation (RAG) to improve the accuracy of finding and citing relevant sources. RAG involves loading data from sources, splitting it into small digital "chunks," retrieving the chunks most relevant to a prompt and processing them alongside the prompt to produce an answer. This approach gives the model a richer data set to draw on, ensuring that responses are relevant and in natural language.
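
To make the pattern concrete, here is a minimal sketch of the general RAG flow just described: chunking sources, scoring chunks against the prompt and assembling an augmented prompt. This is an illustrative toy, not Latimer's actual pipeline; the bag-of-words scoring stands in for the learned embeddings a production system would use.

```python
# A minimal, illustrative sketch of the general RAG flow -- not Latimer's
# actual pipeline. Chunking, scoring and prompt assembly are simplified;
# production systems use learned embeddings, not bag-of-words counts.
from collections import Counter
import math


def chunk(text, size=40):
    """Split source text into small word-count 'chunks'."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def vectorize(text):
    """Toy bag-of-words vector standing in for a learned embedding."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def retrieve(prompt, chunks, k=2):
    """Rank chunks by similarity to the prompt and keep the top k."""
    q = vectorize(prompt)
    return sorted(chunks, key=lambda c: cosine(q, vectorize(c)), reverse=True)[:k]


def build_augmented_prompt(prompt, sources):
    """Assemble the retrieved context and the user prompt for the model."""
    chunks = [c for s in sources for c in chunk(s)]
    context = "\n".join(retrieve(prompt, chunks))
    return f"Context:\n{context}\n\nQuestion: {prompt}"


sources = [
    "Lewis Latimer refined the carbon filament for the electric light bulb.",
    "The New York Amsterdam News covers news for the Black community.",
]
# The augmented prompt would then be sent to the underlying LLM.
print(build_augmented_prompt("Who was Lewis Latimer?", sources))
```

The key design idea is that the model answers from retrieved context rather than from its frozen training data alone, which is why keeping the source documents current matters so much.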

Proper Data Updates

Latimer enhances its technology with development resources to ensure proper data updates. The development team includes prominent cultural scholar Molefi Kete Asante, who contributes to the model's ongoing development. Asante is a distinguished professor of African, African American, and communication studies at Temple University.

Events & News

Latimer also has an exclusive contract with the New York Amsterdam News, a traditional newspaper that covers events and news for the Black community.

Related Article: Cultural Intelligence and CX: Lessons for Better CX From Around the World

Why Addressing AI Bias in CX Is Necessary 

Marketers need models that accurately represent the real world they serve and avoid AI bias in CX. Marketers manage brands and retailers that function as platforms, elevating the brand to attract a loyal customer base, much as Beyoncé and Taylor Swift do for their fans. That role also involves ensuring that products offered through these platforms get the details right, including in cultural marketing. Target, for instance, withdrew a product after a teacher's TikTok video highlighted misidentified civil rights figures.

Reinforcing Good Decisions

Because generative AI is often used as a digital assistant, the underlying model must reinforce good decisions. If a Target AI assistant had continued referencing that product after it was pulled, the customer experience would clearly have suffered.

Related Article: Ensuring Cultural Sensitivity in Your Marketing and Advertising

Guardrails

Now imagine the guardrails needed when an assistant provides specific information: its data must incorporate cultural details and the right associations to deliver meaningful context.

AI & Bias Threat

Technology experts have long been concerned about how data bias can degrade the quality of AI models. Models can scale data bias into their decisions, altering outcomes that affect real people's lives. For example, the American Civil Liberties Union (ACLU) has noted how AI can be used to perpetuate housing discrimination. Facial recognition, another form of automation, has raised similar bias concerns.

Related Article: Addressing AI Bias: A Proposed 8th Principle for 'Privacy by Design'

Trust Erosion

Data bias is often reproduced in machine learning and large language models. Models need relevant data to address a given query, and data bias erodes trust in the models and in the very teams that support them.

Susceptibility

Large language models are susceptible to data bias because they predict word associations from their training data, emphasizing elements signaled in the prompt query.

Crafting Prompts

Let's say I craft a prompt stating, "Hey, I really liked my breakfast today. I ate eggs, toast, and..." An LLM can create suggestions based on what it predicts the other breakfast items are; these predictions are the targets of the input. The target can be almost anything. With breakfast, that means "hash browns," "waffles," "pancakes," "French toast," "coffee," "steak" and so on.

Subsets

Moreover, some of the items mentioned as potential targets have subsets. Cereal is a good target suggestion for the breakfast prompt, but what type of cereal: "Raisin Bran"? "Corn Flakes"? Is there a particular brand: "Special K"? "Froot Loops"? The list can go on.

Nuanced Choices

With cultural history, nuanced choices based on personal experience can introduce a subset into a model prompt like the breakfast example. Someone who grew up in a traditional African American neighborhood may recall eating grits, while someone with an Afro-Puerto Rican upbringing may recall having mangu at breakfast.
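
To see how skewed training data constrains those targets, consider a toy next-word predictor. This is a drastic simplification of how real LLMs work, and the training corpus below is hypothetical; the point is that a model can only emphasize completions its data contains, so "grits" and "mangu" never surface if the corpus omits them.

```python
# A toy next-word predictor illustrating how training data constrains an
# LLM's targets. A drastic simplification; the corpus below is hypothetical
# and, notably, never mentions "grits" or "mangu".
from collections import Counter

training_sentences = [
    "I ate eggs, toast, and pancakes",
    "I ate eggs, toast, and waffles",
    "I ate eggs, toast, and hash browns",
    "I ate eggs, toast, and coffee",
]


def next_word_candidates(prefix, corpus):
    """Count which words follow the prefix across the training corpus."""
    completions = Counter()
    for sentence in corpus:
        if sentence.startswith(prefix):
            rest = sentence[len(prefix):].split()
            if rest:
                completions[rest[0]] += 1
    return completions


print(next_word_candidates("I ate eggs, toast, and", training_sentences))
# Counter({'pancakes': 1, 'waffles': 1, 'hash': 1, 'coffee': 1})
# "grits" and "mangu" can never be suggested: they are absent from training.
```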

Experiences Drive Choices

Experiences can drive numerous choices in a digital context. In the early days of digital marketing, marketers examined keywords on websites and later, social media posts, because their usage reflected potential customer interest in a product, service or brand. 


Shift in Focus

Today, the focus of large language models has shifted from text classification to text generation. With personalization increasingly linked to AI assistants, marketers must understand how to provide context for cultural elements, avoiding AI bias in CX when a large language model generates text and images as answers.

The Right Cultural Elements

Marketers should consider enhancing LLM data with the right cultural elements to avoid AI bias in CX. This can mean adopting models like Latimer, which usually comes with inspectable training data, and developing connections to the right cultural resources. The right sources can indicate the best way to reach an intended audience while elevating the nuances of culture without AI bias in CX.
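
As a hedged illustration of "developing connections to the right cultural resources," the sketch below registers culturally grounded documents in a simple index that an assistant could draw context from. The documents and helper names are hypothetical placeholders, not a real Latimer integration.

```python
# A hedged sketch of connecting culturally grounded sources to an
# assistant's retrieval index. Documents and helpers are hypothetical.
cultural_index = []


def add_source(index, title, text):
    """Register a document so it can later be retrieved as prompt context."""
    index.append({"title": title, "text": text})


def find_context(index, query):
    """Naive keyword overlap; a production system would use embeddings."""
    terms = set(query.lower().split())
    return [d["title"] for d in index if terms & set(d["text"].lower().split())]


# Hypothetical culturally specific sources.
add_source(cultural_index, "Oral history interview",
           "grits were a staple breakfast in the neighborhood")
add_source(cultural_index, "Community news archive",
           "mangu remains a beloved breakfast dish")

print(find_context(cultural_index, "tell me about grits"))
# ['Oral history interview']
```

Pointing retrieval at vetted cultural sources like these is one way a brand could supply the nuance described here, rather than relying on whatever the base model absorbed in training.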

The Search

That search for sources is also why annual cultural celebrations such as Black History Month, which occurs every February, are important.

Refining the Data World

Data models understand different aspects of the world, and it is essential for marketers to refine that world to include the right elements and avoid AI bias in CX. Brands can then plan context around their source data as guardrails for accuracy.

Final Thoughts

Currently, Latimer is managing its rollout with limited access, mainly aimed at universities. Marketers should follow Latimer's development as well as other technology discussions about using AI in culture and public issues. These discussions and initiatives will certainly impact decisions for how personalized AI delivers customer experiences. Personalized recommendations are an element in ensuring that today's algorithms deliver relevant connections that truly build audience interest and avoid AI bias in CX.

About the Author
Pierre DeBois

Pierre DeBois is the founder and CEO of Zimana, an analytics services firm that helps organizations achieve improvements in marketing, website development, and business operations. Zimana has provided analysis services using Google Analytics, R Programming, Python, JavaScript and other technologies where data and metrics abide.

Main image: A silver robot hand holds the scales of justice against a gradient background. TechArtTrends on Adobe Stock Photos