Two traffic lights hang over an urban intersection with a dramatic dark sky in the background, symbolizing the need for generative AI safeguards.
Feature

Safeguard Generative AI to Protect Customer Privacy

By Scott Clark
Without stringent safeguards, generative AI risks leaking sensitive personal details and eroding consumer trust.

The Gist

  • Privacy priority. Generative AI safeguards are essential for maintaining user trust and data security.
  • Consumer concerns. High demand for transparency and regulation around generative AI impacts brand trust.
  • Legal landscape. Evolving privacy legislation adds layers of complexity and risk for generative AI usage.

As generative AI continues to be adopted across customer-facing applications and services, protecting user privacy has become a business priority. The large language models behind these tools can generate text, code, media and more, and they can ingest tremendous amounts of customer data in the process. Without stringent safeguards, generative AI risks leaking sensitive personal details and eroding consumer trust. Let's discuss the importance of generative AI safeguards and how brands are implementing them to ensure their customers' privacy and trust are maintained.

How Generative AI Affects Customer Privacy

Consumers today are aware of the potential for the misuse of their personal data. A 2022 Statista survey revealed that 42% of those polled said that they were concerned or very concerned about their online data. Additionally, a 2023 Statista marketing report indicated that 73% of respondents are using generative AI tools, such as chatbots, as a part of their work. Given the widespread and rapid adoption of generative AI applications, combined with consumer privacy concerns, it behooves businesses to be aware of how generative AI impacts data privacy, and what they can do about it.

Generative AI systems are often fueled by vast amounts of customer data, from personal information to conversation transcripts. To train complex machine learning models in text, image, video and code generation, they ingest datasets that contain sensitive user details. Without proper safeguards, these AI systems risk inadvertently exposing private customer information in their outputs. For instance, in March 2023, a user reported that ChatGPT was sharing personal information about other users, as well as their prompt and conversation history.

Generative models have the potential to replicate copyrighted data or media, reveal confidential conversations, or propagate biases if not developed responsibly. A 2023 StoryStream survey revealed that 94% of those polled want more transparency and regulation around the use of generative AI in marketing and advertising. Additionally, the survey indicated that 58% said that they would be more likely to trust a brand that openly discloses its use of generative AI.

Generative AI holds the capacity for both promise and peril when it comes to leveraging customer information. Brands must prioritize data security and responsible AI principles to avoid eroding consumer confidence in this transformative technology. Providing transparency around how these AI systems work can help maintain user trust that their data is being handled appropriately. 

Transparency reports may even be necessary to disclose data practices to users. Jonathan Moran, head of martech solutions marketing at SAS, an analytics, AI, and data management software and service provider, told CMSWire that most consumers are aware of generative AI and the work that it does. "As such, transparent communication with consumers should be used to create loyalty and trust,” said Moran. “When customers know that organizations are collecting (data) and then deriving insight (analytics) and suggesting content (generative AI) based on personal data, they should do so in a responsible and protective manner, to create relational (not transactional) trust." 

Amit Sood, CTO and head of product at Simplr, a US-based customer service outsourcing platform provider, told CMSWire that safeguarding customer data should be a top priority for any AI practitioner. "AI chatbots have the potential to enhance customer experiences, but that should never come at the cost of exposing customer data. There are no standards or protocols in place when it comes to feeding personal data into a chatbot, and this heightens the risk of privacy issues or potential phishing schemes. Effective and strong guardrails protect companies and their customers from these disastrous incidents." 

Privacy Legislation Will Impact Generative AI

Expanding privacy legislation and compliance standards such as SOC 2 have major implications for companies using generative AI. Regulations such as the GDPR and CCPA impose strict consent, audit and assessment requirements around collecting and processing the personal data that drives these systems. Additionally, Indiana, Iowa, Montana, Oregon, Tennessee, Texas and Utah have enacted data privacy legislation that will go into effect this year.

Randy Lariar, generative AI expert and director of big data and analytics at Optiv, a cybersecurity consultancy and solution provider, told CMSWire that it is clear that regulatory and operational risks associated with generative AI are only going to increase. "State, country, and continental government bodies are all putting forth potential regulations and laws that could impact how organizations work with AI. Legal challenges to firms using generative AI have already started and are likely to increase."  

Generative models must be carefully designed to avoid unlawfully exposing customer information. Rigorous security, access controls and responsible governance procedures, such as those required for SOC 2 compliance, help minimize risk and support ethical use. However, regulations are rapidly evolving worldwide, so developers must continually adapt their systems and practices to address emerging laws.

Ultimately, enabling responsible innovation with generative AI means investing heavily in privacy and governance to comply with regulations and maintain public trust. Failure to do so can result in fines and eroded consumer confidence and trust that hinders adoption and acceptance. By honoring both the letter and spirit of expanding privacy laws, brands can develop ethical generative AI that customers will feel confident engaging with.

Lariar suggested that it is important to recognize that the transformative potential of AI and the heightening regulatory and operational risks mean that businesses will need to have a plan. “Some are already drafting AI policies, governance processes, staffing plans, and technology infrastructure to be ready for the surge in demand for AI capabilities and associated risks.”

How Can Guardrails Be Put Into Place?

Standard privacy strategies are also applicable to AI applications. Ashu Dubey, co-founder and CEO at Gleen, an enterprise-level generative AI platform provider, told CMSWire that one major guardrail they have implemented has been to refuse to share any data their system collects with any third parties. “One of the biggest ways data privacy can be misused and mishandled is when brands turn first-party data into third-party data.” Because their AI platform works with their clients’ customers, they recognized the need for data privacy and made it a founding principle to not share any data with third parties under any circumstances.

Putting guardrails into place for existing generative AI applications requires a different approach than for applications that are still in development. For existing generative AI applications, guardrails should include:

  • Perform robust testing to identify harmful biases, privacy risks, or policy violations. Continuously monitor models after deployment — it’s an iterative process.
  • Implement controls like blocklists, allowlists and suppression lists to constrain unsafe content generation (see the sketch after this list).
  • Employ “human-in-the-loop” approaches where people review high-risk outputs before release.
  • Develop easy user reporting channels to flag policy-violating or abusive AI behaviors.
  • Frequently retrain models on new data that represents the latest norms and values.
  • Maintain the ability to disengage problematic model versions and roll back to a previous safe checkpoint. ChatGPT users saw this in action when OpenAI removed web browsing functionality from ChatGPT after problems were reported.
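
To make the blocklist, suppression-list and human-in-the-loop items above concrete, here is a minimal sketch in Python of what a single automated output check could look like. The term lists, regex patterns and review_output function are illustrative assumptions for this article, not a description of any particular vendor's guardrails.

```python
import re

# Illustrative lists and patterns only; a real deployment would load curated,
# regularly reviewed lists rather than hard-coding them.
BLOCKLIST = {"internal use only", "do not distribute"}      # never release
SUPPRESSION_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                   # US SSN-style digits
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                  # card-number-like digits
]

def review_output(text: str) -> dict:
    """Classify a model output as release, suppress or escalate."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return {"action": "suppress", "reason": "blocklisted phrase"}
    if any(pattern.search(text) for pattern in SUPPRESSION_PATTERNS):
        # High-risk pattern: hold the response for human review before release.
        return {"action": "escalate", "reason": "possible PII pattern"}
    return {"action": "release", "reason": "passed automated checks"}

if __name__ == "__main__":
    print(review_output("Your order has shipped."))              # release
    print(review_output("Card on file: 4111 1111 1111 1111"))    # escalate
```

A check like this sits between the model and the user: anything flagged "escalate" goes to a human reviewer, and anything flagged "suppress" is replaced with a safe fallback response.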

For applications that are still in development, the following guardrails should be implemented:

  • Perform ethical risk assessments early in the R&D process and design to mitigate any identified dangers.
  • Adopt safety-focused frameworks like Responsible AI Practices when formulating new models. Both Google and Microsoft have embraced Responsible AI Practices.
  • Seek diverse input and feedback to uncover any potential risks the team may be blind to. 
  • Implement algorithmic procedures like “social-good prompting” to steer models toward helpful, harmless results. Social-good prompting is an approach AI researchers are exploring to encourage generative AI models to produce beneficial outcomes that are aligned with ethical values (see the sketch after this list).
  • Engineer selective memory and information filtering techniques to control knowledge recall. 
  • Design human oversight into the generative process to enable course correction.
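
As a simple illustration of the “social-good prompting” item above, the following Python sketch prepends a safety-oriented system instruction to every request so the model is steered toward helpful, harmless, privacy-respecting answers. The wording of the preamble and the call_model placeholder are assumptions; a real deployment would adapt both to its own LLM client and policies.

```python
# A minimal sketch of social-good prompting: every user request is wrapped
# with a safety-oriented system instruction before it reaches the model.
# `call_model` is a placeholder for whatever LLM client the team uses.

SAFETY_PREAMBLE = (
    "You are a customer-support assistant. Never reveal personal data about "
    "any customer, decline requests for other users' information, and keep "
    "answers factual, respectful and helpful."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Wrap the raw user prompt with the safety-oriented system instruction."""
    return [
        {"role": "system", "content": SAFETY_PREAMBLE},
        {"role": "user", "content": user_prompt},
    ]

def answer(user_prompt: str, call_model) -> str:
    # call_model(messages) -> str is assumed to be supplied by the application.
    return call_model(build_messages(user_prompt))
```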

Sood suggested that brands should be careful about the data that is “fed” to the large language model. "Scrub all personally identifiable information (PII), including anonymizing data so it’s not customer-facing. This shouldn’t live in the models. Leverage the latest version of LLMs. Ensure that the employees overseeing the use of the bot are also following strict data privacy protocols." 
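
A rough sketch of the PII scrubbing Sood describes is shown below, again in Python. The regex patterns and scrub_pii helper are illustrative assumptions; production systems typically pair pattern matching with a trained PII-detection model and human spot checks before anything is sent to, or stored alongside, a large language model.

```python
import re

# Illustrative patterns only; real PII detection covers far more categories
# (names, addresses, account numbers) and is validated against labeled data.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[ -]?)?\(?\d{3}\)?[ -]?\d{3}[ -]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace recognizable PII with typed placeholders before the text is
    logged, stored or passed to a language model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub_pii("Reach me at jane.doe@example.com or 614-555-0100."))
# -> "Reach me at [EMAIL] or [PHONE]."
```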

Other best practices Sood recommends include building checks into the bots — something that is especially important for teams who try to deploy their own LLMs. “Develop bots that mimic different personas to conduct further tests. Launch company-wide ‘break the bot’ contests where employees try to find holes in the tool prior to it going live.”

The Challenges of Generative AI Guardrails

Guardrails such as access controls and anonymization aim to safeguard privacy; however, generative AI poses unique challenges. Large language models are trained on massive datasets that often inadvertently contain sensitive personal information. Anonymizing or restricting such data can degrade the quality of generative outputs, while models that use raw personal data risk exposing, replicating or inferring private details without rigorous controls.

The ongoing monitoring of outputs is critical yet challenging due to the huge volumes of data that are generated. Evaluating privacy risks requires a deep understanding of what constitutes harmless personalization versus intrusive privacy violations. Customers appreciate personalization, but not if it risks their personal information being revealed. Even with robust data security, generative models may unintentionally memorize confidential inputs and later regenerate them. 

Consumer attitudes and regulations also continue to evolve, meaning rigid guardrails struggle to adapt. Ultimately, brands must weigh generative AI's benefits against potential data privacy harms through impact assessments. Minimizing risks while retaining usability is an intricate balancing act that requires careful technical, ethical and legal diligence. Businesses pursuing responsible innovation must minimize the use of raw personal data, perform exhaustive testing, implement a human review process and regularly perform iterative updates.

Final Thoughts on Safeguarding Generative AI

Safeguarding generative AI isn't a simple process, but it shows a brand’s commitment to protecting consumer trust and the ethical use of technology. Ensuring data privacy in generative AI requires cross-functional collaboration, continuous tuning, and transparency to earn public confidence. With ethical safeguards in place, generative AI can enhance the customer experience and build relationships based on trust.

About the Author
Scott Clark

Scott Clark is a seasoned journalist based in Columbus, Ohio, who has made a name for himself covering the ever-evolving landscape of customer experience, marketing and technology. He has over 20 years of experience covering information technology and 27 years as a web developer. His coverage ranges across customer experience, AI, social media marketing, voice of customer, diversity & inclusion and more. Scott is a strong advocate for customer experience and corporate responsibility, bringing together statistics, facts and insights from leading thought leaders to provide informative and thought-provoking articles.

Main image: monticellllo on Adobe Stock Photo