
Why the EU AI Act Will Influence Marketers' Next Act for Data Privacy

By Pierre DeBois
Explore the impact of the upcoming EU AI Act on data management and AI governance. Learn how it sets a global precedent and what U.S. businesses should expect.

The Gist

  • Global implications. The EU AI Act is set to become a benchmark for AI regulation, affecting U.S. businesses, too.
  • Legislative shifts. No federal AI legislation exists in the U.S., but state-level initiatives are gaining traction.
  • Marketing alert. AI in marketing faces increased scrutiny; CMOs must navigate new legal complexities.

In the era of data privacy, marketers are increasingly vigilant about the next wave of AI legislation and its impact on their data management strategies. Attention is shifting to Europe as the EU AI Act nears final approval. Advanced by a European Parliament vote in June 2023, the AI Act is the first comprehensive legislation specifically targeting artificial intelligence.

The EU AI Act impact will not only shape data collection and usage for organizations committed to privacy measures but also set a precedent for AI legislation in global territories, including the United States.

What Is the EU AI Act?

The EU AI Act serves as a framework for managing AI systems for organizations operating within the EU, with varying levels of regulatory scrutiny. Designed to promote safe AI practices, the act identifies uses that pose unacceptable risks and specifies conditions requiring human oversight. It subjects high-risk systems to stringent requirements while imposing only minimal transparency obligations on low-risk systems. This approach aims to safeguard the public from systemic harms and to ensure transparency in privacy and data protection compliance.


A Broad Territorial Scope

The EU AI Act has a broad territorial scope, applying to providers that place AI systems on the EU market, whether they operate from within the EU or from another country. The legislation categorizes AI applications by their degree of risk to social and civil rights, delineating three distinct risk levels.

The First Category

The first category calls for banning applications and systems that pose an unacceptable risk. Examples include government-run social scoring systems similar to those used in China, real-time facial recognition in public spaces, emotion recognition systems and other methods deemed too risky.

The Second Category

The second category imposes specific legal requirements on high-risk applications, such as a curriculum vitae (CV)-scanning tool used to rank job applicants.

The Third Category

The third category encompasses applications that are neither explicitly banned nor designated as high-risk; these are largely left unregulated.

Embedded Objectives

The objectives embedded within the AI legislation align closely with the fundamental principles of the General Data Protection Regulation (GDPR). However, the fines under the AI Act exceed those of the GDPR: up to €30 million or 6% of global annual revenue, whichever is higher, versus the GDPR's ceiling of €20 million or 4% of global annual turnover.

Related Article: AI, Privacy & the Law: Unpacking the US Legal Framework

History of AI Legislation in the US (So Far)

The U.S. is seeking an appropriate approach to AI governance that emphasizes transparency, accountability and privacy, while aligning with its legal framework. Current activity indicates that the United States aims to establish legal guidelines distinct from EU directives.

Two Proposals

However, no federal legislation specifically dedicated to AI regulation has been enacted. Two proposals, the Algorithmic Accountability Act and the American Data Privacy and Protection Act (ADPPA), would require organizations to evaluate the potential high-risk impact of AI systems on individuals. Neither has been passed into law, although some experts believe the ADPPA is likely to be reintroduced in Congress.

Voluntary Commitments

Meanwhile, President Joe Biden convened seven leading U.S. AI companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — to secure a series of voluntary commitments aimed at fostering responsible AI development and usage. These commitments include collaborating with the government and other stakeholders to establish standards and best practices. Additionally, the Biden administration announced the formation of an AI advisory committee to guide the government in developing and implementing responsible AI policies and programs. However, some critics argue that these voluntary measures fall short in addressing potential social risks associated with AI, such as bias, discrimination and job displacement.

State-Level AI Legislation

AI legislation is also under consideration at the state level, where it is in various early stages of adoption. The Electronic Privacy Information Center (epic.org) reported that six local AI-related laws have been enacted, all of them amendments to existing data privacy laws. These newly enacted frameworks serve specific purposes; for example, New York City's bias audit law regulates AI usage in employment decisions and applies specifically to employers. In contrast, the EU AI Act targets a broader range of AI users and developers. The California Privacy Rights Act, an amendment to the California Consumer Privacy Act, introduces provisions such as additional data retention limitations.

California’s Data Privacy Legislation

California's data privacy legislation is noteworthy, as it incorporates influential elements from the GDPR. The GDPR mandates that organizations implement measures to safeguard personal data and ensure individuals have control over their information, including the right to access, rectify and erase their personal data.

AI & Personal Data Risk

The GDPR has drawn attention to data and associated systems, including analytics. With AI, the risk of processing personal data without individuals' knowledge or consent is heightened compared to standard analytical processes. The potential for rapid, automated decision-making poses significant implications for individual rights.

Organizational Scrutiny

Organizations must scrutinize their legal basis for processing personal data in AI applications, whether by obtaining explicit consent or by demonstrating legitimate interests. The advent of open-source large language models (LLMs) is encouraging organizations to train models on their own company data for specific use cases. Developing these domain-specific language models also requires organizations to implement technical and organizational measures to ensure the security and confidentiality of any personal data used in their homegrown AI models.
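
As a minimal sketch of one such technical measure — a simple preprocessing step assumed for illustration, not a method the act prescribes — the Python snippet below redacts common personal identifiers, such as email addresses and phone numbers, from text before it enters a fine-tuning corpus. A production pipeline would pair this with dedicated PII-detection tooling and human review rather than relying on regular expressions alone.

    import re

    # Illustrative patterns for two common kinds of personal data.
    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact_pii(text: str) -> str:
        """Replace detected emails and phone numbers with placeholder tokens."""
        text = EMAIL_RE.sub("[EMAIL]", text)
        text = PHONE_RE.sub("[PHONE]", text)
        return text

    sample = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
    print(redact_pii(sample))  # Contact Jane at [EMAIL] or [PHONE].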

Advocating for Standardized Guidelines

Companies offering AI solutions are reacting by advocating for standardized guidelines that consider a broad range of stakeholders. GitHub, along with other purveyors of open-source AI such as Hugging Face, submitted a paper to the EU outlining ways to better support open-source AI within the framework of the EU AI Act. The regulatory suggestions aim to assist researchers and hobbyists who have no commercial usage, proposing that rules for noncommercial, open-source development be distinguished from those governing commercial technology.

A Single Standard

This suggests an environment in which the U.S. is likely to incorporate elements of the EU AI Act into its own AI legislation, whether at the federal or state level. Alex Engler, a fellow in governance studies at the Brookings Institution, noted in "Decoding the EU Artificial Intelligence Act," a panel discussion hosted by Stanford HAI, that companies would prefer to adhere to a single standard rather than incur the substantial costs of meeting two divergent, far-reaching regulations.

Data Privacy Trend

Evidence of this trend is already apparent in data privacy, as many U.S. states have modeled their data privacy regulations after the GDPR.

The Complexity Underlying AI

Interest is likely to grow as the complexity underlying AI becomes more rigorously examined, if not better understood. James Grimmelmann of Cornell Tech, speaking on a New York University panel exploring AI issues, discussed how some copyright rationales change in the context of AI, citing the Google Books case, which examined whether Google's search engine could index books and display excerpts. In that instance, the search engine was not competing with authors for the source material but merely helping readers find it, and thus did not infringe on a creator's copyright. In contrast, AI poses a potential challenge to copyright because of its ability to recreate media and content with remarkable accuracy.

Related Article: Generative AI: Exploring Ethics, Copyright and Regulation

Other Ways Navigating AI Legislation Will Impact Innovation

Businesses utilizing AI — which these days includes nearly everyone — will face additional implications from the EU AI Act. For starters, the legislation mandates that AI developers establish safeguards to prevent their systems from generating illegal content. This is particularly noteworthy given the ongoing legal challenges over the use of copyright-protected content in training datasets without explicit permission. Under the act, users of AI must know the permissions associated with their training data. The act would also prohibit companies from scraping biometric data from social media to build databases.

Partner AI Usage

It also entails understanding how your partners are utilizing AI. Imagine operating a platform enhanced with generative AI-supported features, whether the model is provided by Bard, ChatGPT, Claude, or developed from open source. In such a scenario, an organization would need to conduct transparency audits to inform content creators that their content or media has been used to train algorithms.
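
As a rough sketch of what such an audit trail might capture — with hypothetical field names chosen for illustration, not drawn from the act itself — the Python snippet below records provenance metadata for each item in a training corpus, including its source, license and consent status, and saves the manifest so it can be produced during a transparency review.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class TrainingDataRecord:
        """Hypothetical provenance entry for one training corpus item."""
        source_url: str         # where the content came from
        license: str            # license or terms under which it was obtained
        creator_notified: bool  # whether the content creator was informed
        consent_obtained: bool   # whether explicit permission was granted

    records = [
        TrainingDataRecord(
            source_url="https://example.com/article-123",
            license="CC-BY-4.0",
            creator_notified=True,
            consent_obtained=True,
        ),
    ]

    # Persist the manifest so it can be produced for a transparency audit.
    with open("training_data_manifest.json", "w") as f:
        json.dump([asdict(r) for r in records], f, indent=2)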

Legal Liability Unclear

Data plays a significant role, but like many aspects of AI, its role in legal liability remains unclear. In AI, data is part of a complex supply chain in which various actors influence different elements. Business leaders should ask which element in that chain bears responsibility when harm or malicious use occurs; answering that question helps identify where infringement should be prevented.

Far From Straightforward

But answering that question, like many issues in technology, is far from straightforward. With current AI systems, the lines between elements are blurred. Grimmelmann illustrated this with an example involving defamation. If an AI system states that a famous actor is shooting up a building, is it the model or the prompt that should be flagged for defamation? Grimmelmann further noted that as AI transitions from the periphery to mainstream attention, people are increasingly seeking AI guidance in real-world situations where they perhaps shouldn't.


Approval Imminent

Approval of the EU AI Act is anticipated in early 2024. Fortunately for marketers, a transition period is expected to allow for compliance adjustments before the Act's enforcement date.

Related Article: Safeguard Generative AI to Protect Customer Privacy

Final Word on AI Legislation

Companies aim to create personalized experiences for each customer to deliver meaningful interactions. As updating those experiences and managing associated behavioral data increasingly involve AI solutions, it becomes essential for CMOs and marketing teams to be aware of potential legal issues posed by the EU AI Act and other AI legislation. It's a must-do requirement, not just a nice-to-have effort.

About the Author
Pierre DeBois

Pierre DeBois is the founder and CEO of Zimana, an analytics services firm that helps organizations achieve improvements in marketing, website development, and business operations. Zimana has provided analysis services using Google Analytics, R Programming, Python, JavaScript and other technologies where data and metrics abide.
