
Why Tech Companies Will Likely Ignore the EU’s AI Act

By David Barry
Despite optimism surrounding the proposed AI regulation coming out of the EU, many vendors will likely just ignore it. Here's why.

On June 14, the European Parliament passed its draft AI guidelines, one of the first major laws seeking to regulate artificial intelligence. The vote was only one in a series of steps the law will go through before being finalized, but it set off a wave of reactions.

Stanford University researchers compared the current generative AI models of the major tech companies — think OpenAI, Meta, Google — against the draft AI requirements and found that no company currently meets the regulations, and that most are unlikely to ever fully comply.

The Reaction to the EU AI Draft Proposal

Under the EU’s AI proposals, tool developers will have to identify AI-generated content and summarize copyrighted data used in training, among other steps. Most providers don’t currently do this and may choose to ignore the guidelines rather than spend millions trying to become compliant.

Unsurprisingly, the proposal was met with a negative reaction in the U.S. tech world, but some European businesses and organizations shared the sentiment. On June 30, 160 executives signed an open letter criticizing the law. Among them were some of Europe's biggest business leaders, from companies including Renault, Heineken, Airbus and Siemens.

The letter, first reported by the Financial Times of London, stated the current regulation could “jeopardize Europe’s competitiveness and technological sovereignty.”

France's former digital minister Cedric O helped organize the letter. In an interview with Reuters, he said the signatories object to the current version because it marked a move by the European Parliament from a risk-based approach to a technology-based approach.

The letter warned that under the proposed EU rules, technologies like generative AI would become heavily regulated and companies would take their know-how and money offshore to continue doing business.

Dragos Tudorache, who co-led the drafting of the EU proposals, said in the same Reuters article that the signatories had not actually read large parts of the law.

"I am convinced they have not carefully read the text but have rather reacted on the stimulus of a few who have a vested interest in this topic," he said, adding that the suggestions made in the letter are already in the draft legislation.

Nevertheless, the letter and the surrounding discussion go to the heart of a problem that will have to be resolved before generative AI can become a trusted business tool.

Related Article: How Generative AI May Fit Into Your Organization

EU Law Paves the Way for Future Legislation

Two or three forces are at play here, said Matt Aldridge, principal security consultant at OpenText, and as things stand it is difficult to determine the result in the short term.

On the one hand, he said, we have remarkably rapid innovation in AI, backed by some of the largest corporations in the world, which is having a significant impact on the technologies and capabilities reaching the market.

But, he continued, this comes hand in hand with significant risks: to individual freedoms, to legal compliance for copyright holders, to the wider job market and, potentially, to the stability of entire economies.

Aldridge argues that we need regulation and legal frameworks to ensure future stability and to build a solid framework on which to build future AI-based commercial services. The challenge, however, is for proposed legislation to keep pace with the exponential growth in AI capabilities we are currently witnessing. "If a legal framework is not practical to implement, it cannot realistically be enforced. The EU is making great efforts to ensure that their AI act is both comprehensive and future-facing, and from the studies made by the team at Stanford it appears that compliance is an achievable goal for the existing key players in this space.”

What remains to be seen is how motivated organizations are to comply with these standards, and to what extent these initiatives will have an impact on their plans and operations, he continued.

He believes that from a corporate brand protection point of view, companies will likely prioritize addressing the requirements of the AI Act.

It is also highly likely that other legislation will appear around the world that draws on the lead the EU is taking, he added, and over time a common baseline of legal requirements will emerge for global compliance.

"If I am correct that the key players will want to comply, then the question is how long will this take and where will this feature among their commercial priorities?" Aldridge said. "I am optimistic that leaders will want to comply with these regulations, but I suspect that timings of doing so will vary dramatically. We may need to see some further evolution of the act to get everyone fully on board with it.”

Related Article: Are You Giving Employees Guidelines on Generative AI Use? You Should Be

Regulation vs. Innovation Is a 'False Dichotomy'

Greater public awareness and debate around the ethical use of AI will help ensure it becomes a positive force in our economy and society, said Peter van der Putten, director of Pega’s AI Lab. Both the public and tech firms will benefit from good governance like the EU AI Act.

Some of the technology players have argued that regulations stifle innovation or have even stated they may no longer operate in certain markets and/or refuse to comply. He believes this regulation versus innovation framing is a false dichotomy.

“These companies cited in the recent Stanford study — and the broader technology industry in general — need to understand that there is no long-term sustainable future for irresponsible use of AI, and sensible AI regulation will reward and foster innovation of trustworthy AI that benefits both consumers and companies alike,” he said.

Van der Putten argues it is possible to have both regulation and innovation, and these new regulations can be remarkably effective with proper buy-in. "All of these companies need to work together to ensure an innovative and ethical future for tech companies and consumers alike when it comes to AI. Otherwise, we are right back to where we started — if not going further backwards — before the act passed,” he said.

Compliance should not be seen as binary, he continued. Foundation models are trained on millions of documents and billions of tokens. In terms of disclosing what data is used for training, the AI Act is not asking for exhaustive lists of documents, but rather for descriptions of the high-level corpora used or scraping tactics applied, and of what steps were taken to stay within fair use of copyrighted content.


For the earlier versions of these foundation models, the large tech players had no issue disclosing this. It is only to be expected, now that competition is increasing, that providers become more tight-lipped, but that is why we need rules, he pointed out.

“You want to reward — not punish — the players that provide this information, and you can only enforce that with basic and reasonable rules that everyone knows and needs to comply with. The requirement to disclose that certain responses or interactions are AI-driven is also reasonable — they do not apply to use cases such as showing a simple recommendation or message, where it can be implied that it is coming from an automated system .... All of this is required to rule out situations when the impression is given that you are talking to a human, when you are not. That is not just a legal requirement — I would call that a transparent way of doing business,” he concluded.

Related Article: Ready to Roll Out Generative AI at Work? Use These Tips to Reduce Risk

The Problems with Regulation

The EU AI Act offers a solid foundation, but limitations and further details still need to be addressed, said Kamales Lardi, a digital and business transformation consultant.

The text takes a traditional regulatory and compliance approach (risk-based and horizontal) to the dynamic and rapidly changing generative AI landscape, she said. Lardi believes regulators will continue to play a catch-up game in this environment or end up limiting the potential innovative value of generative AI.

Companies may need to make substantial changes to their data collection and management practices to meet the new data privacy standards set by the legislation, she said. Furthermore, the EU AI Act may also stimulate innovation in the AI industry, with companies racing to develop more ethically sound and transparent AI algorithms that meet the new regulatory guidelines.

She added that the Act will also mandate that companies conduct risk assessments and carry out human oversight of AI systems. Companies that fail to comply with its provisions may face severe penalties, including fines or even legal sanctions, which may force them to invest in compliance measures and expertise to ensure they operate lawfully and ethically in their AI-based business operations.

The law raises other practical problems, according to Lardi, who said it fails to sufficiently address copyright, including acknowledging the ongoing debate over where copyright boundaries lie.

However, she also believes the law may open up new opportunities for business models and offerings, with companies or consultancies positioning themselves as authorized representatives in the EU.

“This will be an interesting development .... Ultimately, the businesses that successfully adapt to the new regulatory environment will be the ones that thrive in the era of AI,” she said.

Legislation and Innovation?

Andrew Pery, AI ethics evangelist at intelligent automation company ABBYY, noted that the recently approved compromise text of the EU Artificial Intelligence Act is now in the Trilogue negotiation phase between the EU Commission, the EU Parliament and the Council of the European Union — meaning nothing is set in stone as of now.

It's unclear to what extent the EU Artificial Intelligence Act, once ratified, will impose much-needed guardrails against the potentially harmful impact of foundational models, he said. Legislation always tends to lag the pace of technological innovation, he continued, saying it will take at least 12 to 24 months to operationalize the Act.

The Act, he added, also requires adherence to third-party standards-based risk management frameworks. Such standards, like CEN-CENELEC and NIST, are still in the early stage of development. In the meantime, conformance testing will be left to internal audit processes. More to the point, foundational models are constantly evolving and have known deficiencies and, as in the case of other disruptive technology, there is a propensity for developers to adopt a 'release now and fix later' approach to gain first mover advantage.

He cited foundational models as an example: they currently lack effective filters to remove inappropriate content or to mitigate the systemic biases baked into the training data, 60% of which was scraped from the internet's knowledge base up to 2021.

The Stanford analysis noted these deficiencies, showing that all foundational models fall short of the EU Artificial Intelligence Act's requirements. “Ultimately, the utility of foundational AI systems will depend on contextualizing use cases by applying rigorous data governance strategies to mitigate bias, inaccurate results, and harmful content,” Pery said.

About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management, enterprise content management and content services. Now, with the development of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communications and work and has recently spent a great deal of time exploring the far reaches of AI, generative AI and General AI.

Main image: Guillaume Périgois | unsplash