Artificial intelligence (AI) has been stirring up human fears, dreams, promises, stock markets and technological innovation for years. Will it destroy humanity? Is it stealing our intellectual property? Will it take our jobs? Can it make our jobs easier?
Recently, the European Union (EU) took it upon itself to propose a set of regulations to put guardrails in place for the development and use of AI technology to protect consumers — and humanity — from AI’s potential risks. The EU’s “AI Act” is part of a package of policies intended to support the development of AI while protecting human rights, safety and ethical principles. The package also includes the “AI Innovation Package,” which encourages ethical development of AI, and the “Coordinated Plan on AI,” which aims to spur investment in AI.
This early entry into the legal landscape of AI could well become the de facto global standard for governing AI. So if you work in AI development or use AI in your company and hope to do business in Europe, you need to understand what the “AI Act” aims to do.
Hopefully, this law and others like it will make AI safer, protect humans from its potential dangers, guard against bias and allow us to happily build, use and enjoy these brilliant AIs to teach us, make our work easier and take over tasks none of us want to do.
When Was the EU 'AI Act' Approved?
The European Parliament passed the “AI Act” on March 13, 2024.
The law is scheduled to take effect later in 2024, with most of its obligations phasing in over the following one to three years.
Why Did the EU Approve the 'AI Act'?
The goal of the legislation is to ensure that Europeans can trust what AI offers. The package recognizes that most AI poses little to no risk, but some AI has the capacity to create dangers and unacceptable risks for humans. The legal framework defines the levels of risk presented by AI and prohibits some of AI’s most dangerous practices.
By approving this act, the EU hopes to prevent scenarios in which people are denied jobs, loans, asylum, an education and other rights by an AI, with no way of knowing why the system made its decision.
Currently, it is difficult for the average consumer to know whether the videos and content they see, or the decision-maker behind their job and loan applications, is human or AI. This makes the internet and the world confusing, potentially dangerous and peppered with bias that could lock people out of opportunities for loans, housing, education and health care based on their race, accent, economic background and other factors the AI is, unknown to users, filtering for.
There are also existential fears about AI. It could be a disruptive technology that improves all our lives. Or, if it’s allowed to get too powerful, it could, like HAL in “2001: A Space Odyssey,” turn on us and try to eliminate us. High-level technologists, including the computer scientists at the forefront of AI development, have issued warnings about this potential. “These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people,” said Geoffrey Hinton, one of the pioneers of modern AI, speaking at MIT Technology Review’s EmTech Digital conference. “Even if they can’t directly pull levers, they can certainly get us to pull levers.”
When OpenAI’s ChatGPT exploded into the technology landscape, tech leaders, legislators, governments and companies all over the world felt it was time to set up some legal protections before the technology went much further.
Where Is the EU 'AI Act' Enforced?
The law applies to customer-facing products that incorporate AI, AI-specific tools and AI-generated content offered in the EU.
Compliance could be a complex endeavor, depending on the tools you use or build. In the global age of the internet, it can be difficult to track where your tools are used, so this act has global consequences, much like the EU’s GDPR, which also affects technologies and companies well outside the EU.
Who Does the EU 'AI Act' Apply To?
The act directly affects companies operating in the EU: AI developers, companies that use AI to create content or communicate with customers, and manufacturers or anyone else building products that use or consist of AI.
But given the global economy, this can also include companies that are not based or operating in the EU. Companies serving the EU market with AI-created products, content or tools, regardless of where they are located, also need to bring any AI tools, services or models into compliance.
Your leadership should form a team to determine what you need to do to comply with the act. Many companies have already formed ethical AI risk and responsibility programs, and those companies will have an advantage here. This act is a significant step in regulating AI to be ethical and responsible, and it has teeth. If you haven’t already created a committee to deal with AI ethics, this is likely the event that forces you to create one.
How Does the EU 'AI Act' Work?
The “AI Act” includes stiff penalties for companies that don’t comply with the law, with fines reaching up to 35 million euros or 7 percent of global annual turnover for the most serious violations, according to the IAPP.
The act enables the EU to perform numerous regulatory actions related to AI, according to the European Commission, such as:
- Address risks created by AI applications
- Prohibit practices that pose unacceptable risks
- Establish a list of high-risk applications
- Set clear requirements for AI systems that engage in high-risk applications
- Define the obligations deployers and providers of high-risk AI applications have to society
- Require an assessment before any AI is deployed
- Put enforcement in place after a given AI system is placed on the market
- Establish a governance structure at the European and national levels
When the risk is high, the law applies more controls. High-risk applications include critical infrastructure like transportation, as well as education, product safety components, employment, essential public services, law enforcement, migration, border control and asylum, and the administration of justice and democratic processes.
When the risk is limited, the rules call for transparency, so consumers are aware an AI was involved in the creation of the content they’re consuming. Limited-risk applications include chatbots, AI-generated content and deepfake audio and video.
The regulations impose few requirements on minimal-risk applications, such as AI-enabled video games and spam filters.
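To make the tiered structure concrete, here is a minimal sketch, in Python, of how a compliance team might triage its own systems against these risk categories. Everything here is hypothetical: the RiskTier values, the example use cases and the triage helper are illustrations drawn from the tiers described above, not an official classification under the act.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers sketched from the EU AI Act, most to least regulated."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements and conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "few or no obligations"

# Hypothetical mapping of example use cases to tiers, based on the
# categories described in this article; not an official classification.
USE_CASE_TIERS = {
    "hiring and employment decisions": RiskTier.HIGH,
    "border control and asylum": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "ai-generated marketing content": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Look up a use case; default to HIGH so unknown systems
    get reviewed rather than waved through."""
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("customer service chatbot", "hiring and employment decisions"):
        tier = triage(case)
        print(f"{case}: {tier.name} ({tier.value})")
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative design choice, since under-classifying a system is the costly mistake under this kind of regime.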
Some uses of AI are completely exempt from the reach of the act, according to IBM, such as:
- Purely personal uses
- Models and systems developed solely for military and national defense
- Models and systems used only for research and development
- Free, open-source, low-risk AI models that publicly share their parameters and architecture (these are exempt from most, but not all, of the act’s rules)