EU votes to ban riskiest forms of AI and impose restrictions on others

Europe’s AI Act

Lawmaker hails “world’s first binding law on artificial intelligence.”

[Illustration of a European flag composed of computer code. Credit: Getty Images | BeeBright]

The European Parliament today voted to approve the Artificial Intelligence Act, which will ban uses of AI “that pose unacceptable risks” and impose regulations on less risky types of AI.

“The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases,” a European Parliament announcement today said. “Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities will also be forbidden.”

Violations of the ban on certain AI applications carry penalties of up to 35 million euros or 7 percent of a firm’s “total worldwide annual turnover for the preceding financial year, whichever is higher.” Violations of other provisions carry lower penalties.
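To make the “whichever is higher” rule concrete, here is a minimal sketch in Python; the function name and the example turnover figure are illustrative, not from the legislation:

```python
def max_fine_for_banned_practice(annual_turnover_eur: float) -> float:
    """Upper bound on fines for prohibited AI practices under the AI Act:
    the greater of 35 million euros or 7 percent of total worldwide
    annual turnover for the preceding financial year."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# For a firm with 1 billion euros in annual turnover, 7 percent
# (70 million) exceeds the 35 million floor, so the ceiling is 70 million.
print(max_fine_for_banned_practice(1_000_000_000))  # 70000000.0
```

In effect, the flat 35 million euro figure acts as a floor for smaller firms, while the 7 percent clause scales the ceiling up for larger ones.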

There are exemptions to allow law enforcement use of remote biometric identification systems in certain cases. A European Commission summary of the legislation said:

All remote biometric identification systems are considered high-risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.

Narrow exceptions are strictly defined and regulated, such as when necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence.

“Strict obligations” for high-risk AI

The AI Act was supported by 523 members of the European Parliament (MEPs), while 46 voted against and 49 abstained. The legislation classifies AI into four categories of risk: unacceptable risk, high risk, limited risk, and minimal or no risk.

“High-risk AI systems will be subject to strict obligations before they can be put on the market,” the legislation summary said. Obligations include “adequate risk assessment and mitigation systems,” “logging of activity to ensure traceability of results,” “appropriate human oversight measures to minimise risk,” and other requirements.

The law drew opposition from the Computer & Communications Industry Association, a tech-industry lobby group.

“The agreed AI Act imposes stringent obligations on developers of cutting-edge technologies that underpin many downstream systems, and is therefore likely to slow down innovation in Europe,” the group said when negotiators reached a deal on the law in December 2023. “Furthermore, certain low-risk AI systems will now be subjected to strict requirements without further justification, while others will be banned altogether. This could lead to an exodus of European AI companies and talent seeking growth elsewhere.”

The law will officially be on the books 20 days after its publication in the Official Journal, the European Parliament announcement said. The ban on prohibited practices will apply six months after that, but other regulations won’t take effect until later. The “obligations for high-risk systems” will only take effect after 36 months, the announcement said.
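As a rough sketch of that schedule, the key dates can be computed from the publication date, which was not yet known at the time; the date below is a placeholder, and the third-party python-dateutil package is used for month arithmetic:

```python
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

publication = date(2024, 7, 1)  # placeholder Official Journal publication date
entry_into_force = publication + timedelta(days=20)            # law on the books
bans_apply = entry_into_force + relativedelta(months=6)        # prohibited practices
high_risk_rules = entry_into_force + relativedelta(months=36)  # high-risk obligations

print(entry_into_force, bans_apply, high_risk_rules)
# 2024-07-21 2025-01-21 2027-07-21
```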

“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency,” said MEP Brando Benifei, the Internal Market Committee co-rapporteur. An AI office will be formed “to support companies to start complying with the rules before they enter into force,” he said.

Risky AI categories

Examples of high-risk AI include AI used in robot-assisted surgery; credit scoring systems that can deny loans; law enforcement uses that may interfere with fundamental rights, such as evaluation of the reliability of evidence; and automated examination of visa applications.

The limited-risk category has to do with applications that aren’t transparent about AI usage. “The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust,” the European Commission said. “For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can take an informed decision to continue or step back. Providers will also have to ensure that AI-generated content is identifiable.”

AI-generated text that is “published with the purpose to inform the public on matters of public interest must be labelled as artificially generated,” and this requirement “also applies to audio and video content constituting deep fakes.”

AI with minimal or no risk “includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category,” the commission said. The law imposes no restrictions on this category.
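Pulling the four tiers together, a simple lookup table (a sketch; the tier labels are abbreviated, and the examples are the ones cited above) summarizes the classification scheme:

```python
# The AI Act's four risk tiers, with examples drawn from the text above.
RISK_TIERS: dict[str, list[str]] = {
    "unacceptable (banned)": [
        "social scoring",
        "emotion recognition in the workplace and schools",
        "untargeted scraping of facial images for recognition databases",
    ],
    "high (strict obligations)": [
        "robot-assisted surgery",
        "credit scoring that can deny loans",
        "automated examination of visa applications",
    ],
    "limited (transparency obligations)": [
        "chatbots that must disclose they are machines",
        "AI-generated content that must be identifiable",
    ],
    "minimal or none (unrestricted)": [
        "AI-enabled video games",
        "spam filters",
    ],
}

for tier, examples in RISK_TIERS.items():
    print(f"{tier}: {', '.join(examples)}")
```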