May 17, 2023

AI Act Amendments (Joint committee vote 11/5/23)


It has been more than two years since the first draft of the EU AI Act was published. Since then, technology has moved forward at an ever-increasing speed, with innovations such as generative AI and foundation models gaining public attention and being applied in real-world AI solutions. Regulators have also been moving forward, and the proposed regulation has evolved significantly. In light of the joint committee vote in the EU Parliament last Thursday (11 May 2023), here is a short synthesis of what are, in our opinion, the most interesting amendments in last week's proposal:

Prohibited & high risk applications

The AI Act groups applications into four risk categories. In the latest proposal, some additional prohibited practices have been introduced. For example, emotion recognition will be fully banned in law enforcement, education, the workplace, and border management.

The list of high-risk applications has also been extended. For example, recommender systems used by social media platforms will be classified as high-risk due to their scale and potential to influence public opinion. Obligations for high-risk systems have also been extended, with additions such as a fundamental rights impact assessment and information on energy consumption within the technical documentation.

Foundation models & Generative AI

One of the major additions is a set of provisions on foundation models (such as GPT or Stable Diffusion), which did not yet exist when the first draft of the AI Act was published in 2021. Providers of AI foundation models will need to mitigate reasonably foreseeable risks, implement data governance measures, assess bias, and provide documentation. Providers of generative AI foundation models will additionally need to disclose that an output is AI-generated and make publicly available a summary of the use of training data protected under copyright law.

Open-source

Unlike the first draft from 2021, the amended proposal now explicitly mentions open source. The requirements stated above for foundation model providers apply equally to open-source foundation model providers. This has already given rise to a public debate on the implications for open source in Europe, should the proposal be adopted. Aside from foundation models, the current proposal states that the regulation does not apply to open-source AI components, unless they are placed on the market as part of a high-risk system.

After another vote in the Parliament in June, the proposal will be further discussed and adapted in trilogues, and the regulation is expected to come into force no earlier than 2025. At the same time, we believe that companies cannot afford to wait until the AI Act is final and in force. Those who want to play an active role in the trustworthy and positive use of AI need to act now: start implementing processes and practices today that account for the upcoming regulation as well as for your own values on ethical and robust AI.
