May 12, 2022

EU AI Act: what you need to know


The following article is an abbreviated version of "Hoe staat het met het voorstel voor de AI Act" ("What about the AI Act proposal?") by Agoria. The original article is available from Agoria in Dutch.

What about the AI Act proposal?

On April 21, 2021, the European Commission published its proposal for the ‘AI Act’, a regulation on artificial intelligence. Today, almost a year after its publication, there have been a number of developments in both the European Parliament and the Council. The AI Act is currently expected to come into effect in early 2025.

The AI Act in a nutshell

The AI Act is the first European-level regulation specifically aimed at artificial intelligence. The Commission realises that not all AI systems need to be extensively regulated and therefore takes a 'risk-based approach' to the legal framework: the (potential) risk of an AI application determines which rules will apply. The proposal defines four categories, each with an associated set of rules and requirements: “low and minimal risk”, “limited risk”, “high risk” and “unacceptable risk”.

The proposal is currently in the “trilogue phase”, with informal meetings between representatives of the European Parliament, the European Commission and the Council. All elements of the proposal are being re-examined, as it has provoked many divergent reactions. Once the trilogue has ended, the three bodies will bring their positions together to finalise the AI Act. A transition period, presumably of 24 months, will then follow.

Current developments within the Council

Among the adjustments currently proposed by the Council is a rewriting of the definition of AI. Several other topics were updated as well, for example:

The types of risk of an AI system, as well as risk management, were rewritten (Art. 9). In terms of data management, potential biases in datasets are now defined as those “that could affect the health and safety of individuals or lead to discrimination”. As datasets are rarely error-free or complete, the phrase “as good as possible” was added to soften the provisions. Lastly, when an AI system does not involve training, the provisions now apply only to the datasets used for testing (Art. 10).

The article on corrective actions was also updated: when users find an AI system to be non-compliant, providers must work with those users to investigate the issue (Art. 21). The provisions on human oversight were rephrased as well (Art. 29).

The articles concerning how, and for how long, documentation and logs must be kept were also updated (Art. 11-14, 16-18, 20). Articles 53 and 54 describe the objectives that regulatory sandboxes should contribute to, their financial conditions, and various rules around coordination of and participation in sandboxes. Lastly, new articles were added describing how to test an AI system in sandbox or real-world conditions, how to supervise such tests, and how to obtain the consent of test subjects.

For a more extensive list of the proposed adjustments, please refer to the original article by Agoria (in Dutch).

 
