AI Act - Moving forward

Movements within the AI Act (Artificial Intelligence Act) in Europe

At the beginning of the year, we published an article on the AI Act's development, shedding light on the advancements in shaping the regulatory framework for artificial intelligence. Fast forward to June, and the European Parliament unveiled its amendments to the draft text initially proposed by the European Commission. The stage was set for trilogue negotiations—a crucial phase where representatives from the Parliament, the Council, and the Commission converge to forge a consensus. In this blog post, we revisit the unfolding narrative of the AI Act, now that the negotiations have come to an end, exploring the amendments that could redefine the future of AI regulation in Europe and beyond.

Join us on a journey as we explore the latest developments that hold profound implications for the world of artificial intelligence.

AI Act and the result of the trilogues

The AI Act aims to establish a legislative framework safeguarding fundamental rights, democracy, the rule of law, and environmental sustainability from the potential risks associated with high-risk artificial intelligence. The goal is to strike a balance between protection and innovation, ensuring that advancements in AI are made responsibly and in compliance with the principles of democracy and fundamental rights by adopting a risk-based approach.

Based on the risk category, different requirements are set forth for AI systems. As mentioned in our previous post, medical devices containing AI would fall under the high-risk category. This means that they need to comply with the MDR as well as fulfill the requirements for high-risk AI systems under the AI Act.

A few of the results that came out of the trilogue negotiations are the following:

  • Certain systems will be banned

    • biometric categorisation systems that use sensitive information;

    • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;

    • emotion recognition in workplace and educational institutions;

    • social scoring based on social behaviour or personal characteristics;

    • AI systems that manipulate human behaviour to circumvent their free will;

    • AI used to exploit the vulnerabilities of people.

It's important to emphasize that the prohibitions outlined in the AI Act will take effect just six months after the legislation comes into force. This imposes a rapid timeline on companies with products falling into the specified categories: businesses will need a swift and strategic decision-making approach to ensure compliance.

  • High-risk AI systems must be designed and developed to manage biases effectively, ensuring they are non-discriminatory and respect fundamental rights.

  • Documentation of high-risk AI systems must include records of programming and training methodologies, data sets used and measures taken for oversight and control.

  • Human oversight is required for high-risk systems.

  • Sanctions have been defined, ranging from €35 million or 7% of global turnover down to €7.5 million or 1.5% of turnover, depending on the infringement and the size of the company.

What is next for the AI Act in Europe

The upcoming phase of the AI Act's evolution involves drafting the final text, incorporating the outcomes of the negotiations, before it is officially published. The AI Act is anticipated to be published in 2024 and provides a two-year transition period, making it applicable even before the extended deadlines for the Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR). Manufacturers of AI systems therefore face a time crunch: they'll need to quickly analyse the text, allocate resources, and promptly implement the necessary changes to ensure timely compliance with this impending legislation. That's why companies such as Limbus AI choose Matrix Requirements to help with their compliance needs—Limbus AI saves 95% of the time spent on test cases while releasing 5000 test cases at each new release cycle.

Matrix Requirements is a global leader helping innovative AI (Artificial Intelligence) companies remain focused on developing safer products faster. MatrixALM & MatrixQMS help to reduce the regulatory burden & ensure quality across the entire AI product lifecycle. Furthermore, we strive to stay on top of all changes related to this first-of-its-kind legislation. Stay tuned for our AI Act blog series, written by Leon Doorn. You can find the first article here to learn more about the AI Act in Europe!

About the Author
Ann Vankrunkelsven
RA/QA Manager