What is the AI Act? How does it apply to you?
What is the AI Act?
The European Commission first drafted the AI Act in April 2021 with the intention of setting out a common regulatory and legal framework for Artificial Intelligence. The AI Act will set out rules across all sectors (the legislation is ‘sector-agnostic’). In this blog post we analyze the implications of the final text of the AI Act and what it means for you as an organization that develops or implements an AI system in commercial products.
Key timelines of the AI Act
The final text of the AI Act was agreed between the European Parliament and the European Council on 8 December 2023. After the final agreement, the text was leaked on LinkedIn, providing us with a complete picture of the implications. The remaining steps are a final approval vote by the European Parliament in April and finalization of the legal text (e.g. finalizing section numbering, correcting mistakes and translating the text). Once translated, it will be published in the Official Journal of the European Union, and the AI Act will enter into force on the 20th day following publication. It is expected to enter into force in the middle of 2024, with a transition period of only two years. This means that all parties governed by the AI Act (manufacturers, deployers, importers, providers, authorized representatives) will need to fully comply with its rules within 2 years after the legislation comes into force. Organizations that will be required to obtain CE marking (such as those set out in Annex II Section A) are subject to a longer transition timeline of 3 years.
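To make these timelines concrete, here is a minimal sketch that computes the key application dates from an assumed entry-into-force date. The 1 July 2024 date is a placeholder assumption for illustration, not an official date.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Shift a date forward by whole months (valid for day-of-month
    # values that exist in every month, such as the 1st).
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

# Placeholder assumption: the AI Act is expected to enter into force mid-2024.
entry_into_force = date(2024, 7, 1)  # hypothetical date, not official

milestones = {
    "Prohibited AI practices must be off the market": add_months(entry_into_force, 6),
    "Generative AI / GPAI rules apply": add_months(entry_into_force, 12),
    "High-Risk AI compliance (general)": add_months(entry_into_force, 24),
    "Annex II Section A products (e.g. medical devices)": add_months(entry_into_force, 36),
}

for milestone, deadline in milestones.items():
    print(f"{deadline.isoformat()}  {milestone}")
```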
Prohibited AI
The first provisions of the AI Act will apply as soon as 6 months after the AI Act is published in the Official Journal of the European Union and enters into force. This means that any devices applying prohibited AI practices will need to be taken off the market within those 6 months.
The prohibited AI practices include, amongst others: ‘social credit scoring systems’, ‘emotion recognition systems at work and in education’, ‘AI exploiting people’s vulnerabilities’, ‘AI used to manipulate behavior’, ‘untargeted scraping of facial images for facial recognition’, ‘biometric categorisation systems using sensitive characteristics’, ‘predictive policing applications’ and ‘real-time biometric identification in public spaces for law enforcement’.
Generative AI
Recently, there has been a vast increase in the application of Generative AI (for example, OpenAI’s ChatGPT and Google’s Bard). Generative AI has the potential to affect large groups of people, potentially introducing risks to jobs, privacy, copyright protection and even human life itself. The European Commission therefore introduced additional rules for Generative AI after the initial draft of the AI Act was released. These led to fierce discussions between the European Parliament and the European Council. In the final trilogue discussions (which commenced on 6 December 2023), rules for Generative AI were eventually agreed upon; these will apply as soon as 1 year after the AI Act enters into force.
High-Risk AI
The AI Act differentiates between various levels of risk: ‘Unacceptable’ (prohibited AI), ‘High-Risk’ (strictly regulated), ‘Limited Risk’ (transparency obligations) and ‘Minimal Risk’ (not regulated). Timelines related to unacceptable risk (prohibited AI) and Generative AI are discussed above. For all other devices, and notably the strictly regulated High-Risk AI, there is a transition period of 2 years* to become compliant. Other organizational aspects of the AI Act may apply at an earlier date to facilitate its governance and effective implementation.
*Timelines for products listed in Annex II Section A are subject to a transition period of 3 years.
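As an illustrative summary only (not a legal classification tool), the four risk tiers and their headline obligations can be captured as follows:

```python
from enum import Enum

class RiskTier(Enum):
    # Illustrative summary only; a real classification requires a legal
    # assessment against Article 6, Annex II and Annex III of the final text.
    UNACCEPTABLE = "Prohibited AI practices - banned from the EU market"
    HIGH = "Strictly regulated - conformity assessment, QMS, technical documentation"
    LIMITED = "Transparency obligations only (e.g. Article 52)"
    MINIMAL = "Not regulated under the AI Act"

for tier in RiskTier:
    print(f"{tier.name:>12}: {tier.value}")
```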
High-Risk AI - Conformity Assessment
As explained in the previous section, there are different risk levels for AI devices. Article 6 of the AI Act clarifies the differentiation between the risk levels (displayed in Figure 1). AI systems that are governed by existing European legislation listed in Annex II Section A (e.g. the Medical Device Regulation 2017/745 and the In-Vitro Diagnostic Regulation 2017/746), and which undergo third-party conformity assessment under that legislation, are considered High-Risk by default.
Note: Class I medical devices do not commonly undergo third-party conformity assessment (there are exceptions) and therefore may not be considered ‘High-Risk’ under this rule.
Figure 1: Risk stratification within the AI Act. Image source: European Commission website.
AI systems referred to in Annex III are also considered High-Risk, unless the output is purely accessory in respect of the relevant action or decision to be taken and is not likely to lead to a significant risk of harm to the health, safety or fundamental rights of natural persons (subject to further criteria set out in Article 6).
Different requirements apply to High-Risk AI systems depending on how they are regulated in their sector. Table 1 below provides an overview of these requirements.
Table 1. Overview of conformity assessment for High-Risk AI
Assessment route | Applicable device requirements | Conformity assessment | Product examples |
---|---|---|---|
Annex II - Section A | Chapter 2 & 3 | Third-party assessment per Annex VII | Machinery, toys, personal watercraft, lifts, protection equipment in explosive environments, radio equipment, pressure equipment, cableway installations, PPE, appliances burning gaseous fuels, medical devices and in-vitro diagnostic medical devices |
Annex II - Section B | Articles 53 & 84 | None | Civil aviation security, two- or three-wheel vehicles and quadricycles, agriculture and forestry vehicles, marine equipment, interoperability railway systems, motor vehicles and their trailers |
Annex III - point 1 | Chapter 2 & 3 | Self-certification per Annex VI* or third-party conformity assessment per Annex VII | Biometric and biometrics-based systems |
Annex III - points 2-8 | Chapter 2 & 3 | Self-certification per Annex VI* | Safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas and electricity; AI systems used for education and vocational training; employment and workers management (e.g. HR recruitment and performance management software); access to public services (e.g. software to evaluate creditworthiness); law enforcement systems; migration, asylum and border control management; administrative systems applied in justice |
Title VIIIA - General-purpose AI systems | Title VIIIA Chapters 1 & 2 | Compliance with codes of practice / harmonized standards; additional obligations for high-impact capabilities | Classification per Article 52 of the AI Act, where a model may be classified as having ‘high-impact capabilities’. A threshold of 10^25 FLOPs is introduced for high-impact capabilities. |
More information about 10^25 FLOPs.
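A common rule of thumb for dense transformer training (an industry approximation, not a formula from the AI Act itself) estimates training compute as roughly 6 × parameters × training tokens. The sketch below uses purely hypothetical model sizes to show how such an estimate compares against the 10^25 FLOPs threshold:

```python
THRESHOLD = 1e25  # AI Act threshold for 'high-impact capabilities'

def training_flops(n_params: float, n_tokens: float) -> float:
    # Rule-of-thumb estimate for dense transformer training:
    # total compute ~= 6 * parameters * training tokens.
    return 6 * n_params * n_tokens

# Hypothetical model sizes, purely for illustration:
for name, n, d in [
    ("70B parameters, 2T tokens", 7e10, 2e12),
    ("1T parameters, 10T tokens", 1e12, 1e13),
]:
    flops = training_flops(n, d)
    verdict = "above" if flops > THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({verdict} the 10^25 threshold)")
```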
*Note: The self-certification route through application of Annex VI depends on the availability of harmonized standards and/or common specifications. Without harmonized standards, a provider will be required to undergo a third-party conformity assessment.
There is little purpose in discussing the requirements for the other risk classes, since such devices are either prohibited from the market, or the requirements are limited to transparency obligations (e.g. Article 52) for limited-risk AI.
Implementation of the AI Act in your organization
As a manufacturer of a device that is an AI system, that uses an AI system as a supporting technology, or that incorporates an AI system as a safety component, there are steps you can already undertake.
Assess whether your device will be impacted by the AI Act
The definition of an AI system used by the AI Act has already been made publicly available and is based on the OECD definition. The definition is provided below:
“An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
To determine whether your system may qualify as AI, the following needs to be considered: a) is the device a machine-based system, b) does it infer outputs such as predictions, content, recommendations or decisions, and c) can those outputs influence physical or virtual environments?
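As a naive screening aid only (a real determination requires legal review of the final text), these three criteria can be expressed as a simple conjunction:

```python
def qualifies_as_ai_system(machine_based: bool,
                           infers_outputs: bool,
                           influences_environment: bool) -> bool:
    # Naive screening against the three criteria above; a real
    # determination requires legal review of the final text.
    return machine_based and infers_outputs and influences_environment

# Example: a fixed lookup table is machine-based and influences its
# environment, but arguably does not 'infer' its outputs.
print(qualifies_as_ai_system(True, False, True))  # False
```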
If your device is considered an AI system, assess the indicated risk
The forbidden practices are clarified in the final text. If your device or parts of the device are considered forbidden practices, your organization may need to reconsider its strategy for the European market.
The AI system categories for High-Risk AI devices are clarified in Article 6; certain applications that fall under Annex III may no longer be considered ‘High-Risk’ in the final text if the criteria are met. All devices covered by Annex II Section A will by default be considered ‘High-Risk’ if they include an AI system. In addition, Article 52 clarifies how General Purpose AI (GPAI) models are classified and what requirements apply to such models (including high-impact GPAI models).
If your device is considered High-Risk, start informing management and monitor closely
If your device is considered a ‘High-Risk’ AI system or a GPAI model according to the AI Act, determine what set of requirements needs to be implemented. Depending on the nature of the AI system, either the requirements set out in Title III (Chapters 2 and 3) will apply to your device, or the GPAI requirements set out in Articles 52(a) (classification), 52(c) (GPAI) and 52(d) (systemic-risk GPAI). This does not apply to devices listed in Annex II Section B. In addition, you may be required to implement harmonized standards or common specifications, and/or obtain CE marking through a Notified Body.
Harmonized Standards
Upon publication in the Official Journal of the European Union, the European Commission will send a final standardization request to CEN/CENELEC, replacing the current draft request, asking it to bring forward standards that allow compliance with the AI Act’s requirements for High-Risk AI devices to be demonstrated. Firstly, it is advised to monitor these activities and the standards program of JTC 21, which will address the standardization request; secondly, you may want to monitor what other standards applicable to your sector are under development. Once it becomes clear that standards are in the process of being harmonized, start implementing those standards to ensure timely compliance (e.g. regarding quality management, risk management and technical documentation requirements).
In the field of medical devices, several documents are under development: ISO 24971-2 with guidance for risk management of AI-enabled medical devices, PT 63450 for testing methods applicable to AI under Test (AIuT), and PT 63521 for the performance evaluation of AI-enabled medical devices. It is uncertain whether these will support demonstration of compliance with the AI Act.
Notified Bodies
If you will need a third-party conformity assessment by a Notified Body (e.g. for devices covered by Annex II Section A), it is essential that you start discussions with Notified Bodies as soon as possible. The AI Act is expected to be published mid-2024, with a very short implementation period of 2 to 3 years (depending on whether the device is covered by Annex II Section A). This means that your device will need CE marking by 2026, or by 2027 for High-Risk AI devices listed in Annex II Section A (such as medical devices).
Notified Body capacity is not a given, and it will take time for Notified Bodies to organize themselves to be able to grant CE marking in the first place. In addition, if harmonized standards or common specifications are absent, the AI Act may default all Annex III (point 1) High-Risk AI to conformity assessment with a Notified Body.
With such a short transition period, it may not be feasible to grant CE marking to all High-Risk AI systems before the end of the transition period. In the medical devices field, we have seen multiple extensions to allow such transitions when the MDR and IVDR were introduced.
Internal Organization
Last but not least, the implementation of the requirements for a High-Risk AI system will require resources: resources that 1) understand what the regulations demand, e.g. with regards to risk management, quality management and technical documentation requirements, and 2) understand how AI systems function. Having resources available that address both these aspects will be crucial in your strategy to become compliant with the AI Act. Therefore, start informing your management team today, and budget the resources needed for becoming compliant in 2024 and for obtaining CE marking.
Start preparing now
Although the final text is still to be published in the Official Journal and is subject to a vote by the European Parliament, you can already start on your road to compliance. For example:
- Start documenting risks associated with the use of AI in a structured manner, along with the risk mitigations you have put in place, e.g. to address bias risks, risks to vulnerable groups, or risks associated with data drift;
- Start implementing Quality Management System documentation in a structured way; for example, have policies in place on how to manage data, and start documenting details about the data you make use of, e.g. its origin, representativeness, and the lawfulness of its acquisition and use;
- Start documenting how the AI system has been developed (e.g. AI-specific requirements, risks and test cases) and document your design considerations in technical documentation as required by Annex IV;
- Start documenting what information you wish, and most likely will need, to convey to your users (in an IFU);
- Implement measures that facilitate transparency of your AI system, human oversight measures and logging of the events associated with the AI system (see the sketch below).
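As a minimal sketch of what such event logging could look like (the field names here are illustrative assumptions, not prescribed by the AI Act):

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_system.events")

def log_inference(model_version: str, input_ref: str, output: str,
                  confidence: float, reviewed_by_human: bool) -> None:
    # One structured record per AI system output; field names are
    # illustrative assumptions, not prescribed by the AI Act.
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,  # a reference, not raw input data (privacy)
        "output": output,
        "confidence": confidence,
        "reviewed_by_human": reviewed_by_human,  # human-oversight trail
    }
    logger.info(json.dumps(event))

log_inference("v1.4.2", "case-00123", "referral recommended", 0.91, True)
```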
The information in this blog post attempts to give you insight into the basics of the AI Act and what you can already do as a manufacturer of an AI system to become compliant with the upcoming regulatory framework. It should be noted that the final text is yet to be published, standards are still to be developed, and guidance is yet to be provided.
Timelines are short; missing the given deadlines might put your organization at risk.
To learn more about our author, Leon Doorn, connect on LinkedIn.