Managing Quality under the AI Act

The heaviest obligations the AI Act places on providers of High-Risk AI systems are the preparation of the Technical Documentation (discussed in the previous blogpost), the execution of a Conformity Assessment procedure (for AI systems covered by Annex II Section A, per Article 43.3) and the implementation of a Quality Management System (QMS).

Organizations covered by Annex II Section A may already be required to implement a Quality Management System; for providers covered under Annex III, however, the requirement may be new.

As set out in recital 54 and Article 17.2a (likely to become 17.3) of the final leaked text, which was approved by the Council on February 2nd, the Act allows organizations covered by Union harmonized legislation set out in Annex II Section A (e.g. the Medical Device Regulation (MDR) and the In-Vitro Diagnostic Medical Device Regulation (IVDR)) to decide how compliance is demonstrated, to avoid unnecessary administrative burden or costs. The European Commission will provide further guidance to clarify the interaction between standards covered by the AI Act and standards used to demonstrate compliance under Annex II Section A legislation (e.g. the MDR and IVDR). For such organizations this may come as good news, since it clarifies that the QMS requirements may be integrated into the existing Management System (e.g. ISO 13485), and potentially allows for a single audit of the QMS to demonstrate compliance with the regulation.

Similar to the Technical Documentation, the final text clarifies (recital 74a) that for SMEs and micro enterprises it may be sufficient to implement a Quality Management System in a simplified manner. The European Commission is tasked with developing guidelines on the specific elements of such a simplified Quality Management System.

What the AI Act expects in terms of Quality Management

For all organizations that provide AI Systems ('providers'), the implementation of a quality management system in accordance with Article 17 is mandatory. The definition of a provider is as follows, and covers more than product manufacturers:

‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general purpose AI model or that has an AI system or a general purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge; 

Similar to other Quality Management System frameworks, the AI Act requires providers to implement written policies, procedures and instructions. The aspects displayed in Table 1 should, at a minimum, be covered by the Quality Management System. Where such policies, procedures and instructions are implemented, they should set out an accountability framework clarifying the responsibilities of management and members of staff.

Table 1. Overview of QMS requirements

| Pre-market | Post-market | Continuous |
| --- | --- | --- |
| Strategy for Regulatory Compliance | Reporting of Serious Incidents | Document and Record control |
| Design Control and Verification procedures | Communication with Authorities* | Data Management procedures |
| Technical Specifications (e.g. harmonized standards) | Quality Control and Assurance procedures* | Risk Management procedures |
| Examination, Test and Validation procedures | Post-Market Monitoring procedures | Resource Management (including security of supply) |

*Important in both Pre- and Post Market activities

As noted in the introduction, the QMS needs to be proportionate to the size of the provider's organization. For SMEs and micro enterprises a simplified QMS may be sufficient, which is yet to be clarified in guidance from the European Commission.

Quality during design and development

As can be observed from Table 1, the Quality Management System will lay down requirements for the AI System from the early pre-market research stages up to post-market monitoring and decommissioning. As an organization, it is crucial to understand and define the full life-cycle stages of your AI system (refer to IEC 5338 for high-level guidance).

When development of an AI algorithm first starts, organizations will need to ensure that they capture the decisions that are made. Annex IV (Section 2(b)), for example, requires organizations to document the following (one way to record this is sketched after the list):

  1. the key design choices, including the rationales and assumptions made; 

  2. the main classification choices, and 

  3. what the system is designed to optimize for.
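For teams that prefer to capture these decisions as part of the development workflow rather than in a separate document, a lightweight, machine-readable record can feed directly into the Technical Documentation. The sketch below is purely illustrative: the field names, example values and the design_decisions.json file are assumptions made for this blogpost, not terms taken from the Act.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class DesignDecisionRecord:
    """Illustrative record of one key design choice (cf. Annex IV, Section 2(b))."""
    decision: str                                   # the key design choice made
    rationale: str                                  # why this choice was made
    assumptions: list[str] = field(default_factory=list)
    classification_choices: list[str] = field(default_factory=list)
    optimization_target: str = ""                   # what the system is designed to optimize for
    decided_by: str = ""                            # accountability: who signed off
    decided_on: str = ""                            # date of the decision

# Hypothetical example entry
record = DesignDecisionRecord(
    decision="Use a gradient-boosted tree model instead of a deep neural network",
    rationale="Tabular input data and a need for interpretability during clinical review",
    assumptions=["Training data is representative of the intended patient population"],
    classification_choices=["Binary classification: refer / do-not-refer"],
    optimization_target="Recall at a fixed false-positive rate of 5%",
    decided_by="Lead Data Scientist",
    decided_on=str(date.today()),
)

# Append the record to a version-controlled log that feeds the technical file
with open("design_decisions.json", "a") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```

Keeping such records in version control alongside the model code makes it easier to show the rationales and assumptions when the technical file is compiled.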

Within software as a medical device (SaMD) companies, the official 'software system' design and development process usually starts when there is a working prototype, or when these organizations start documenting the user requirements. When an AI model is part of such a 'software system', the Data Science team may already have developed a working AI model. In such circumstances, the key design choices made during AI model development may not logically come out of the software development process. Learn more about the AI Act and medical devices in our recent video.

Figure 1. Software V-model (per IEC 62304, IEC 82304 & IEC 62366)

The image in Figure 1 presents a basic outline of the traditional V-model used in software development per IEC 62304; it may vary from one interpretation to another. This may not reflect how software development is done in today's practice (e.g. according to Agile methods), yet the general concepts of documenting requirements and executing verification and validation tests remain the same. In organizations that implement AI models, data science teams may continuously train and test AI models. Some AI models will never see the light of day, for example where an AI model is incapable of passing its performance acceptance criteria due to insufficient data being available.

Organizations that develop 'software systems' should determine whether their software system development processes are sufficient to cover all AI model needs for demonstrating compliance with the AI Act. Data science teams that develop AI models will need to be incorporated into the development processes. AI model development should either be embedded in the 'software system' development process, or set up as a 'self-standing' AI model development process. For example, Figure 2 presents a separate AI model development process, where only 'accepted' AI models enter the 'software system' development process.

Figure 2. Software W-model (for the purpose of this blogpost)

In this example, organizations should consider in the AI model development process the AI model's intended use (which may not align with the overall software intended use), AI system risk management (including bias risks), data management (collection, limitations, etc.), design control, specifications (including harmonized standards), AI testing and validation procedures (intended AI performance), separate model versioning, and so on. It is strongly recommended to incorporate AI model development needs into development processes sooner rather than later in order to demonstrate compliance with the upcoming AI Act, which just came one step closer with the final acceptance by the European Council.
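To make the hand-over in Figure 2 concrete, the sketch below shows what an acceptance gate between the AI model development process and the 'software system' development process could look like. The metric names and thresholds are assumptions for the purpose of illustration; the actual acceptance criteria would come from your own specifications and risk management.

```python
# Illustrative acceptance gate between AI model development and the
# 'software system' development process (Figure 2). The metrics and
# thresholds below are assumptions for this example only.

ACCEPTANCE_CRITERIA = {
    "auroc_min": 0.90,               # minimum discriminative performance
    "recall_min": 0.85,              # minimum sensitivity on the validation set
    "subgroup_recall_gap_max": 0.05, # maximum allowed gap between subgroups (bias check)
}

def model_accepted(metrics: dict[str, float]) -> bool:
    """Return True only if the candidate model meets every predefined criterion."""
    return (
        metrics["auroc"] >= ACCEPTANCE_CRITERIA["auroc_min"]
        and metrics["recall"] >= ACCEPTANCE_CRITERIA["recall_min"]
        and metrics["subgroup_recall_gap"] <= ACCEPTANCE_CRITERIA["subgroup_recall_gap_max"]
    )

# Hypothetical validation results for a candidate model version
candidate_version = "1.4.0"
candidate_metrics = {"auroc": 0.93, "recall": 0.88, "subgroup_recall_gap": 0.03}

if model_accepted(candidate_metrics):
    print(f"Model {candidate_version} accepted: hand over to software system development")
else:
    print(f"Model {candidate_version} rejected: continue iterating in the data science process")
```

A gate like this also keeps model versioning explicit: only model versions with a documented, passing evaluation ever appear in the 'software system' development process.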

There are pros and cons to separating AI model development from the software development process. For example, AI model development might be executed by a different team (e.g. Data Science vs Software Development), and software development would only start once there is an accepted AI model. At the same time, it could introduce a lengthier development cycle compared to a single development process.

Quality in the post-market stages

The post-market monitoring requirements set out in the AI Act add procedures that are familiar to most companies, such as procedures for reporting to the relevant regulatory authorities. Similar procedures are already required under the GDPR (2016/679), for example for security breaches, and under the MDR/IVDR for reporting safety-related incidents and field safety corrections, such as product withdrawals.

What may be new to the organization is the need to implement post-market monitoring procedures and quality control and assurance procedures. With regard to post-market monitoring, organizations will need to consider the requirements set out in Article 61 (post-market monitoring by providers). The AI Act requires the European Commission to publish a template for the post-market monitoring plan and the list of items to be covered in it.

Continuous learning systems

Another important aspect applies specifically to AI algorithms that continue to learn after their release on the market. As clarified in Annex IV, Section 2(f), for such systems the provider will need to describe the technical solutions adopted to ensure continuous compliance. These technical solutions should be reflected in the quality control and assurance procedures, which ensure that the AI does not exceed the limits specified for the predetermined changes. This seems to largely align with the Predetermined Change Control guidance set out by the FDA for medical devices.

Parts of this process, or perhaps the process in full, could be automated, depending on the availability of clear quality controls and procedures, as displayed in the simplified overview in Figure 3.
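As an illustration of such automation, the sketch below checks whether a retrained model stays within the limits specified for the predetermined changes before it is released. The metrics and bounds are hypothetical examples, not values prescribed by the AI Act or the FDA guidance.

```python
# Illustrative automated quality control for a continuously learning system:
# a retrained model is only eligible for automated release if it stays within
# the predetermined change limits. All bounds below are hypothetical.

PREDETERMINED_LIMITS = {
    "auroc_floor": 0.90,            # performance must never drop below this value
    "auroc_max_change": 0.05,       # changes larger than this fall outside the approved envelope
    "calibration_error_max": 0.10,  # calibration must stay within this bound
}

def within_predetermined_changes(baseline_auroc: float,
                                 new_auroc: float,
                                 calibration_error: float) -> bool:
    """Check whether the retrained model stays inside the approved change envelope."""
    return (
        new_auroc >= PREDETERMINED_LIMITS["auroc_floor"]
        and abs(new_auroc - baseline_auroc) <= PREDETERMINED_LIMITS["auroc_max_change"]
        and calibration_error <= PREDETERMINED_LIMITS["calibration_error_max"]
    )

# Hypothetical retraining cycle
if within_predetermined_changes(baseline_auroc=0.91, new_auroc=0.93, calibration_error=0.06):
    print("Within predetermined changes: retrained model eligible for automated release")
else:
    print("Outside predetermined changes: escalate to manual review and impact assessment")
```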

Keeping the Technical Documentation up to date should further consider traceability of the AI System's performance, including its versioning, release and implementation dates, use of the AI system, and so on. Such data should in any case be logged (automatically) as required by Article 12 of the AI Act, and should be considered official records under the scope of the Quality Management System.
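A minimal sketch of such automatic logging is shown below, assuming a simple append-only event log; the field names and file name are illustrative, and the exact items to record follow from Article 12 and your own use case.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal append-only event log; in practice the log destination, retention
# period and access controls would follow your own procedures.
logging.basicConfig(filename="ai_system_events.log",
                    level=logging.INFO,
                    format="%(message)s")

def log_inference_event(model_version: str, input_reference: str, output_summary: str) -> None:
    """Write one timestamped record of AI system use."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,      # ties the event to a released model version
        "input_reference": input_reference,  # a reference to the input, not the data itself
        "output_summary": output_summary,    # what the system returned
    }
    logging.info(json.dumps(event))

# Hypothetical call inside the deployed system
log_inference_event("1.4.0", "case-2024-00017", "refer")
```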

Figure 3. Devices which continue to learn

Quality standards under the AI Act

Today there are no specific Quality Management System standards that support the upcoming AI Act. The European Commission has been tasked with requesting harmonized standards from the standardization committees. The first draft of the Standardisation Request (M/593) was aimed at CEN/CENELEC, who have accepted the request. The request includes the development of standards for Quality Management Systems (including post-market monitoring). A new Standardisation Request will be issued upon the final voting and approval of the AI Act; this item is not expected to change. In the draft request the European Commission specifies that such quality management system standards should allow integration into the existing quality management systems currently supporting the legislation in Annex II Section A.

In response, CEN/CENELEC's JTC 21 has prepared a work programme (be aware, this may no longer be up to date) to address the Standardisation Request. JTC 21's initial proposal was to use the ISO/IEC 42001 Management System Standard in combination with the ISO/IEC 27001 Information Security Management System standard to demonstrate compliance with the Quality Management System requirements of the AI Act.

ISO 42001 / ISO 27001 

While both standards clearly are 'Management System' standards which allow certification by an independent body, it is questionable whether they are capable of addressing the requirements set out in the AI Act and the Standardisation Request. There are clear concerns with regard to this proposed combination:

  1. ISO/IEC 27001 and ISO/IEC 42001 are not quality management system standards: one addresses trustworthiness and responsibility, the other addresses information security;

  2. The standards lean on a different risk management methodology, as explained in ISO 31000, which considers a risk to be a deviation from an objective that can be 'positive' or 'negative', whereas the AI Act implements the concept of safety risk management: the 'combination of the probability of occurrence of harm and the severity of that harm';

  3. Both standards give organizations freedom in deciding which of the controls set out in their 'Annex A' to implement, which means an organization could operate a certified Management System that still does not support the demonstration of conformity with the AI Act;

  4. Last but not least, due to the differences between these standards it is more complicated to combine them in a single certification audit, e.g. with ISO 13485 for the medical device industry; this increases the cost of such activities and therefore does not directly meet the requirement set out by the European Commission in the Standardisation Request.

The medical devices community has previously threatened to leave the ISO community should ISO 13485 be required to follow the same set-up as ISO 27001 and ISO 42001.

The consequences of this work programme proposition by JTC 21 are yet to be seen; however, it is expected that these standards will not be considered adequate to demonstrate compliance, for the reasons pointed out above, and as such may not be harmonized. As set out in Article 43.1(a) of the AI Act on conformity assessment, providers of High-Risk AI under Annex III may self-declare conformity (per Annex VI) if they demonstrate compliance with harmonized standards or common specifications. In the absence of such standards or common specifications, Article 43.1 defaults back to option (b), meaning the involvement of a Notified Body.

It is crucial for JTC 21 to develop those standards in time to avoid (i) EU-specific common specifications, which may require deviations for EU Quality Management Systems, or (ii) all organizations developing High-Risk AI (per Annex III, Point 1) defaulting back to Article 43.1 option (b), which may require a larger Notified Body capacity than just the devices covered by Annex II Section A. Similarly, organizations that rely on harmonized standards to demonstrate compliance under the regulations of Annex II Section A may be impacted by the lack of harmonized standards and would then require Notified Body conformity assessment. Their transition timeline is longer (3 years versus 2), potentially reducing the risk posed by the absence of harmonized standards.

As set out in this blogpost, it is important for providers of High-Risk AI Systems to start implementing a Quality Management System. Organizations that have already implemented a Quality Management System will need to start upgrading it now. With the European Council's recent approval of the final text, only the final vote of the European Parliament remains before publication in the Official Journal. The AI Systems being developed today will need to be able to demonstrate compliance with the AI Act within 2 years after it comes into force.

If your AI System falls under Annex III, consider the lack of harmonized standards today a risk to your organization, as the standards currently under development may not be harmonized, potentially resulting in the need to obtain CE marking through a Notified Body as well, which will open you up to regulatory scrutiny by default.

About the Author
Leon Doorn
Independent Consultant