Essential Templates for AI Risk Management within your QMS

Managing Medical Device software risks for AI effectively requires a structured approach. To standardize this process, we created a flexible AI Risk Management template, based primarily on the EU AI Act and supplemented by guidance from ISO standards. Because the ISO standards are not yet harmonized with the EU AI Act, this approach ensures compliance with current regulations while drawing on proven methodologies from established risk management frameworks.

Comparing ISO 14971 with the EU AI Act/ISO 23894/ISO 31000

To understand the differences and unique aspects of these standards, let’s compare ISO 14971 with the EU AI Act, ISO 23894, and ISO 31000.

ISO 14971

  • Focus: Medical Devices

  • Risk Control Measures: Determination of appropriate measures to reduce risks to an acceptable level

  • Objective: Primarily focuses on minimizing health and safety risks

EU AI Act/ISO 23894/ISO 31000

  • Focus: Fundamental rights, health and safety, and the environment

  • Risk Treatment Options: Includes determination of risk treatment options, with an option to increase risk to pursue opportunities

  • Objective: Broader focus that includes protecting fundamental rights and the environment, along with health and safety

Key Differences

Scope of Focus

  • ISO 14971: Concentrates solely on health and safety, crucial for Medical Devices and related technologies.

  • EU AI Act/ISO 23894/ISO 31000: Expands the focus to include fundamental rights and environmental considerations, reflecting a more holistic approach to risk management.

Risk Management Approach

  • ISO 14971: Emphasizes risk control measures to ensure risks are reduced to acceptable levels.

  • EU AI Act/ISO 23894/ISO 31000: Introduces risk treatment options, allowing for a balanced approach that considers increasing risks for potential opportunities, provided they are managed responsibly.

By incorporating these diverse perspectives, our AI Risk Management template provides a comprehensive, adaptable framework that aligns with the latest regulatory requirements and industry best practices.

Understanding the Risk Management process for Medical Device software

The risk management process for Medical Device software is critical to ensure the safety and effectiveness of the software in a highly regulated environment. Here's a detailed look at the steps involved:

Determine Objectives and Risk Sources

Objectives: Establish clear objectives for the AI system, focusing on patient safety, regulatory compliance, and functionality.

Risk Sources: Identify potential risk sources, such as software bugs, cybersecurity threats, user errors, and environmental factors.

Identify Risks and Select Control/Treatment Options

Identify Risks: Conduct a thorough risk assessment to pinpoint specific risks associated with the Software as a Medical Device (SaMD). Use tools like Failure Modes and Effects Analysis (FMEA) or Fault Tree Analysis (FTA).

Select Control/Treatment Options: Develop risk control measures to mitigate identified risks. Options might include software updates, user training, or adding redundant systems.
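To make the FMEA step concrete, here is a minimal sketch of how a failure mode worksheet might be ranked in code. The field names, 1 to 10 scales, and example failure modes are illustrative assumptions, not part of any template or standard:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One row of a simple FMEA worksheet (illustrative only)."""
    description: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (always detected) .. 10 (undetectable)

    def rpn(self) -> int:
        # Risk Priority Number: the classic FMEA ranking metric
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Model suggests an incorrect dosage", 9, 3, 4),
    FailureMode("UI mislabels the confidence score", 5, 4, 2),
]
# Rank failure modes so the highest-priority risks are treated first
for m in sorted(modes, key=FailureMode.rpn, reverse=True):
    print(f"RPN {m.rpn():4d}  {m.description}")
```

Ranking by RPN is one common way to decide which risks get control measures first; organizations often also apply severity-based overrides on top of the raw number.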

Evaluate Effectiveness of Risk Controls

Assign Risk Owner: Designate a risk owner to oversee the implementation of risk controls.

Check Effectiveness: Continuously monitor and evaluate the effectiveness of risk control measures through testing, feedback, and performance metrics.

Decide on Risk Acceptance and Ongoing Monitoring

Risk Acceptance: Decide whether the remaining risk is acceptable. If not, re-evaluate control measures.

Ongoing Monitoring: Continuously monitor risks and control measures, updating the risk management plan as needed. Regularly review and analyze data to ensure ongoing compliance and safety.
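The acceptance decision above can be sketched as a simple threshold check. The threshold value and return labels here are assumptions for illustration; real acceptance criteria come from your risk management plan:

```python
# Assumed acceptance criterion for this sketch; real criteria are
# defined in the organization's risk management plan.
ACCEPTABLE_THRESHOLD = 15

def decide(residual_score: float) -> str:
    """Accept the residual risk, or send it back for further controls."""
    if residual_score <= ACCEPTABLE_THRESHOLD:
        return "accept and monitor"
    return "re-evaluate control measures"

print(decide(12.0))   # under the threshold: accepted, then monitored
print(decide(30.0))   # over the threshold: back to control selection
```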

Regulations, standards, and guidance referenced in the AI Risk Management template

The templates provide a practical demonstration of how a risk management system can be configured, organized, and used by organizations that develop or deploy high-risk AI systems. They cover the identification, analysis, and treatment of the risks that a high-risk AI system could pose to fundamental rights, health and safety, and the environment when used according to its intended purpose.

Primary regulations and standards

  • EU AI Act - EU Artificial Intelligence Act (position of the European Parliament adopted at first reading on March 13th, 2024)

  • ISO 31000:2018 - Risk management - Guidelines

  • ISO/IEC 23894:2023 - Information technology - Artificial intelligence - Guidance on risk management

Supporting regulations and standards

  • Charter of Fundamental Rights of the European Union (2012/C 326/02)

  • European Declaration on Digital Rights and Principles (January 26th, 2022)

  • Ethics guidelines for trustworthy AI developed by the independent high-level expert group on artificial intelligence (AI HLEG) appointed by the European Commission

  • ISO/IEC 42001:2023 - Information technology - Artificial intelligence - Management system

  • ISO/IEC TR 24027:2021 - Information technology - Artificial intelligence (AI) - Bias in AI systems and AI aided decision-making

What do the AI Risk Management templates from Matrix Requirements include?

The AI Risk Management templates are tools and practices for proactively protecting organizations and end users from specific AI risks. They include categories such as legislation, guidance, objectives, risk sources, risks, risk controls, test cases, test execution, standard operating procedures, and reviews. To minimize the learning curve, Matrix Requirements maintains the same structure as its other templates.

Below is an overview of the categories and templates included out-of-the-box: 

AI Risk Management Categories

Legislation

Provides a structured list of relevant legislation. You can add other legislation that your organization must comply with. This category allows for a gap analysis of relevant legislation related to AI risk management systems.

Guidance

Offers a structured list of standards and frameworks that guide the effective implementation and integration of AI risk management into your organization’s AI-related activities and functions. Similar to the legislation category, you can add other guidance, standards, and frameworks as needed, and perform a gap analysis.

Objectives

This category includes objectives specific to AI systems, divided into three sub-categories: lawfulness, ethical objectives, and reliability and robustness. These objectives cover compliance with laws and regulations, adherence to ethical principles such as fairness and privacy, and ensuring technical reliability and robustness.

Risk sources

Includes predefined risk sources based on ISO/IEC 23894, with inputs for identifying risk sources linked to legislation, guidance, objectives, and associated risks. The sources of risk can be expanded depending on the nature of the AI system.

Risks

Covers risks based on Article 9 of the AI Act, including intended purpose risk, emerging risks, and post-market monitoring risks. The template includes formulas to calculate initial and residual risk scores, and offers risk treatment options that can be customized.

The template ships with pre-set risks and formulas that calculate the initial risk score, adding a Risk Source factor on top of the standard risk formula. Once the initial risk is calculated, you move on to risk treatment: add your risk controls to calculate the residual risk, which may be lower or, if you deliberately take on risk to pursue an opportunity, higher. Eight risk treatment options are included out-of-the-box, and you can modify them or add new ones that better fit your business use case.
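The scoring logic described above can be sketched as follows. This is a minimal illustration only: the 1 to 5 scales, the multiplicative combination, and the function names are assumptions, and the template's actual formulas may differ:

```python
def initial_risk(severity: int, probability: int, risk_source_weight: int) -> int:
    """Initial risk score: standard severity x probability, with an extra
    factor for the risk source (illustrative formula, not the template's)."""
    return severity * probability * risk_source_weight

def residual_risk(severity: int, probability: int, risk_source_weight: int,
                  control_effect: float) -> float:
    """Residual risk after applying a control. A control_effect below 1.0
    reduces risk; above 1.0 models deliberately accepting more risk to
    pursue an opportunity, as the risk-treatment options allow."""
    return initial_risk(severity, probability, risk_source_weight) * control_effect

initial = initial_risk(severity=4, probability=3, risk_source_weight=2)
after_control = residual_risk(4, 3, 2, control_effect=0.5)
print(initial, after_control)
```

The point of the extra risk_source_weight term is that the same harm can score differently depending on where the risk originates, which is what distinguishes this layered formula from a plain severity-times-probability matrix.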

Risk controls

Allows you to track the status of each risk control, including actions to be taken, resources required, expected execution dates, proof of actions taken, and additional actions to change the likelihood of harm.

Test cases

Defines the test cases used to verify your risk controls, including descriptions and test case steps.

Test execution

The test execution (XTC) category records the results of running the selected test cases. Like standard XTCs, each includes a description, version, tester, test date, test run results, test case steps, and a comment field where you can add links, images, code, or anything else needed to prove the test case.

Document templates

The AI Risk Management template includes pre-structured templates for managing AI-related risks, such as:

  • Fundamental Rights Impact Assessment (FRIA) - Ensures that your AI system respects fundamental human rights, aligning with regulations like the EU AI Act.

  • Risk Sources Specific to the AI System - Identifies potential risks unique to your AI system, allowing for targeted risk mitigation strategies.

  • AI Policy - Establishes a clear framework for AI usage within your organization, ensuring all stakeholders understand the guidelines and compliance requirements.

  • AI Risk Management Report - Provides a detailed analysis of identified risks and the measures taken to address them, fostering transparency and informed decision-making.

  • AI Risk Treatment Plan - Offers a structured approach to treating and managing risks, including options for risk reduction, acceptance, or even taking calculated risks to seize opportunities.

  • Roles and Responsibilities - Clearly defines who is responsible for each aspect of risk management, ensuring accountability and effective implementation of risk controls.

By leveraging pre-structured document templates, you save time and resources since you don’t have to start from scratch. Your organization can be confident that all aspects of risk management are covered, that the templates are aligned with the EU AI Act and ISO guidelines to support compliance, and that the documentation is clear and consistent, helping teams better understand and manage AI risks.

Who are the AI Risk Management templates for?

The EU AI Act applies to any business involved in providing high-risk AI systems. Under the Act, these businesses must conduct conformity assessments, one requirement of which is a risk management system. This requirement applies to all such organizations, not only Medical Device companies.

These templates are relevant for:

  • Providers - Those developing the system.

  • Users - Those using the system, referred to in the Act as “deployers.”

  • Importers - Entities bringing AI systems into the market.

  • Distributors - Those distributing AI systems.

  • Manufacturers - Companies manufacturing AI systems.

Getting started with AI Risk Management 

The template includes an AI Risk Template Manual with instructions for setting up and using the template, ensuring correct usage. It also includes helper text for areas that need customization with your own information and examples to illustrate the required level of detail for compliance.

To learn more about the AI Risk Management Templates for Medical Device software, contact us to request a demo with a product expert who can show you how Matrix Requirements can help you get your Medical Device to market faster.

About the Author
Heather Laducer
Product Marketing Manager