Risk Management under the AI Act

Artificial Intelligence (AI) introduces risks: risks that are new and risks that can affect the human population at large scale. Industry leaders have even warned of the risk of AI leading to the extinction of the human population. An extensive report by the Center for AI Safety (CAIS) points to malicious use of AI (e.g. in bioterrorism), an AI military race (e.g. automated warfare, cyber warfare), organizational risks (e.g. accidents), AI going rogue (e.g. proxy gaming, goal drift) and many more risks.

As such, it should not come as a surprise that the European Commission proposed the AI Act, which mainly aims to address the risks posed by AI systems. In parallel, the United States has recently published the Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, whose purpose is likewise mainly to mitigate the substantial risks of AI.

AI risk classification

The first risk mitigation strategy in the AI Act, discussed previously in our blog posts, is the stratification of AI systems into ‘Prohibited AI’, ‘High Risk’, ‘Limited Risk’ and ‘Minimal Risk’. Depending on the classification, different rules apply. Clearly, the prohibited AI systems are those that could pose unacceptable risks, such as systems that apply cognitive behavioral manipulation, emotion recognition in the workplace and educational institutions, or certain forms of predictive policing.

In this blog post we focus solely on High Risk AI systems, which are subject to the risk management requirements of the AI Act.

AI Act risk definitions

One aspect that should have caught the attention is the lack of a definition of ‘risk’ in both the European Commission’s first draft and the consolidated text set out by the European Council. In unofficial communication the European Commission clarified that ‘risk’ is already defined throughout the New Legislative Framework (NLF) and that repeating the definition would be superfluous.

Nevertheless, requests were made to the European Parliament to add the definition of risk, and consequently the definition is now included in the final text as:

‘risk’ means the combination of the probability of an occurrence of harm and the severity of that harm;

This definition aligns with the definition of risk in ISO Guide 51, which serves as the basis for the safety risk management standards applicable to the industries listed in Annex II Section A of the AI Act (e.g. ISO 14971 for the medical device industry).

Article 9 on Risk Management Systems

For the industries affected by the AI Act (listed in Annex II Section A) whose devices the AI Act classifies as ‘High Risk’, the concept of risk management is often one that already exists.

For example, the Medical Devices Regulation (2017/745), the In Vitro Diagnostic Medical Devices Regulation (2017/746), the Machinery Regulation (2023/1230) and the Cableway Installations Regulation (2016/424) address similar requirements for risk management systems. These Regulations primarily aim at reducing safety-related risks. Per Article 9.9 of the AI Act, the risk management system should be integrated into the existing risk management procedures of those sectors, to the extent that Article 9 is fulfilled.

AI systems listed in Annex III as High Risk, however, may not yet be subject to existing safety risk management requirements. Examples include systems used for educational purposes (e.g. online training models, including those evaluating learning outcomes), systems used in the context of employment (e.g. recruitment software) and systems that verify the creditworthiness of natural persons. For these types of systems the safety risk management requirements may be entirely new.

Article 9 requires organizations to consider risks to the health and safety of natural persons for all High-Risk AI systems. In addition, it demands a risk assessment of 1) the impact on fundamental rights and 2) the environment (albeit that the environment is mentioned solely in Recital 28(a), as an inherent aspect of considering the implications for ‘fundamental rights’ in the final AI Act text). These requirements come on top of those set out in existing regulation and will have to be addressed to demonstrate compliance with Article 9.

Fundamental rights

The assessment of risks to fundamental rights (including equal access and opportunities, democracy and the rule of law) may seem straightforward, but in practice this could be a hard requirement to tackle. Each member state of the European Union is bound by a set of basic principles according to Article 2 of the Treaty on European Union. These include:

  • Respect for human dignity

  • Freedom

  • Democracy

  • Equality

  • Rule of law, and

  • Respect for human rights

Member states may individually have broader sets of fundamental rights, and their thresholds for human rights may differ. For example, with regard to discrimination some countries explicitly include people with disabilities (e.g. Germany), whereas others do not (e.g. the Netherlands). These differences may seem insignificant, but they can actually affect how risks are to be addressed per member state.

For example, during the COVID pandemic it became clear that member states have different thresholds with regard to the implementation of contact tracing apps, with some allowing the use of such apps and others not.

Other questions may relate, for example, to AI systems used in hiring processes, where AI systems potentially discriminate based on inherent bias in the training data (e.g. age, gender or unknown factors). Similar questions can be raised with regard to triaging tools in education systems, where an AI system may conclude that one person requires additional training and another does not, based on the same test results, due to unknown characteristics. Persons may not receive equal treatment and may be discriminated against by AI systems. In the Netherlands the tax authorities were recently fined 2.7 million euros over algorithms discriminating against people with dual nationality.

For a risk assessment this could mean that the impact on fundamental rights has to be assessed while taking member-state-specific considerations into account.

Environmental risks

As part of the assessment of risks to fundamental rights, organizations will have to consider the potential implications for the environment (as explained in Recital 28a). In itself, the principle of considering the environment is important in AI applications. A recent article posted by Earth.org provides some insight into the environmental implications of AI. The article lists potential implications of the use of AI, e.g. its carbon footprint (a recent paper has shown that training a Large Language Model can emit as much as 300,000 kg of CO2), electronic waste disposal and implications for natural ecosystems.

For an organization developing AI systems, it may be difficult to appropriately assess the actual risks associated with the development and use of those systems, as most organizations will use external GPU resources (e.g. GCP, AWS or Azure).

How can organizations effectively and accurately assess the risks associated with the use of GPU resources? Would this require close cooperation with the providers of the GPU units and disclosure of their environmental risk assessments? And how can organizations weigh the potential environmental risks against the benefit the development and use of the AI system brings?
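One pragmatic starting point is to estimate training-related emissions from the energy drawn by the compute used, which cloud providers increasingly disclose. The sketch below illustrates the common estimation formula (energy consumed x data-centre overhead x grid carbon intensity); the GPU power draw, PUE and carbon-intensity figures are illustrative assumptions only, not provider-specific values.

```python
# Rough sketch of estimating CO2 emissions from (cloud) GPU training runs.
# All numbers below are illustrative assumptions; real figures should come from
# the cloud provider's disclosures and the local grid's carbon-intensity data.

def training_emissions_kg(gpu_count: int,
                          hours: float,
                          gpu_power_kw: float = 0.4,        # assumed average draw per GPU (kW)
                          pue: float = 1.2,                  # assumed data-centre overhead (PUE)
                          grid_kg_co2_per_kwh: float = 0.3   # assumed grid carbon intensity
                          ) -> float:
    """Estimate kg CO2 for one training run: energy * overhead * carbon intensity."""
    energy_kwh = gpu_count * hours * gpu_power_kw
    return energy_kwh * pue * grid_kg_co2_per_kwh

# Example: one retraining run on 64 GPUs for 72 hours under the assumptions above.
print(f"{training_emissions_kg(64, 72):,.0f} kg CO2")   # ~664 kg CO2 under these assumptions
```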

What is needed on top of existing Risk Management Systems?

Risk management systems used today by manufacturers of (high risk) AI devices listed in Annex II Section A may not be sufficient to demonstrate conformity with the AI Act.

Types of risks

Manufacturers will need to address the fundamental rights and environmental risks discussed above on top of safety risks. In addition, the AI Act calls for specific risks to be taken into consideration; an overview is presented below.

  • Article 10.2 (f)*: Potential bias risks that can affect health and safety or lead to discrimination.

  • Article 9.8: Risks with regard to vulnerable groups of people or children.

  • Article 14.4 (b): Automation bias (e.g. users automatically relying or over-relying on the output produced by the AI).

  • Article 15.3: Risks associated with continuous learning capabilities; risks of feedback loops, where the system learns on data selected on the basis of biased output, shall be eliminated or reduced as far as possible.

  • Article 15.4: Cybersecurity risks specific to AI systems, e.g. attacks manipulating training data (data poisoning), inputs designed to cause the model to make mistakes (adversarial examples) and model flaws.

  • Article 16.1 (b): Natural persons to whom human oversight is assigned should be made aware of the risk of automation or confirmation bias.

*Article 10.2 (f) proposed in amendment 285
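Some of these risks can at least be screened for quantitatively. As an illustration of how a bias risk in the sense of Article 10.2 (f) might be flagged, the sketch below computes per-group selection rates and a disparate impact ratio against a ‘four-fifths’ style threshold; the outcome counts and the 0.8 threshold are illustrative assumptions, not requirements of the AI Act.

```python
# Minimal sketch of screening model outputs for potential bias across groups.
# The outcome counts and the 0.8 ("four-fifths") threshold are illustrative only.

outcomes = {                      # hypothetical (positive decisions, total decisions) per group
    "group_a": (120, 400),
    "group_b": (60, 300),
}

rates = {g: pos / total for g, (pos, total) in outcomes.items()}
reference = max(rates.values())   # selection rate of the most favoured group

for group, rate in rates.items():
    ratio = rate / reference
    flag = "review for potential discrimination" if ratio < 0.8 else "no flag"
    print(f"{group}: selection rate {rate:.2f}, disparate impact ratio {ratio:.2f} -> {flag}")
```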

Risk management methodology

Where the AI Act leaves room for interpretation, and for standards to provide future guidance, is in the risk management methodologies to be applied. In the (draft) standardization request issued by the European Commission to CEN/CENELEC, the EC specifically requests that standards be developed to support the AI Act.

Within CEN/CENELEC’s JTC21, the need for a standard addressing the risk management requirements has been identified, and a proposed work plan has been developed and provided to the European Commission. Within SC 42 there is already a risk management standard in place (ISO/IEC 23894); however, it does not specifically address the risk management methodologies of the AI Act and does not directly align with the safety risk management logic implemented in the AI Act. Consequently, this standard is not expected to become harmonized under the AI Act by itself (author’s interpretation).

Such risk management methodologies will need to take into account the definition of ‘Significant Risk’ and should address the topics of fundamental rights and the impact on the environment.

What is known is the basic definition of risk, which aligns with the definition set out in ISO Guide 51, the basis for safety risk management standards. For now, organizations can focus their risk assessment on evaluating the probability of occurrence of harm (i.e. injury or damage to the health of people, or damage to property or the environment) against the severity of that harm. When doing so, organizations should consider whether risks, when they lead to harm, affect specific persons or groups of persons, and, in the case of groups of people, assess the specific risk to those groups rather than assessing the risk across the population as a whole.

Risk management examples

Let’s take an example of a risk that can affect health and safety, the environment and potentially also fundamental rights. During the COVID-19 pandemic, hospitals in the European Union and beyond struggled with their ICU capacity and with the availability of suitable equipment for the treatment of specific symptoms. This put major pressure on healthcare systems throughout the European Union.

In our example, let’s examine a trained AI system that is intended to triage patients to the ICU. The AI system assesses patient characteristics (e.g. age, body composition, gender, symptoms, concomitant diseases, etc.) to decide who needs care most urgently and should therefore be moved to the ICU. When ICUs are full, such a system can take it one step further and decide which patients should be prioritized for an ICU bed based on their chances of survival and the highest number of quality-adjusted life years (QALYs).

1. Risk to health and safety

Such a system will clearly put patients at risk, specifically patients whose access to the ICU may be delayed because other patients are prioritized.

In order to assess the risk to health and safety of an individual, one would consider the basic risk principle ‘probability’ x ‘severity’ = Risk.

Risk: A patient dies due to lack of specialized ICU care.
Probability: 1 in 10 patients triaged by the device die.
Severity: High.
Risk mitigations: Might be impossible to mitigate.
Risk-benefit analysis: The risk of a patient dying will likely not outweigh the benefit of more people surviving.
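To make the ‘probability x severity’ principle concrete, the sketch below scores a risk on ordinal scales and checks it against an acceptance threshold, in the style commonly used under ISO 14971. The scales, the threshold and the example scoring are illustrative assumptions, not prescribed values.

```python
# Sketch of ordinal probability x severity scoring for a health and safety risk.
# Scales and the acceptance threshold are illustrative assumptions.

PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "probable": 4, "frequent": 5}
SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4, "catastrophic": 5}

def risk_score(probability: str, severity: str) -> int:
    """Risk = probability x severity on the ordinal scales above."""
    return PROBABILITY[probability] * SEVERITY[severity]

def acceptable(score: int, threshold: int = 8) -> bool:
    """Scores above the threshold require mitigation and/or a risk-benefit analysis."""
    return score <= threshold

# Example entry: a patient dies due to lack of specialized ICU care.
score = risk_score("occasional", "catastrophic")   # 3 * 5 = 15
print(score, "acceptable" if acceptable(score) else "requires mitigation and risk-benefit analysis")
```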

2. Risk to the environment

For the same type of system, the AI may be continuously retrained and updated. Especially in a situation such as a pandemic, when more patients present with symptoms that are similar to but different from the normal situation, the algorithm will need to be retrained, probably multiple times. Given the information on CO2 emissions above, this retraining may emit more CO2 than desired.

In order to assess the risk to the environment, we have seen that the same definition as for safety risk management can be applied, probability x severity = Risk.

Risk: The device is continuously kept up-to-date through retraining due to changes in patient population (e.g. during a pandemic).
Probability: Retraining is executed 1 time per month.
Severity: High.
Risk mitigations: Might be difficult to mitigate.
Risk-benefit analysis: The produced CO2 is weighed against the benefit of the use of the device.

Here it can already be observed that a risk assessment for impact on the environment is not necessarily compatible with a risk assessment for health and safety: an organization might, for example, want to consider a frequency for CO2 emissions rather than a probability. In addition, the scoring metrics are likely to differ from those used for health and safety.

A further complexity is that, in this instance, organizations will need to weigh the overall environmental impact of the device against the benefit of its use, which lies on a different axis (i.e. health and safety).
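One way to handle that different axis is to score environmental risk from an event frequency and a magnitude per event rather than from a probability of harm. The sketch below annualises retraining emissions and maps them onto an ordinal severity band; the retraining frequency, per-run emissions and band boundaries are illustrative assumptions.

```python
# Sketch of frequency-based environmental scoring, as opposed to probability-based scoring.
# Retraining frequency, per-run emissions and severity bands are illustrative assumptions.

def annual_emissions_kg(retrainings_per_month: float, kg_co2_per_run: float) -> float:
    """Annualise emissions: events per year x emissions per event."""
    return retrainings_per_month * 12 * kg_co2_per_run

def environmental_severity(annual_kg: float) -> str:
    """Map annual emissions onto an ordinal severity band."""
    for limit, label in [(1_000, "low"), (10_000, "medium"), (100_000, "high")]:
        if annual_kg <= limit:
            return label
    return "very high"

annual = annual_emissions_kg(retrainings_per_month=1, kg_co2_per_run=2_000)
print(f"{annual:,.0f} kg CO2/year -> severity: {environmental_severity(annual)}")
```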

3. Risks to fundamental rights

When considering the implications for fundamental rights, there is today no specifically defined risk management methodology that provides guidance on how to execute such a risk assessment. Nevertheless, it seems likely that the concepts we commonly use, such as ‘probability of occurrence’, may not apply: when a device risks affecting the fundamental rights of human beings, it may not be possible to bring such a device to the market at all, as doing so could break the law.

What could this look like?

Risk: The device discriminates and provides younger persons access to the ICU, as they have better survival chances and gain more QALYs.
Impact on fundamental rights: Access to healthcare; right to equal treatment.
Probability of occurrence: High.
Severity: High.
Benefit/risk: The benefits of the use of the device outweigh the potential risks to fundamental rights.
Transparency needs: The instructions for use will clearly indicate the potential implications for human rights.

It is difficult to see how these risks can be mitigated other than by being transparent towards users, as mitigation could contradict the very purpose of the device: to aid humans in making data-driven decisions. In these situations, organizations will need to ensure that their AI algorithms are transparent about the key considerations behind their outputs.

The following article provides a relevant example of such decisions being made by humans; similarly, the AI should demonstrate the reasoning behind its decisions.

Overall conclusions

First of all, it should be noted that scoring criteria for risks associated with health and safety, the environment and fundamental rights will probably not fit within a single approach. Organizations should prepare their risk management plans and adapt them to address each of these topics.

Make sure that you use tools that allow for different scoring criteria to be applied within your risk assessments.
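As an illustration of what such a tool could support, the sketch below keeps a single risk register but attaches a domain-specific scoring scheme to each entry: quantitative for health and safety and for the environment, and largely qualitative for fundamental rights. The field names, scales and example entries are illustrative assumptions, not a prescribed method.

```python
# Sketch of one risk register holding entries scored under different, domain-specific schemes.
# Field names, scales and the example entries are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    description: str
    domain: str                      # "health_safety", "environment" or "fundamental_rights"
    scores: dict = field(default_factory=dict)
    mitigations: list = field(default_factory=list)

register = [
    RiskEntry("Patient dies due to delayed ICU access", "health_safety",
              scores={"probability": 3, "severity": 5}),
    RiskEntry("CO2 emissions from monthly retraining", "environment",
              scores={"frequency_per_year": 12, "kg_co2_per_event": 2_000}),
    RiskEntry("Younger patients prioritised for ICU beds", "fundamental_rights",
              scores={"rights_affected": ["access to healthcare", "equal treatment"],
                      "impact": "high"},
              mitigations=["disclose decision criteria in the instructions for use"]),
]

for entry in register:
    print(f"{entry.domain}: {entry.description} -> {entry.scores}")
```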

On a second note, organizations should define how environmental risks weigh against risks to the health and safety of human beings: for example, how much damage to the environment is acceptable in return for improving the lives of human beings?

On a third note, AI algorithms may inherently affect fundamental rights, and they may even be intended to do so for the benefit of society. The role of a risk-benefit assessment is therefore of high importance. Organizations should likewise be transparent about such known risks and communicate them clearly to the user, for example in labeling materials.

About the Author
Leon Doorn
Independent Consultant