
Decoding the EU AI Act: What compliance requirements must providers of high-risk AI systems meet?

[Image: a human face merged with a robotic, AI-enhanced face accented in blue and yellow, evoking the EU flag. Generated with leonardo.ai]

Written by Ana Carolina Teles, AI & GRC Specialist at Palqee Technologies


 

In this article, we delve into the essential requirements that providers of high-risk AI systems must meet in order to comply with the EU AI Act.

 

Unveiled by the European Commission in April 2021, the Act sets out to create a legal framework for governing the use of AI systems. It marks a substantial advancement in global technology regulation.


Make sure to get your complimentary Palqee EU AI Act Framework, and join the PAM BETA program: PAM is the AI observability solution for AI systems.


 

What is a High-Risk AI System?


Imagine this: your company is developing an AI system. It's cutting-edge and full of potential. But is it considered high-risk in the EU?

 

To find the answer, we need to understand the specifics of the European Union's Artificial Intelligence Act (EU AI Act), which outlines the factors that determine whether an AI system qualifies as high-risk.

 

Defining High-Risk AI Systems 


According to the EU AI Act, a high-risk AI system is one that could potentially cause significant harm to the health, safety, or fundamental rights of people or businesses. This harm could stem from the system's operation, impact, or use.

Some examples of high-risk AI systems include, but are not limited to: 


  • AI systems used for critical healthcare procedures, such as surgical robots.

  • AI systems in transport like autonomous vehicles.

  • AI systems in public infrastructure that manage traffic or utility distribution.


Obligations of High-Risk AI System Providers


Considering your company is working on a high-risk AI system, you might be wondering: "What are the compliance obligations under the EU AI Act?"


To summarise, high-risk AI system providers bear several obligations, including:


  • Ensuring that their high-risk AI systems adhere to the requirements of the Act;

  • Establishing a quality management system;

  • Undergoing the appropriate conformity assessment process before the system is introduced to the market or deployed for use;

  • Attaching the CE mark to the high-risk AI system.


Key Requirements for High-Risk AI Systems under the EU AI Act


Let's delve into the mandatory requirements that high-risk AI system providers are obligated to meet under the proposed Act:

 

1. Risk Management System

The risk management system involves the systematic identification, assessment, and mitigation of potential risks throughout the AI system's lifecycle. Key aspects include the following (a minimal code sketch follows the list):

  • Risk identification and evaluation: identifying known and foreseeable risks and analysing the AI system for potential vulnerabilities, including through post-market data analysis.

  • Controls for unavoidable risks: designing measures to eliminate or reduce risks, and putting safeguards in place to manage risks that cannot be entirely eliminated.

  • Providing user information and training: equipping users with the necessary knowledge and skills to interact safely with the AI system. This could include clear guidelines on system usage, potential risks, and steps to take in case of anomalies.

  • Tailoring strategies to user expertise: adapting risk management approaches based on users' technical knowledge and experience. For less tech-savvy users, providing intuitive interfaces and simplified instructions can prevent unintended actions.

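To make this more tangible, here is a minimal sketch of what a lifecycle risk register might look like in code. The field names, the severity and likelihood scales, and the scoring rule are illustrative assumptions on our part, not something the Act prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in a hypothetical risk register for an AI system."""
    description: str
    severity: int        # assumed scale: 1 (negligible) to 5 (critical)
    likelihood: int      # assumed scale: 1 (rare) to 5 (frequent)
    mitigations: list = field(default_factory=list)

    def score(self) -> int:
        # Simple severity-times-likelihood scoring; the Act does not
        # prescribe a method, so this is only an illustrative choice.
        return self.severity * self.likelihood

def residual_risks(register, threshold: int = 9):
    """Return risks that still warrant safeguards after mitigation."""
    return [r for r in register if r.score() >= threshold]

register = [
    Risk("Model misclassifies rare patient profiles", severity=5, likelihood=2,
         mitigations=["human review of low-confidence outputs"]),
    Risk("Training data drifts after deployment", severity=3, likelihood=4,
         mitigations=["post-market data monitoring"]),
]

for risk in residual_risks(register):
    print(f"Needs safeguards: {risk.description} (score {risk.score()})")
```

Keeping the register in a structured form like this makes it straightforward to revisit it as post-market data comes in.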
 

2. Data Governance 

Data governance is a necessary requirement outlined by the EU AI Act. It involves the organised management of data, covering design decisions, data collection processes, preparation procedures, and beyond.

 

To attain this objective, providers can demonstrate data governance through a range of approaches, including the following (a simple validation sketch follows the list):

 

  • Implementing policies and procedures for data collection, preparation, and validation;

  • Ensuring data sets are relevant, representative, free of errors, and complete, with appropriate statistical properties;

  • Addressing potential biases, gaps, or shortcomings in the data.

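As an illustration, here is a small, hypothetical validation sketch in Python. The specific checks, field names, and the 0.8 imbalance threshold are our own assumptions, not requirements taken from the Act:

```python
from collections import Counter

def validate_dataset(records, required_fields, label_key, max_share=0.8):
    """Run a few illustrative data-governance checks; return issues found."""
    issues = []

    # Completeness: every record carries a value for every required field.
    for i, record in enumerate(records):
        missing = [f for f in required_fields if record.get(f) is None]
        if missing:
            issues.append(f"record {i} is missing fields: {missing}")

    # Crude representativeness proxy: no single class dominates the labels.
    labels = Counter(record.get(label_key) for record in records)
    top_share = max(labels.values()) / sum(labels.values())
    if top_share > max_share:
        issues.append(f"label imbalance: top class covers {top_share:.0%}")

    return issues

data = [
    {"age": 34, "income": 52000, "label": "approve"},
    {"age": None, "income": 48000, "label": "approve"},
    {"age": 29, "income": 61000, "label": "approve"},
]
print(validate_dataset(data, ["age", "income"], label_key="label"))
```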
 

3. Technical Documentation

Like any regulatory matter, companies can't simply claim to "comply" with such measures; they need to provide evidence. In this context, technical documentation refers to a detailed record that companies must create and maintain for high-risk AI systems before they're introduced to the market or put into use. This documentation serves a dual role:

 

A) Demonstrating compliance with all requirements outlined in the Act;

B) Furnishing national competent authorities and notified bodies with the essential data to evaluate the AI system's alignment with those stipulations.

 

It should encompass elements such as the following (a simple completeness check is sketched after the list):

  • a general description of the AI system;

  • detailed information about the monitoring, functioning and control of the AI system; and,

  • a detailed description of the risk management system adopted by the company.

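As a toy illustration, a provider could keep the documentation in a structured form and check it for completeness automatically. The section names below merely paraphrase the items above; the Act's annexes define the actual required content:

```python
# Placeholder section names paraphrasing the items listed in this article;
# consult the Act's annexes for the authoritative list of required content.
REQUIRED_SECTIONS = (
    "general_description",
    "monitoring_functioning_control",
    "risk_management_system",
)

def missing_sections(tech_doc: dict) -> list:
    """Return required sections that are absent or empty."""
    return [s for s in REQUIRED_SECTIONS if not tech_doc.get(s)]

tech_doc = {
    "general_description": "Credit-scoring model, version 2.1, ...",
    "monitoring_functioning_control": "Logged and reviewed via ...",
}
print(missing_sections(tech_doc))  # ['risk_management_system']
```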
 

4. Record-Keeping

To ensure the accountability and transparency of high-risk AI systems, AI system providers need to establish comprehensive record-keeping practices. These practices involve setting up automated event recording, commonly referred to as 'logs', during the system's operation. These logs should follow recognised industry standards or widely accepted specifications.

 

The primary goal of implementing logging capabilities is to track the system's functioning from start to finish, in line with its intended purpose. This offers continuous insight into the AI system's performance, making it possible to spot potential risks or significant changes, and it also supports post-deployment monitoring.

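The Act does not prescribe a log format, but a minimal sketch of structured, timestamped event logging might look like this (the event and field names are assumptions for illustration):

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_system_audit")

def log_event(event_type: str, **details) -> None:
    """Emit one structured, timestamped log entry for later audit."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        **details,
    }
    logger.info(json.dumps(entry))

# Example: record each prediction so post-market monitoring can trace it.
log_event("prediction", model_version="2.1.0",
          input_id="req-8812", output="approve", confidence=0.93)
```

Emitting logs as JSON lines keeps them machine-readable, which makes post-deployment analysis and audits far easier.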
 

5. Transparency

Companies must design and develop these systems with a level of transparency sufficient for users to interpret the system's output, fulfil their own obligations, and understand the provider's responsibilities. To achieve this, high-risk AI systems should be accompanied by clear and concise instructions for use, either in digital format or other accessible forms.

 

These instructions should offer relevant and comprehensible information, such as the items below (a sample record follows the list):

  • the provider's contact details;

  • the high-risk AI system's characteristics, capabilities, and limitations;

  • human oversight measures, along with technical strategies to help users interpret AI system outputs.

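One way to keep this information consistent is to maintain it as a machine-readable record alongside the system. Every name and value below is invented for illustration; the Act requires the information itself, not any particular schema:

```python
# Hypothetical "instructions for use" record; all names and values are
# illustrative placeholders, not content prescribed by the Act.
instructions_for_use = {
    "provider_contact": {"name": "Acme AI GmbH",
                         "email": "compliance@example.com"},
    "intended_purpose": "Triage support for radiology images",
    "capabilities": ["flags likely anomalies for human review"],
    "limitations": ["not validated for paediatric scans"],
    "human_oversight_measures": [
        "all flagged cases are reviewed by a radiologist",
        "confidence scores are shown next to every output",
    ],
}

for key, value in instructions_for_use.items():
    print(f"{key}: {value}")
```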
 

6. Human Oversight 

To prevent excuses like "the system alone is responsible", the proposed Act emphasises the significance of human oversight as a key requirement for high-risk AI systems' compliance.

 

To accomplish this within a company, it's necessary to put in place robust policies and procedures, as follows (a small escalation sketch follows the list):

  • incorporate human-machine interface tools during design so the system can be effectively overseen;

  • develop policies outlining oversight measures;

  • empower users to monitor, interpret, and intervene as required;

  • introduce supplementary measures tailored to specific AI system traits, including user interaction, data management, and adaptive learning algorithms.

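As a small sketch of the "intervene as required" point, a provider might gate low-confidence outputs behind a human reviewer. The confidence threshold and the escalation path below are illustrative assumptions; the Act requires effective oversight but leaves the mechanism open:

```python
def oversee(prediction: str, confidence: float, ask_human,
            threshold: float = 0.85):
    """Accept high-confidence outputs; escalate the rest to a person.

    The 0.85 threshold is an assumed value chosen for illustration only.
    """
    if confidence >= threshold:
        return prediction
    return ask_human(f"Model suggests '{prediction}' "
                     f"at {confidence:.0%} confidence")

# A stub callback stands in for a real human-review workflow here.
decision = oversee("approve", 0.62, ask_human=lambda msg: f"escalated: {msg}")
print(decision)
```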
 

7. Accuracy, Robustness and Cybersecurity 

Finally, according to the EU AI Act, companies must guarantee the resilience and safety of high-risk AI systems, addressing concerns like data poisoning, adversarial examples, and model flaws.


This involves:  

a) conducting third-party audits,  

b) implementing automated quality checks, and  

c) performing thorough stress tests. 

 

By following this strategy, you can ensure accuracy, robustness, and cybersecurity, effectively tackling biases and preventing unauthorised alterations, as elaborated upon in greater detail within this article.
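
As a toy example of an automated quality check, you could measure how stable a model's output is under small input perturbations. The stand-in model and noise level below are invented for illustration; real stress testing would rely on dedicated adversarial-robustness tooling:

```python
import random

def predict(features):
    """Stand-in for a real model: a hypothetical threshold rule."""
    return int(sum(features) > 1.0)

def stability_under_noise(features, noise=0.05, trials=100):
    """Share of small random perturbations that leave the output unchanged."""
    baseline = predict(features)
    stable = sum(
        predict([x + random.uniform(-noise, noise) for x in features]) == baseline
        for _ in range(trials)
    )
    return stable / trials

print(f"stability under noise: {stability_under_noise([0.4, 0.7]):.0%}")
```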


Establish a Quality Management System


A Quality Management System (QMS) is a structured framework that providers of high-risk AI systems must establish to ensure their compliance with regulatory standards. This system is meticulously documented through written policies, procedures, and instructions.

 

It covers a variety of factors, such as:

 

a) Regulatory Compliance Strategy;  

b) Design and Development;  

c) Quality Control and Assurance;  

d) Examination and Validation Procedures;  

e) Data Management, and more.


Undergo a Conformity Assessment and Get a CE Mark


The conformity assessment plays an essential role in confirming an AI system's compliance with the requirements set out in the EU AI Act. There are two potential scenarios for high-risk AI system providers:

 

1) For high-risk AI products currently governed by existing Product Safety Legislation, providers have to prove compliance with the Act's requirements through a third-party conformity assessment.

 

This means that if your high-risk AI system falls within the scope of such legislation, you will have to carry out the conformity assessment with a notified body.

 

2) For high-risk AI systems that are not currently governed by such regulatory frameworks, providers are responsible for conducting their own conformity assessments.

 

This involves having a QMS in place that ensures the high-risk AI system aligns with the requirements set by the Act.

 

Once the conformity assessment is successfully completed, providers need to document their system's compliance and register it in the EU database designated for stand-alone high-risk AI systems, as specified in Article 60 of the Act.

 

This database serves as a vital repository of information regarding high-risk AI systems for regulatory authorities and the public. 

 

Lastly, let's discuss the significance of the CE mark.


If you live in Europe, you've probably seen this mark on several products. It symbolises safety and compliance, serving as an assurance that the product fulfils all essential EU requirements to be sold on the market. How does this tie into AI systems?


  • Eligibility: Only high-risk AI systems that have undergone and passed the conformity assessment can carry the CE mark.

  • Relevance: The CE mark is a visual indication that the AI system is compliant with the stipulations of the EU AI Act.

  • Importance: Apart from enhancing user confidence in AI systems, the CE mark carries exceptional importance. Its absence would prohibit your company from selling, offering, deploying, or putting into use the high-risk AI system within the EU. 


This mark acts as a gateway to market access, empowering you to confidently present your product to consumers in the European market.


Conclusion


The EU AI Act's impact on high-risk AI systems is profound and far-reaching. By embracing these obligations, businesses can navigate the AI landscape responsibly and contribute to the ethical and effective deployment of AI technologies.

 

With transparency, documentation, and oversight as key cornerstones, the EU AI Act prioritises safety, reliability, and accountability.

 

And don't forget! Complying with these standards will not only be a legal necessity but also a key factor in building trust and acceptance among users and society.


Unsure if your AI system is considered high-risk according to the EU AI Act? Take the free High-Risk Category Assessment.

