Decoding the EU AI Act: What does Human Oversight mean?

Updated: Jun 5

[Header image: a futuristic cityscape with a holographic human figure surrounded by technology icons, with the EU flag in the background. Image created by DALL-E.]

Written by Ana Carolina Teles, AI & GRC Specialist at Palqee Technologies


 

In our "Decoding AI" series, we explore the transformative impact of the EU's AI Act on AI developers looking to introduce their products across Europe.

 

So far, we have discussed the requirements the Act places on developers of high-risk AI systems, including risk management, data governance, technical documentation, record-keeping, and transparency and the provision of information to deployers.

 

This article covers another important requirement under the EU AI Act for high-risk AI systems providers: Human Oversight.

 

This post addresses the need for human supervision to prevent and mitigate risks that the system cannot manage on its own, and discusses how the legislation ensures that these systems can be effectively overseen by natural persons during their use.



Make sure to check out the Palqee EU AI Act Framework. It offers a straightforward guide to help you meet AI compliance requirements.


 

The Foundation of Human Oversight

 

Under the Act, high-risk AI systems must be carefully designed to facilitate effective supervision by humans, particularly in sectors like healthcare, transportation, and public safety, where the consequences of AI errors could be significant. In healthcare, for example, malfunctions in AI-driven tools can result in serious patient harm due to incorrect treatment decisions.

 

This oversight aims to mitigate risks to fundamental rights during the usage of these systems. It involves ensuring that the AI system, even while functioning autonomously, remains under human control and is responsive to human intervention.

 

To that end, high-risk AI systems must include mechanisms for effective monitoring and control throughout their operational lifespan, allowing for real-time oversight by natural persons.
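
To make this concrete, here is a minimal Python sketch of one such control point: an oversight gate that defers low-confidence decisions to a human reviewer and lets an operator halt automated decisions entirely. The class, threshold, and method names are illustrative assumptions, not terms taken from the Act.

```python
# Minimal sketch of a human-in-the-loop control point. Assumes upstream
# code supplies a label and a confidence score; names are illustrative.
from dataclasses import dataclass, field


@dataclass
class OversightGate:
    """Defers low-confidence decisions to a human reviewer and supports
    an operator-triggered stop (a 'kill switch')."""

    confidence_threshold: float = 0.85
    halted: bool = False
    review_queue: list = field(default_factory=list)

    def halt(self) -> None:
        # A natural person can stop automated decisions at any time.
        self.halted = True

    def decide(self, case_id: str, label: str, confidence: float):
        # Defer to a human when halted or when the model is unsure.
        if self.halted or confidence < self.confidence_threshold:
            self.review_queue.append((case_id, label, confidence))
            return None  # no automated action taken
        return label


gate = OversightGate()
print(gate.decide("case-001", "approve", 0.97))  # acted on automatically
print(gate.decide("case-002", "approve", 0.61))  # deferred to human review
```

The key design choice is that the gate fails towards human review: whenever it is halted or confidence is low, no automated action is taken.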


Proportionate Measures

 

According to the EU AI Act, Human Oversight measures must be tailored to match the level of risk, the autonomy of the AI system, and its specific usage context.

 

Organisations developing high-risk AI systems can ensure this through one or more of the following types of measures:

 

  1. Built-in Measures: These preventive measures are identified and integrated into the high-risk AI system by the provider prior to its market release or service implementation, whenever technically feasible.

 

In practice, this includes:

 

  • Initial Risk Assessment: Organisations conduct a risk evaluation before development to identify potential hazards the AI system may pose in various use contexts.

  • Secure Design: Incorporate security solutions into the AI system design. This could involve implementing data security protocols, machine learning techniques resistant to adversarial attacks, and mechanisms to ensure data privacy.

  • Testing: Before launching the product, organisations must conduct thorough tests to ensure all security measures function correctly across different use scenarios.

  • Documentation and Traceability: Keep comprehensive records of the development process, documenting design decisions and risk analyses to ensure transparency and ease future reviews (see the sketch after this list).
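
As a hedged illustration of the documentation and traceability measure, the Python sketch below keeps an append-only development log in which each record is hashed together with its predecessor, so that later edits become detectable. The record fields and risk-assessment references are hypothetical, not a prescribed schema.

```python
# Illustrative append-only log for design decisions and risk analyses.
# Chaining each record to the previous hash makes tampering detectable;
# the field names and risk references here are hypothetical.
import hashlib
import json
import time


def append_record(log: list, entry: dict) -> dict:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"timestamp": time.time(), "entry": entry, "prev_hash": prev_hash}
    # Hash the canonical JSON form of the record, including the link
    # to its predecessor.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record


audit_log: list = []
append_record(audit_log, {"decision": "raise confidence threshold to 0.85",
                          "risk_ref": "RA-014"})
append_record(audit_log, {"decision": "add adversarial-robustness tests",
                          "risk_ref": "RA-021"})
print(len(audit_log), audit_log[-1]["hash"][:12])
```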

 

  2. External Measures: These are measures identified by the provider before the system is released or put into service, intended to be implemented by the user to ensure ongoing oversight.

 

In practice, this can be achieved through:

 

  • Training and Education: Provide users with training on the operation of the AI system, its potential risks, and the correct usage procedures.

  • Guidelines: Supply clear manuals and best practice guidelines that users must follow to ensure the safe operation of the system.

  • Ongoing Support: Offer continuous technical support and security updates for the AI system, ensuring that security measures are maintained and updated as needed.

  • Compliance Monitoring: Establish mechanisms to monitor how users are implementing the recommended security measures and intervene if improper practices are detected (see the sketch below).
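
To illustrate the compliance-monitoring measure, a provider might review usage telemetry reported by deployments and flag those where oversight features appear disabled or unused. The Python sketch below assumes hypothetical telemetry fields and an arbitrary minimum review rate.

```python
# Hedged sketch: flag deployments whose telemetry suggests oversight
# features are switched off or effectively unused. The telemetry schema
# and the 1% review-rate threshold are assumptions for illustration.
def flag_deployments(telemetry: list, min_review_rate: float = 0.01) -> list:
    flagged = []
    for t in telemetry:
        review_rate = t["human_reviews"] / max(t["decisions"], 1)
        if not t["oversight_enabled"] or review_rate < min_review_rate:
            flagged.append(t["deployment_id"])
    return flagged


telemetry = [
    {"deployment_id": "hospital-a", "oversight_enabled": True,
     "decisions": 10_000, "human_reviews": 420},
    {"deployment_id": "clinic-b", "oversight_enabled": False,
     "decisions": 8_000, "human_reviews": 0},
]
print(flag_deployments(telemetry))  # ['clinic-b']
```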


Implementing Human Oversight in High-Risk AI Systems within Your Organisation

 

Adding Human Oversight to an AI’s development lifecycle makes sense in many ways: it involves relevant stakeholders in the process, helps put clear accountability structures in place, and is a strong measure for achieving fairer AI systems, especially in high-risk use cases.

 

This is why the EU AI Act puts a great emphasis on Human Oversight measures. However, this comes with potential additional burdens for data science and dev teams.

 

Human Oversight carries the risk of slowing down development. Implemented poorly, it can turn into a box-ticking exercise that hurts scalability and speed to deployment by adding complicated procedures to the teams’ workload.

 

If implemented properly, Human Oversight should become just a standard procedure integrated into your general development lifecycle that doesn’t disrupt work, while allowing you to effectively mitigate risks and provide transparency for all stakeholders.
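
One way to make that integration routine, sketched below in Python, is a release gate in the CI pipeline that blocks a build when required oversight artefacts are missing. The artefact file names and directory are hypothetical examples, not names from the Act.

```python
# Sketch of a CI release gate: the build fails unless the oversight
# artefacts exist. File and directory names are hypothetical.
from pathlib import Path
import sys

REQUIRED_ARTEFACTS = [
    "risk_assessment.md",   # initial risk evaluation
    "oversight_design.md",  # built-in control points, kill switch, etc.
    "user_guidelines.md",   # instructions and training material for users
]


def release_gate(docs_dir: str = "compliance") -> int:
    missing = [name for name in REQUIRED_ARTEFACTS
               if not (Path(docs_dir) / name).exists()]
    if missing:
        print(f"Release blocked; missing artefacts: {missing}")
        return 1
    print("Oversight artefacts present; release may proceed.")
    return 0


if __name__ == "__main__":
    sys.exit(release_gate())
```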

 

Rather than treating it as an afterthought, it’s time to adopt a comprehensive approach in accordance with Article 14 of the EU AI Act.


Connection to Other Requirements in the EU AI Act

 

Human Oversight doesn’t exist in isolation and is part of a broader framework for AI compliance and achieving AI Trustworthiness.

 

It complements various other requirements outlined in Chapter 2 of the Act, including data governance, technical documentation, transparency, and others.

 

Together, these requirements form an integrated framework that ensures AI systems are safe, reliable, and respectful of fundamental rights.

 

In summary

 

The emphasis on Human Oversight in the EU AI Act represents a significant step towards responsible AI development. While it presents challenges, it also provides an opportunity for businesses to establish trust and credibility for their AI in the market.

 

This requirement mirrors safety and ethical standards seen in other industries: just as a new medication undergoes thorough testing before reaching the public to ensure its safety and effectiveness, regulation and oversight are crucial to ensure that AI systems are developed and deployed responsibly, reducing risks for users.
