
Decoding the EU AI Act: Corrective Actions and Cooperation with Authorities for High-Risk AI Systems Providers


Written by Ana Carolina Teles, AI & GRC Specialist at Palqee Technologies


 

As we continue our series "Decoding AI: The European Union's Approach to Artificial Intelligence," we focus on the responsibilities of providers of high-risk AI systems under the EU AI Act.

 

In this post, we outline the corrective actions required from providers of high-risk AI systems. When a provider finds its system non-compliant with the Act, it must promptly bring the system into conformity, withdraw it from the market, disable it, or recall it.

 

Moreover, providers are also responsible for cooperating with authorities by offering information about the high-risk AI system in a language that competent authorities can understand, and granting access to automatically generated logs, provided they control them.


Make sure to download our complimentary Palqee EU AI Act Framework, a guide to navigating the EU AI Act's regulatory requirements.

 

Corrective Actions for High-Risk AI Systems

 

According to Articles 20 and 21, providers of high-risk AI systems must act promptly if they suspect their AI system is not compliant with the EU AI Act. Immediate steps must be taken to rectify the issue, including:

 

1. Rectification: Apply updates or modifications to resolve non-compliance issues. This ensures the AI system meets the required standards and operates within the legal framework.

 

2. Withdrawal: Remove the AI system from the market if it poses a significant risk. By doing so, providers can prevent potential harm to users and ensure the system does not violate regulatory requirements.


3. Disabling: Temporarily or permanently disable the system to prevent further use.

 

4. Recall: Remove the system from use by affected individuals to mitigate risks. This step helps to protect users from any potential dangers associated with the non-compliant AI system and ensures their safety.

 

All these measures need to be taken in alignment with relevant stakeholders. This involves communicating with distributors, deployers, authorised representatives (if applicable), and importers about the corrective measures being implemented.

 

This ensures that everyone involved in the supply chain is aware of the non-compliance and of the actions being taken to address it.


 

Investigating and Reporting Risks

 

When a high-risk AI system poses a risk to health, safety, or fundamental rights, providers must promptly investigate the root cause.

 

This involves a series of steps:

 

1. Initiate an Investigation: Promptly investigate the root cause of the risk.

2. Collaborate with Deployers: Work with deployers, if applicable, to identify and understand the issue.

3. Document Findings: Record the details of the investigation, including the nature of the risk and its potential impact.

4. Report to Authorities: Inform the relevant market surveillance authorities and any notified bodies about the risk, the investigation findings, and the corrective actions taken.
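The investigation-and-reporting steps above can be sketched as a simple internal record structure. This is a hedged illustration only: the field names, system name, and deployer are assumptions for the example, not terms or formats mandated by the Act.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class RiskInvestigationRecord:
    """Illustrative internal record for a suspected risk in a high-risk AI system."""
    system_name: str
    risk_description: str
    root_cause: str = "under investigation"
    deployers_consulted: list = field(default_factory=list)   # step 2: collaborate with deployers
    corrective_actions: list = field(default_factory=list)    # step 4: actions taken
    reported_to_authority: bool = False
    opened_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def mark_reported(self) -> None:
        # Flag the record once the market surveillance authority has been informed.
        self.reported_to_authority = True

# Hypothetical example of documenting findings (step 3) and reporting (step 4).
record = RiskInvestigationRecord(
    system_name="cv-screening-v2",
    risk_description="Possible discriminatory scoring for a protected group",
)
record.deployers_consulted.append("Acme HR GmbH")
record.corrective_actions.append("Model retrained on rebalanced dataset")
record.mark_reported()
print(asdict(record)["reported_to_authority"])  # True
```

Keeping such records in one structured place makes it easier to hand a complete account to the authorities when the report is due.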

 

In accordance with the EU AI Act, it is also the responsibility of a Member State's market surveillance authority to evaluate an AI system's compliance with the law if it is believed to present a risk.

 

If the system is found to be non-compliant, the authority can require the provider to correct the issue, withdraw the system from the market, or recall it. Additionally, the authority must inform the Commission and the other Member States about the non-compliant system identified.

 

If the company fails to take corrective action, the authority can prohibit or restrict the system's availability.


Collaboration between Providers and Regulatory Authorities

 

  • Providing Documentation

 

Providers of high-risk AI systems must cooperate with regulatory authorities when requested.

 

This includes supplying all necessary information and documentation to demonstrate the AI system's compliance with the EU AI Act. The documentation should be clear, comprehensive, and in an official EU language as specified by the requesting Member State.

 

  • Access to System Logs

 

If necessary, authorities may also request access to the automatically generated logs of the high-risk AI system.

 

These logs are essential for tracking the system's operations and identifying any issues or non-compliance. By analysing these logs, authorities can trace the actions and decisions made by the AI system, pinpointing any anomalies or deviations from regulatory standards.

 

It is the provider's responsibility to ensure these logs are accessible and comprehensible, maintaining transparency and accountability.
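One common way to keep automatically generated logs both accessible and comprehensible is to write each system decision as a structured, machine-readable record. The sketch below uses Python's standard `logging` module to append JSON lines to a file; the field names and file name are illustrative assumptions, not a format prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch: one JSON line per system decision, so records can be
# filtered and exported for a competent authority on request.
logger = logging.getLogger("ai_system_audit")
logger.setLevel(logging.INFO)
handler = logging.FileHandler("audit_log.jsonl")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

def log_decision(input_id: str, output: str, model_version: str) -> dict:
    """Append one automatically generated log entry for a single decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_id": input_id,
        "output": output,
        "model_version": model_version,
    }
    logger.info(json.dumps(entry))
    return entry

# Hypothetical usage:
entry = log_decision("req-0042", "approved", "v1.3.0")
```

Because each line is self-describing JSON, the same file serves internal monitoring and, if requested, disclosure to authorities.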

 

  • Maintaining Confidentiality

 

All information provided to authorities must be treated with strict confidentiality. This ensures the protection of sensitive information about the AI system and its operations, safeguarding both the provider's proprietary information and the integrity of the regulatory process.


Establishing Internal Processes for Corrective Actions and Regulatory Cooperation

 

To ensure compliance with the EU AI Act, especially regarding the requirements for high-risk AI systems, providers must establish internal procedures for implementing corrective actions and cooperating with authorities.


Key steps involve implementing an effective post-market monitoring system and centralising the documentation of measures taken during system development. Additionally, fostering a strong interconnection between the AI development team and the compliance team is crucial.

 

These processes must include:

 

1. Compliance Monitoring:


  • Regularly review and audit AI systems to identify potential non-compliance issues.

  • Implement automated monitoring tools to detect anomalies and risks in real time.

  • Ensure a dedicated compliance team is in place to oversee and manage these activities, and to collaborate with the AI development team to anticipate and prevent potential compliance issues.

 

2. Clear Reporting Procedures:


  • Develop clear internal protocols for reporting non-compliance issues or risks.

  • Establish an internal communication channel for efficient reporting and information exchange, including clear guidelines on how to share information with regulatory authorities.

Not sure if your AI system is high-risk under the EU AI Act? Take the free High-Risk Category Assessment:

3. Documentation and Record-Keeping:


  • Maintain comprehensive records of all AI system operations, updates, and modifications in accordance with standard requirements for high-risk AI systems.

  • Ensure all documentation is organised and readily accessible for review by regulatory authorities.

  • Use standardised templates for documenting investigations, corrective actions, and compliance status.

 

4. Stakeholder Communication:


  • Develop a communication strategy to inform all relevant stakeholders, including distributors, deployers, authorised representatives, and importers, about corrective actions and compliance measures.

  • Schedule regular updates and briefings to keep stakeholders informed of any changes or issues.

 

5. Training and Awareness:


  • Conduct regular training sessions for employees and stakeholders on compliance requirements and procedures.

  • Provide resources and support to help AI team members understand their responsibilities in ensuring the system remains compliant with the EU AI Act.

 

6. Response and Remediation Plans:


  • Develop detailed response plans for addressing non-compliance issues, including timelines and responsible parties.

  • Implement remediation measures swiftly to mitigate risks and ensure continued compliance.

  • Conduct post-incident reviews to evaluate the effectiveness of the response and improve future processes.
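To make the compliance-monitoring step concrete, the sketch below shows one kind of automated check: comparing a simple selection-rate metric between two groups against an internal alert threshold, and flagging the system for the compliance team's review. The metric, the threshold value, and the sample data are assumptions for illustration, not figures taken from the EU AI Act.

```python
from statistics import mean

# Assumed internal tolerance for divergence between group selection rates.
ALERT_THRESHOLD = 0.15

def selection_rate(decisions: list[bool]) -> float:
    """Fraction of positive decisions in a group."""
    return mean(1.0 if d else 0.0 for d in decisions)

def monitoring_report(group_a: list[bool], group_b: list[bool]) -> dict:
    """Flag the system for review if selection rates diverge beyond the threshold."""
    gap = abs(selection_rate(group_a) - selection_rate(group_b))
    return {
        "selection_rate_gap": round(gap, 3),
        "needs_review": gap > ALERT_THRESHOLD,
    }

# Hypothetical batch of recent decisions for two groups:
report = monitoring_report(
    group_a=[True, True, True, False],    # 0.75 selection rate
    group_b=[True, False, False, False],  # 0.25 selection rate
)
print(report)  # {'selection_rate_gap': 0.5, 'needs_review': True}
```

In practice a provider would run checks like this continuously over production logs and route any `needs_review` flag into the reporting procedures described in step 2.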

 

In summary

 

Addressing issues of high-risk AI systems and sharing information with regulatory authorities are key responsibilities for providers. This goes beyond legal compliance; it is necessary because any problems with such systems can impact fundamental aspects of individuals' lives, such as health and safety.

 

Implementing these procedures is similar to the steps companies take when correcting a defective product in compliance with laws such as the EU General Product Safety Directive (2001/95/EC), or when notifying a data breach under regulations like the GDPR.
