
Decoding the EU AI Act: What are the EU AI Act's Transparency Requirements?

Updated: May 21

[Image: "Transparency in AI Systems", a magnifying glass inspecting a computer chip amid see-through screens of flowing code. Created with DALL-E.]

Written by Ana Carolina Teles, AI & GRC Specialist at Palqee Technologies


 

In our "Decoding AI" series, we delve into the EU's AI Act and its role in transforming the landscape for tech developers aiming to market their innovations across Europe.

 

So far, we've covered what developers of high-risk AI systems need to do to meet the Act's standards, including risk management, data governance, technical documentation, and record-keeping.

 

In this article, we cover Article 13 of the Act, which sets out the transparency requirements for high-risk AI systems.


Be sure to explore the Palqee EU AI Act Framework. It provides a clear guide to help you adhere to AI compliance norms.


 

Understanding Transparency under the EU AI Act


AI systems that fall into the high-risk category under the EU AI Act must include transparency measures before they are placed on the European market.

 

In practice, these technologies must be transparent enough that everyone, from end-users to regulators, can clearly understand the AI system's outputs and put them to practical use, thereby ensuring accountability and trust.

 

What are the specifics of the Transparency Requirements?

 

To meet the transparency requirements, providers must supply a set of details, including:

 

  1. Provider Information: The identity and contact details of the provider or of their authorised representative established in the EU.

  2. System Characteristics and Performance: Transparency extends to the AI system itself. The information must include details on the system's intended purpose, accuracy, robustness, and cybersecurity levels.

  3. Circumstances of Use and Foreseeable Misuse: Disclose any known or foreseeable circumstances that might lead to risks associated with the use of the high-risk AI system. This must cover both the intended use of the system and scenarios of reasonably foreseeable misuse, particularly where these could pose risks to health, safety, or the fundamental rights of individuals, for example through biased outputs.

  4. Performance Regarding Specific Groups: When relevant, include how the system performs for specific persons or groups it is intended to be used by, ensuring it addresses any variations that could affect fairness and effectiveness for different demographics.

  5. Data Specifications: Specifications for input data and other relevant details regarding training, validation, and testing datasets.

  6. System Changes: Any pre-determined changes to the AI system and its performance, providing a clear picture of system evolution.

  7. Human Oversight Measures: As per Article 14 of the EU AI Act, high-risk AI systems need to be designed in such a way that they allow for human oversight to prevent or minimise risks. This can be done through the inclusion of human-machine interface tools that enable oversight. One such example is Palqee's PAM for bias monitoring.

  8. Expected Lifetime and Maintenance: Insights into the expected lifetime of the high-risk AI system and any maintenance or care measures necessary to ensure its continued proper functioning. This includes software updates to address evolving needs.

  9. Log Management and Interpretation Mechanisms: Description of the mechanisms that ensure proper collection, storage, and interpretation of logs. This encompasses automatic logging of relevant data such as inputs, system behaviour, and user interactions.
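The nine disclosure items above can travel with the system as one structured record that accompanies its instructions for use. Below is a minimal sketch in Python; the field names and example values are our own illustrations, since the Act prescribes the content of the disclosure, not a particular schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class TransparencyRecord:
    """Illustrative container for the Article 13 disclosure items.

    Field names are hypothetical; the Act does not mandate a schema.
    """
    provider_identity: str          # 1. provider / EU representative contact
    intended_purpose: str           # 2. system characteristics
    accuracy_metrics: dict          # 2. declared accuracy / robustness levels
    foreseeable_misuse: list        # 3. known or foreseeable risk scenarios
    performance_by_group: dict      # 4. performance for specific groups
    input_data_specs: str           # 5. training / validation / test data details
    predetermined_changes: list     # 6. planned changes to the system
    human_oversight_measures: list  # 7. oversight interfaces and procedures
    expected_lifetime: str          # 8. lifetime and maintenance measures
    logging_mechanisms: str         # 9. how logs are collected and interpreted

# Hypothetical example for a CV-screening system
record = TransparencyRecord(
    provider_identity="Example AI GmbH, ai-office@example.eu",
    intended_purpose="CV screening support for human recruiters",
    accuracy_metrics={"f1": 0.91},
    foreseeable_misuse=["use as sole hiring decision-maker"],
    performance_by_group={"age_18_30": 0.93, "age_50_plus": 0.88},
    input_data_specs="Structured CV fields; see data sheet v2",
    predetermined_changes=["quarterly model retraining"],
    human_oversight_measures=["reviewer sign-off before any rejection"],
    expected_lifetime="3 years with quarterly updates",
    logging_mechanisms="All inputs and outputs logged per request",
)
print(len(asdict(record)))
```

Keeping the record machine-readable makes it easy to render different views of the same facts for regulators, end-users, and internal teams.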


Ensuring compliance with Transparency in your organisation

 

Meeting the transparency requirements under the Act can be simplified by following these steps:

 

  • Clear Documentation 

    • This is closely linked to other high-risk AI system requirements such as technical documentation and record-keeping. Your organisation must keep detailed records of all these systems, covering their design, development, and intended use. Depending on the intended audience, documentation needs to be adapted accordingly. In line with existing documentation best practices, documentation directed towards tech teams needs to be drafted differently from a knowledge base for end users, who may be non-technical.

    • Ensure this information is straightforward and easy to understand, as it needs to be a clear reference for both internal and external stakeholders.

 

  • User Training

    • Include user training in your risk management system to meet transparency requirements.

    • Implement training for all high-risk AI system users, concentrating on system outputs and correct usage. The aim is to ensure users of the AI system fully understand how to operate it and have awareness about any potential limitations.

 

  • Regular Audits 

    • Assign an Audit Management Leader within your multidisciplinary AI development team.

    • Regularly perform audits to maintain transparency compliance, reviewing the AI system's operations against current guidelines.

  

  • Stakeholder Engagement

    • Actively involve stakeholders, including end-users and regulators, in the AI system's development lifecycle.

    • This involves gathering feedback and adjusting the AI system as needed to maintain the system's transparency and effectiveness.

 

  • Transparent Communication

    • Ensure end-users are always aware when they're engaging with an AI system, except when it's self-evident.

    • This transparency is key to building trust and clarity in AI interactions. It helps users understand when their responses or data are being processed by AI, fostering an environment of informed usage and confidence in the system.

 

  • Continuous Updates and Maintenance

    • Regularly update the AI system to keep your technical documentation, record-keeping, and risk management in line with transparency standards.

    • This should cover both technological upgrades and a review of operational guidelines for ongoing compliance.
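The log-management and audit steps above can be partly automated: a thin wrapper around each model call can capture inputs, outputs, and timestamps for later review. A minimal sketch follows; the `StubModel` class and its `predict` method are placeholders for your own system, and a real deployment would write to append-only, access-controlled storage rather than an in-memory list.

```python
import time

audit_log = []  # in practice: append-only, tamper-evident storage

def logged_predict(model, features):
    """Call the model and record the full interaction for later audits."""
    output = model.predict(features)
    audit_log.append({
        "timestamp": time.time(),  # when the interaction happened
        "input": features,         # what the system received
        "output": output,          # what the system produced
    })
    return output

class StubModel:
    """Hypothetical stand-in for a real high-risk AI system."""
    def predict(self, features):
        return {"score": 0.5}

result = logged_predict(StubModel(), {"years_experience": 4})
print(result)
```

An auditor can then replay `audit_log` entries against current guidelines without touching the production system.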


Conclusion

 

While emphasised in the EU AI Act, providing transparency through an appropriate level of detailed documentation for different target audiences is already considered best practice for most software products, especially in sectors like finance and healthcare.

 

Similarly, in AI, implementing measures for transparency not only helps to build trust but also ensures systems are used responsibly and confidently.

 

It's important to note that complying with this requirement under the EU AI Act automatically covers many of the requirements for record-keeping, technical documentation, and risk management, as these obligations are interconnected.


Get your free copy of Palqee’s EU AI Act Framework. Get early access to the BETA program of #PAM, the AI observability solution for AI systems.


