
Decoding the EU AI Act: How to Enhance Accuracy, Robustness, and Cybersecurity in High-Risk AI Systems?

Illustration of a high-tech EU framework superimposed on a map of Europe, featuring symbols for AI systems, cybersecurity shields, and data accuracy. Image created by DALL-E.

Written by Ana Carolina Teles, AI & GRC Specialist at Palqee Technologies


 

The EU AI Act is a comprehensive law that regulates artificial intelligence across Europe. Developers and providers must familiarise themselves with the new regulation if they intend to place their products on the market anywhere in the Union.

 

If an organisation develops high-risk AI systems, the requirements become more stringent: it must adhere to specific obligations, including risk management, data governance, technical documentation, record-keeping, and transparency.

 

In this post, we will explore the components outlined in Article 15 of the Act: Accuracy, Robustness, and Cybersecurity.


Not sure if your AI system is high-risk under the EU AI Act? Take the free High-Risk Category Assessment.


 

Understanding the Requirements of Accuracy, Robustness, and Cybersecurity

 

Under the EU AI Act, high-risk AI systems must be designed and developed to achieve and maintain an appropriate level of accuracy, robustness, and cybersecurity.

 

The accuracy of these systems underpins their reliability and effectiveness, and achieving it requires an approach grounded in statistical principles and established machine-learning methodologies.

 

Robustness, on the other hand, refers to the ability of the system to operate correctly across various inputs and conditions without failure.

 

Furthermore, under the AI Act, cybersecurity involves protecting AI systems from unauthorised access, data breaches, and other cyber threats that could jeopardise system integrity and data confidentiality.

 

Key steps include:

 

  • Development and Testing:

    • The AI model must be developed and regularly tested to perform as intended under various conditions.

    • Only after thorough testing in different scenarios can the provider confirm that the model meets all specified requirements before deployment.

 

  • Performance Metrics:

    • Following development, it is necessary to select appropriate metrics that accurately measure the system's performance.

    • Additionally, the accuracy levels and assessment metrics for high-risk AI systems must be documented.

    • The Act mandates that this information be included in the system's instructions for use to ensure transparency and compliance (a minimal metric-documentation sketch in Python follows this list).

 

  • Ongoing Monitoring:

    • AI systems require continuous monitoring post-deployment.

    • Providers must regularly evaluate the system's performance to identify any emerging issues, allowing for necessary adjustments and improvements.
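
To make these steps concrete, here is a minimal sketch, in Python with scikit-learn, of how a provider might evaluate a model on held-out data and record its declared accuracy levels for the documentation. The dataset, model, metric choices, and file name are illustrative assumptions, not anything the Act prescribes.

import json

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, recall_score
from sklearn.model_selection import train_test_split

# Illustrative data and model; substitute the high-risk system's own.
X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Measure performance on held-out data and record the declared levels
# so they can be reproduced and included in the instructions for use.
y_pred = model.predict(X_test)
declared_metrics = {
    "accuracy": float(accuracy_score(y_test, y_pred)),
    "f1": float(f1_score(y_test, y_pred)),
    "recall": float(recall_score(y_test, y_pred)),
}
with open("declared_metrics.json", "w") as f:
    json.dump(declared_metrics, f, indent=2)

The same report can feed post-deployment monitoring: re-running the evaluation on fresh data and comparing against the declared levels is one simple way to spot degradation early.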

 

Collaboration and Standardisation

 

To ensure that providers can measure the consistent performance of high-risk AI systems throughout their operational life, the Act requires the European Commission to support the standardisation of these measurements.

 

This involves collaborating with relevant stakeholders and organisations, such as metrology authorities (which provide accurate and reliable measurements for trade, health, safety, and the environment) and benchmarking bodies, to promote the development of relevant benchmarks and measurement methodologies.

 

Technical and Organisational Strategies

 

To adhere to the EU AI Act’s requirements for accuracy, robustness, and cybersecurity, organisations must adopt technical and organisational measures, including:

 

  • Automated Retraining: Implement automated retraining frameworks to keep models up to date. This can be achieved using Machine Learning as a Service (MLaaS) products that automate the retraining process. Examples of such services include Azure Machine Learning and Google AI Platform, which provide tools to streamline model updates and enhance model accuracy by incorporating new data sets seamlessly (a minimal retraining sketch follows this list).

 

  • Performance Monitoring: Continuous monitoring of model performance and detection of data drift ensure that models remain effective over time. To streamline this process, organisations can implement systems such as Weights & Biases, DataRobot, Fiddler AI, and Palqee’s PAM for bias monitoring. These platforms offer integrated solutions for monitoring and evaluating performance as well as detecting data drift (a simple drift-detection sketch follows this list).

 

  • Security Protocols: Ensure that high-risk AI systems are protected against errors, faults, inconsistencies, and cybersecurity threats, such as data poisoning, model poisoning, adversarial examples, confidentiality attacks, and other AI-specific vulnerabilities. This involves deploying advanced cybersecurity measures tailored to the specific risks of high-risk AI systems, including robust encryption, hardening methods to protect against prompt injection attacks, secure coding practices, regular security audits, and real-time threat monitoring (an artifact integrity-check sketch follows this list).

 

  • Technical Redundancy: Implement redundancy solutions, such as backup systems and fail-safe mechanisms, to maintain operations during hardware or software failures and enhance system robustness (a minimal fallback sketch follows this list).

 

  • Bias Mitigation in Learning Algorithms: AI systems that continue to learn after deployment must include strategies to detect and mitigate bias. This includes developing and deploying methods, such as Palqee’s PAM, that can identify and correct biased data inputs or feedback loops before they adversely affect future decisions (a generic bias-monitoring sketch follows this list).

 

  • Stakeholder Collaboration: Engage in ongoing collaboration with industry experts, regulatory bodies, and other stakeholders. This cooperation helps organisations stay abreast of the latest security trends and compliance requirements, while also contributing to the development of industry-wide benchmarks and the standardisation of performance metrics.
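
Managed MLaaS services handle scheduling and infrastructure, but the promotion logic they automate looks roughly like the sketch below: retrain on newly collected data, then serve the candidate model only if it does not degrade a held-out validation score. The function name and the promotion policy are assumptions for illustration.

from sklearn.base import clone
from sklearn.metrics import accuracy_score

def retrain_if_better(current_model, model_template, X_new, y_new, X_val, y_val):
    """Retrain on newly collected data and promote the candidate only if it
    matches or improves validation accuracy (hypothetical promotion policy)."""
    candidate = clone(model_template).fit(X_new, y_new)
    current_acc = accuracy_score(y_val, current_model.predict(X_val))
    candidate_acc = accuracy_score(y_val, candidate.predict(X_val))
    return candidate if candidate_acc >= current_acc else current_model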
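
Monitoring platforms package drift detection, but for intuition, one simple per-feature approach is a two-sample Kolmogorov-Smirnov test comparing recent production inputs against a training-time snapshot. A minimal sketch with SciPy, assuming numeric features; the significance threshold is an assumption.

import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when live inputs for a feature differ significantly from the
    training-time reference distribution (two-sample Kolmogorov-Smirnov test)."""
    _, p_value = ks_2samp(reference, live)
    return p_value < alpha

# Example: a training-time snapshot vs. shifted production inputs.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=5000)
live = rng.normal(0.4, 1.0, size=1000)
print(feature_drifted(reference, live))  # True -> investigate, consider retraining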
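
Cybersecurity for AI systems is a broad discipline, but one small, concrete control is verifying the integrity of serialised model artifacts before loading them, so a tampered file is rejected rather than served. A sketch using Python's standard hashlib; the expected hash would be recorded by your release process, which is an assumption here.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file so large model artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_verified(path: Path, expected_sha256: str) -> bytes:
    """Refuse to use a model artifact whose hash does not match the value
    recorded at release time, guarding against tampering in storage or transit."""
    if sha256_of(path) != expected_sha256:
        raise RuntimeError(f"Integrity check failed for {path}")
    return path.read_bytes()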
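
For redundancy, a common pattern at the serving layer is a wrapper that catches failures in the primary model and falls back to a simpler redundant model or a conservative default. A minimal sketch; the interface (a .predict method) and the logging are assumptions.

import logging

logger = logging.getLogger(__name__)

def predict_with_fallback(primary, fallback, x):
    """Serve the primary model, degrading gracefully to a redundant fallback
    model (or a safe default) if the primary fails at inference time."""
    try:
        return primary.predict(x)
    except Exception:
        # Record the failure so post-market monitoring can pick it up.
        logger.exception("Primary model failed; serving fallback")
        return fallback.predict(x)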
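
PAM is a commercial product and its internals are not shown here; as a generic illustration of what bias monitoring can track, the sketch below computes the demographic parity difference on a feedback batch. The metric choice, the binary group encoding, and the idea of pausing learning past a tolerance are assumptions, and real systems typically combine several fairness metrics.

import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups;
    values near zero indicate parity on this single fairness metric."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Example feedback batch: a continuously learning system could compute this
# per batch and pause learning when the gap exceeds a set tolerance.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.5 -> investigate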

 

In summary

 

Article 15 of the EU AI Act challenges AI developers to raise the bar for accuracy, robustness, and cybersecurity in high-risk AI systems. But this goes beyond mere regulatory compliance.

 

Consider high-risk AI systems used in migration, asylum, and border control.

These systems deeply affect individuals in vulnerable positions, whose futures depend on the judgments of public authorities. High accuracy, robustness, and cybersecurity in these technologies mitigate the risk of error and enhance fairness in processes with substantial personal consequences.


Don't forget to check out the Palqee EU AI Act Framework. It's a step-by-step guide to ensure you're aligned with AI compliance standards.

