The European Artificial Intelligence Act (EU AI Act) establishes a harmonized regulatory framework for the placing on the market, putting into service, and use of AI systems in the European Union. A central element of this regulation is the "high-risk" category, designated for systems with the potential to negatively affect the health, safety, or fundamental rights of individuals.
Understanding whether a system falls into this category is crucial, as it triggers a series of mandatory technical and governance requirements. To this end, Article 6 of the EU AI Act establishes two main classification pathways:
- AI systems that are safety components of products covered by the EU harmonization legislation listed in Annex I, or that are themselves such products.
- AI systems that operate in critical areas specifically listed in Annex III of the Regulation.
AI Systems as Safety Components (Annex I)
The first category, detailed in Annex I of the Regulation, covers AI systems that function as safety components within products already subject to strict Union harmonization legislation. If an AI system serves as a safety component of a product in any of the following domains, it is classified as high-risk by default:
| Product Domain | Specific Regulation | Scope of Application |
| --- | --- | --- |
| Machinery and Robotics | Directive 2006/42/EC | Machinery and safety components |
| | Directive 2014/33/EU | Lifts and safety components |
| | Directive 2014/53/EU | Radio equipment |
| Road Transport | Regulation (EU) 2018/858 | Motor vehicle approval |
| | Regulation (EU) 2019/2144 | General vehicle safety |
| Rail Transport | Directive (EU) 2016/797 | Railway system interoperability |
| Air Transport | Regulation (EU) 2018/1139 | Common civil aviation rules |
| Healthcare | Regulation (EU) 2017/745 | Medical devices |
| | Regulation (EU) 2017/746 | In vitro diagnostic medical devices |
| Toys | Directive 2009/48/EC | Safety of toys |
| Protective Equipment | Regulation (EU) 2016/425 | Personal protective equipment |
In summary, if an AI system is intended to be used as a safety component of one of the listed products, or is itself a product covered by such legislation, it is automatically classified as high-risk, provided the product must undergo a third-party conformity assessment under the relevant rules.
This obliges developers to integrate the EU AI Act's requirements into the existing compliance frameworks for these products. Thus, the vast majority of medical AI products, from diagnostic imaging analysis software to clinical decision support systems, that are already covered by the MDR (Regulation (EU) 2017/745) or the IVDR (Regulation (EU) 2017/746) will fall into this category.
AI Systems for Critical Areas (Annex III)
The second category focuses on the system's intended use. An AI system is presumed to be high-risk if it is intended to operate in any of the following areas and specific use cases:
| Critical Area | Specific Use Cases |
| --- | --- |
| Biometric identification | "Real-time" and "post" remote biometric identification systems |
| Critical infrastructure | AI systems as safety components in traffic management and in the supply of water, gas, heating, and electricity |
| Education and training | Systems that determine access to educational institutions or assess students |
| Employment and worker management | Systems for selecting candidates, assigning tasks, or evaluating performance |
| Access to essential services | Systems for assessing creditworthiness, eligibility for public benefits, or prioritizing emergency services |
| Law enforcement | Systems for assessing recidivism risk, polygraphs, evidence analysis, or crime prediction |
| Migration, asylum, and borders | Systems for assessing security risks, verifying travel documents, or assisting in asylum applications |
| Justice and democratic processes | Systems to assist judicial authorities or influence voting behavior |
The Key Exception in Article 6(3)
The Regulation introduces a relevant exception in Article 6(3). A system operating in the critical areas above may escape the high-risk classification if its output is merely accessory to the relevant decision or action and therefore does not materially influence its outcome. Note, however, that a system that performs profiling of natural persons is always considered high-risk, regardless of this exception.
The influence analysis is key: if the AI system only performs a narrow procedural or preparatory task, and the final decision rests with a human who can readily verify or override the recommendation, the system may not be classified as high-risk, as the sketch below illustrates.
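To make this decision flow concrete, here is a minimal Python sketch of how a team might encode the Article 6 logic in an internal screening tool. Every name in it (`SystemProfile`, `classify`, the boolean flags) is our own hypothetical shorthand: the flags paraphrase the Annex I pathway, the Annex III areas, and the Article 6(3) derogation conditions, and the sketch is in no way a substitute for a legal assessment.

```python
from dataclasses import dataclass

# Annex III critical areas (paraphrased labels, not the official wording)
ANNEX_III_AREAS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_asylum_borders", "justice_democracy",
}

@dataclass
class SystemProfile:
    # Pathway 1: Annex I products (Article 6(1))
    safety_component_of_annex_i_product: bool = False
    is_itself_annex_i_product: bool = False
    needs_third_party_conformity_assessment: bool = False
    # Pathway 2: Annex III areas (Article 6(2)) and the derogation (Article 6(3))
    annex_iii_area: str | None = None
    performs_profiling_of_natural_persons: bool = False
    narrow_procedural_task: bool = False                           # Art. 6(3)(a)
    improves_prior_human_activity: bool = False                    # Art. 6(3)(b)
    detects_patterns_without_replacing_human_review: bool = False  # Art. 6(3)(c)
    preparatory_task_only: bool = False                            # Art. 6(3)(d)

def classify(p: SystemProfile) -> str:
    """Simplified reading of the Article 6 classification logic."""
    # Pathway 1: a safety component of (or itself) an Annex I product that
    # must undergo a third-party conformity assessment.
    if ((p.safety_component_of_annex_i_product or p.is_itself_annex_i_product)
            and p.needs_third_party_conformity_assessment):
        return "high-risk (Annex I pathway)"
    # Pathway 2: intended use in an Annex III critical area.
    if p.annex_iii_area in ANNEX_III_AREAS:
        # Profiling of natural persons is always high-risk.
        if p.performs_profiling_of_natural_persons:
            return "high-risk (Annex III, profiling)"
        # Derogation: the output is accessory and does not materially
        # influence the outcome of the decision.
        if any([p.narrow_procedural_task, p.improves_prior_human_activity,
                p.detects_patterns_without_replacing_human_review,
                p.preparatory_task_only]):
            return "not high-risk (document the Article 6(3) assessment)"
        return "high-risk (Annex III pathway)"
    return "not high-risk"

# Example: a CV-ranking tool used by recruiters to shortlist candidates
print(classify(SystemProfile(annex_iii_area="employment")))
```

A provider that relies on the Article 6(3) derogation should keep a written record of that assessment, since the classification may have to be justified to the competent national authorities.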
Mandatory Requirements for High-Risk AI Providers
Once a system is classified as high-risk, providers must demonstrate diligence throughout its lifecycle, which translates into specific technical and governance requirements:
- Implement a risk management system (Article 9) for the continuous identification, evaluation, and mitigation of the model's potential hazards.
- Ensure data quality and governance (Article 10), making sure that training, validation, and testing datasets are relevant, representative, and as free of errors and biases as possible, with documented provenance (see the first sketch after this list).
- Create and maintain technical documentation (Article 11) comprehensive enough to allow a third party to understand the system's architecture, the algorithms used, and its limitations.
- Design the system to generate automatic logs (Article 12) that ensure the traceability of its decisions and support incident investigation (see the second sketch after this list).
- Ensure transparency (Article 13), providing deployers with the information they need to interpret the system's output and use it appropriately.
- Incorporate human oversight mechanisms (Article 14) that allow a person to monitor, intervene in, or stop the system if necessary.
- Guarantee the accuracy, robustness, and cybersecurity (Article 15) of the models against errors and external attacks.
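To illustrate the data governance obligation in practice, the first sketch below shows a simple representativeness check that a team might run on a training dataset before sign-off. The column name, reference distribution, and 5% tolerance are all hypothetical assumptions on our part; Article 10 does not prescribe any particular metric or threshold.

```python
import pandas as pd

def representativeness_report(df: pd.DataFrame, attribute: str,
                              reference: dict[str, float],
                              tolerance: float = 0.05) -> pd.DataFrame:
    """Compare each group's share in `attribute` against a reference
    distribution (e.g., the expected deployment population) and flag gaps."""
    observed = df[attribute].value_counts(normalize=True)
    rows = []
    for group, expected in reference.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "observed_share": round(share, 3),
            "expected_share": expected,
            "needs_review": abs(share - expected) > tolerance,
        })
    return pd.DataFrame(rows)

# Hypothetical training set for a credit-scoring model
train = pd.DataFrame({"age_band": ["18-34"] * 700 + ["35-64"] * 250 + ["65+"] * 50})
reference = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
print(representativeness_report(train, "age_band", reference))
```

A report like this, versioned alongside the dataset, can also feed directly into the technical documentation required by Article 11.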
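Likewise, for the automatic logging obligation, here is a minimal sketch of a structured, append-only audit record written at inference time. The JSON schema and field names are our own assumptions; Article 12 requires that relevant events be recorded automatically over the system's lifetime, but it does not mandate a specific format.

```python
import json
import logging
from datetime import datetime, timezone
from uuid import uuid4

# One self-contained JSON line per prediction, in an append-only file,
# so every decision can be traced back and replayed during an investigation.
logging.basicConfig(filename="inference_audit.jsonl",
                    level=logging.INFO, format="%(message)s")

def log_inference(model_version: str, input_ref: str,
                  output: dict, operator_id: str) -> str:
    """Write one traceability record per prediction and return its event id."""
    event = {
        "event_id": str(uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties the decision to an exact model
        "input_ref": input_ref,          # a reference, not raw personal data
        "output": output,
        "operator_id": operator_id,      # who was overseeing the system
    }
    logging.info(json.dumps(event))
    return event["event_id"]

# Hypothetical usage in a candidate-screening service
log_inference(model_version="screening-model:1.4.2",
              input_ref="application/2024/00123",
              output={"score": 0.82, "routed_to_human": True},
              operator_id="reviewer-17")
```

Storing a reference to the input rather than the input itself keeps the audit trail useful without turning the log into a second store of personal data.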
Meeting these requirements without a proper framework consumes considerable engineering resources and diverts the team's focus from the product's core innovation.
The Solution: Venturalítica
Venturalítica offers a SaaS platform designed to translate regulatory complexity into a working framework for development teams, integrating compliance into the software lifecycle.
Our platform provides the following capabilities:
- Assistance in Risk Classification: The distinction between output that "materially influences" a decision and output that is merely "accessory" is subtle. Our intelligent assistant guides teams through this impact analysis so that systems are correctly classified from the earliest phases.
- Comprehensive Requirements Management: Venturalítica provides a framework and tools to implement each of the EU AI Act's obligations, from risk management to the generation of technical documentation.
- Optimization of "Time to Compliance": Our key metric is reducing the time needed to achieve compliance. We transform a process that could take months of consulting into an agile, managed workflow, freeing up development resources.
Navigate the regulated AI landscape with confidence: use the Regulation as a guide and let Venturalítica be your strategic ally.
Request a demo to learn how Venturalítica can integrate into your development cycle and ensure your system's compliance with the EU AI Act.
In the next post in this series, we will delve into Article 9, concerning risk management systems.