The European Artificial Intelligence Act (EU AI Act) establishes a regulatory framework for the placing on the market, putting into service, and use of AI systems in the European Union. A central element of this regulation is the "high-risk" category, reserved for systems with the potential to negatively affect the health, safety, or fundamental rights of individuals.
Understanding whether a system falls into this category is crucial, as it triggers a series of mandatory technical and governance requirements. To this end, Article 6 of the EU AI Act establishes two main classification pathways:
AI systems that are safety components in products already subject to EU legislation, as detailed in Annex I.
AI systems that operate in critical areas specifically listed in Annex III of the Regulation.
Below, we explore each of these categories in detail and the requirements they entail.
It is important to highlight a fundamental distinction in how this classification affects development processes. For providers whose AI systems act as safety components in products that are already strictly regulated, adapting to the EU AI Act will, in many cases, mean extending their existing Quality Management Systems (QMS) to cover the new AI-specific requirements.
Conversely, for developers of AI systems in critical areas, who may not have a formalized QMS, implementing the requirements from scratch represents a more significant challenge, with a deeper impact on their methodologies and development lifecycles.
AI Systems as Safety Components in Regulated Products (Annex I)
The first category, detailed in Annex I of the Regulation, refers to AI systems that function as safety components within products already subject to strict Union harmonization legislation. If AI software is integrated into products in any of the following domains, it is classified as high-risk by default:
Product Domain | Specific Regulation |
|---|---|
Machinery and Robotics | Machinery and safety components; lifts and safety components for lifts; radio equipment. |
Road Transport | Approval of motor vehicles and their trailers; two- or three-wheel vehicles and quadricycles; agricultural and forestry vehicles; general safety of motor vehicles and protection of occupants. |
Rail Transport | Interoperability of the rail system. |
Air Transport | Common rules in civil aviation (including unmanned aircraft); common rules for civil aviation security. |
Maritime and Inland Waterway Transport | Marine equipment; recreational craft and personal watercraft. |
Cableway Transport | Cableway installations. |
Healthcare | Medical devices; in vitro diagnostic medical devices. |
Toys | Safety of toys. |
Personal Protective Equipment | Personal protective equipment (PPE). |
Industrial Components | Gas appliances; pressure equipment; equipment for potentially explosive atmospheres (ATEX). |
In summary, if an AI system is developed to be used as a safety component in one of the listed products, or if the AI system is itself a product subject to such legislation, it is automatically classified as high-risk, and its developers must integrate the EU AI Act's requirements into the compliance frameworks that already apply to these products. Thus, the vast majority of medical AI products already covered by the MDR or IVDR, from diagnostic imaging analysis software to clinical decision support systems, would fall into this category. For their developers, this means that the CE marking and certification process for their products must now also incorporate and demonstrate compliance with the AI Act's demanding requirements.
AI Systems Designed to Operate in Critical Areas (Annex III)
The second category focuses on the intended use of the system. An AI system is presumed to be high-risk if it is intended to operate in any of the following areas and specific use cases:
Critical Area | Specific Use Cases |
|---|---|
Biometric identification | "Real-time" and "post" remote biometric identification systems. |
Critical infrastructure | AI systems as safety components in traffic management and the supply of water, gas, heating, and electricity. |
Education and training | Systems to determine access to educational institutions or to assess students. |
Employment and worker management | Systems for selecting candidates, assigning tasks, or evaluating performance. |
Access to essential services | Systems for assessing creditworthiness, eligibility for public benefits, or prioritizing emergency services. |
Law enforcement | Systems for assessing recidivism risk, polygraphs, evidence analysis, or crime prediction. |
Migration, asylum, and borders | Systems for assessing security risks, verifying travel documents, or assisting in asylum applications. |
Justice and democratic processes | Systems to assist judicial authorities or to influence voting behavior. |
Classification in this second category therefore depends fundamentally on the purpose for which the AI system has been designed and marketed. If its use case matches one of those listed in Annex III, the high-risk presumption is triggered. The assessment must consider whether the system, given its purpose and context of use, falls within one of these critical areas, and then whether the exception in Article 6 applies.
The Key Exception in Article 6: Relevance of the System's Influence
The Regulation introduces a relevant exception. A system operating in one of the critical areas above may still not be considered high-risk if its output is merely accessory or preparatory to the relevant decision and therefore does not materially influence its outcome.
This relevance analysis is key: if the AI system only performs a preparatory task and the final decision rests with a human who can readily verify or override the recommendation, the system may not be classified as high-risk.
If a system meets the criteria of the first category, or those of the second without the exception applying, it is considered high-risk, and its provider must meet a series of obligations that pose real engineering challenges.
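To make this decision flow concrete, here is a minimal, illustrative Python sketch of the Article 6 logic described above. The dataclass, field names, and simplified boolean inputs are our own assumptions for illustration; an actual assessment requires legal analysis and documented justification.

```python
from dataclasses import dataclass

@dataclass
class Article6Assessment:
    """Simplified, hypothetical inputs for an EU AI Act Article 6 check."""
    is_safety_component_annex_i: bool       # Safety component of an Annex I product (or is such a product)
    annex_iii_use_case: bool                # Intended purpose matches a use case listed in Annex III
    output_merely_preparatory: bool         # Output is accessory/preparatory and easy for a human to verify
    substantially_influences_outcome: bool  # Output materially shapes the final decision

def is_high_risk(a: Article6Assessment) -> bool:
    # Pathway 1: safety component of an Annex I regulated product -> high-risk.
    if a.is_safety_component_annex_i:
        return True
    # Pathway 2: Annex III use case -> presumed high-risk...
    if a.annex_iii_use_case:
        # ...unless the exception applies: the output is merely accessory or
        # preparatory and does not substantially influence the outcome.
        if a.output_merely_preparatory and not a.substantially_influences_outcome:
            return False
        return True
    return False

# Example: a CV-ranking tool whose score drives hiring shortlists (Annex III, employment).
print(is_high_risk(Article6Assessment(False, True, False, True)))  # True
```

In practice, each of these booleans hides a substantive judgment, which is precisely where structured guidance pays off.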
Mandatory Requirements for Providers of High-Risk AI Systems
Once a system is classified as high-risk, providers must demonstrate diligence throughout its lifecycle, which translates into specific technical and governance requirements:
Implement a risk management system (Article 9) for the continuous identification, evaluation, and mitigation of the model's potential hazards.
Ensure data quality and governance (Article 10), so that training, validation, and testing datasets are relevant and representative, examined for possible biases, and documented as to their provenance.
Create and maintain technical documentation (Article 11) that is comprehensive enough to allow a third party to understand the system's architecture, the algorithms used, and its limitations.
Design the system to generate automatic logs (Article 12) that ensure the traceability of its decisions and allow incidents to be investigated (see the logging sketch after this list).
Ensure transparency (Article 13), providing deployers with clear instructions for use and the information they need to interpret the system's output and use it appropriately.
Incorporate human oversight mechanisms (Article 14) that allow a person to monitor, intervene, or stop the system if necessary.
Ensure the accuracy, robustness, and cybersecurity (Article 15) of the models, making them resilient to errors and external attacks.
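As an illustration of the engineering work behind one of these obligations, the sketch below shows a minimal structured, append-only decision log of the kind Article 12's traceability requirement points toward. The schema, field names, and file format are our own assumptions, not prescribed by the Regulation.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_path: str, model_version: str, input_payload: dict,
                 output_payload: dict, operator_id: str) -> None:
    """Append one traceable record per automated decision (illustrative Article 12-style log)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the input instead of storing raw data, to ease data-minimization concerns.
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "output": output_payload,
        "operator_id": operator_id,  # Links the record to a human reviewer (supports Article 14 audits)
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # JSON Lines: one record per line

# Hypothetical usage for a credit-scoring system
log_decision("decisions.jsonl", "credit-scoring-v2.3.1",
             {"applicant_id": "A-1042", "features": {"income": 52000}},
             {"score": 0.81, "decision": "approve"},
             operator_id="analyst-17")
```

An append-only, line-oriented format keeps each decision independently verifiable and makes incident reconstruction straightforward.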
Meeting these requirements without a proper framework consumes considerable engineering resources and diverts the team's focus from the product's core innovation.
The Solution: Venturalítica, a Strategic Platform for Regulatory Compliance
Venturalítica offers a SaaS platform designed to translate regulatory complexity into a working framework for development teams, integrating compliance into the software lifecycle.
Our platform provides the following capabilities:
Assistance in Risk Classification: The distinction between "substantial influence" and "accessory" is complex. Our intelligent assistant guides teams through this impact analysis for a correct system classification from the initial phases.
Comprehensive Requirements Management: Venturalítica provides a framework and tools to implement each of the EU AI Act's obligations, from risk management to the generation of technical documentation.
Optimization of "Time to Compliance": Our key metric is reducing the time needed to achieve compliance. We transform a process that could take months of consulting into an agile, managed workflow, freeing up development resources.
Navigate the regulated AI landscape with confidence. Use the Regulation as a guide and let Venturalítica be your strategic ally.
Request a demo to learn how Venturalítica can integrate into your development cycle and ensure your system's compliance with the EU AI Act.
In the next post in this series, we will delve into Article 9, concerning risk management systems.