
High-Risk AI Systems under the EU AI Act

A Guide to Classification and Requirements
17 October 2025 by Rodrigo Cilla Ugarte

The European Artificial Intelligence Act (EU AI Act) establishes a regulatory framework for the placing on the market, putting into service, and use of AI systems in the European Union. A central element of this regulation is the "high-risk" category, designated for systems with the potential to negatively affect the health, safety, or fundamental rights of individuals.

Understanding whether a system falls into this category is crucial, as it triggers a series of mandatory technical and governance requirements. To this end, Article 6 of the EU AI Act establishes two main classification pathways: 

  1. AI systems that are safety components in products already subject to EU legislation, as detailed in Annex I.

  2. AI systems that operate in critical areas specifically listed in Annex III of the Regulation.

Below, we explore each of these categories in detail and the requirements they entail.

It is important to highlight a fundamental distinction in how this classification affects development processes. For providers of AI systems used as safety components in already regulated products, who operate in strictly regulated sectors, adapting to the EU AI Act will in many cases mean extending their existing Quality Management Systems (QMS) to cover the new AI-specific requirements.

Conversely, for developers of AI systems in critical areas, who may not have a formalized QMS, implementing the requirements from scratch represents a more significant challenge, with a deeper impact on their methodologies and development lifecycles.

AI Systems as Safety Components in Regulated Products (Annex I)

The first category, detailed in Annex I of the Regulation, covers AI systems that function as safety components within products already subject to strict Union harmonization legislation. If AI software is integrated into a product in any of the following domains, it is classified as high-risk by default:

| Product Domain | Specific Regulation | Scope of Application |
| --- | --- | --- |
| Machinery and Robotics | Directive 2006/42/EC | Machinery and safety components. |
| | Directive 2014/33/EU | Lifts and safety components for lifts. |
| | Directive 2014/53/EU | Radio equipment. |
| Road Transport | Regulation (EU) 2018/858 | Approval of motor vehicles and their trailers. |
| | Regulation (EU) No 168/2013 | Two- or three-wheel vehicles and quadricycles. |
| | Regulation (EU) No 167/2013 | Agricultural and forestry vehicles. |
| | Regulation (EU) 2019/2144 | General safety of motor vehicles and protection of occupants. |
| Rail Transport | Directive (EU) 2016/797 | Interoperability of the rail system. |
| Air Transport | Regulation (EU) 2018/1139 | Common rules in civil aviation (includes unmanned aircraft). |
| | Regulation (EC) No 300/2008 | Common rules for civil aviation security. |
| Maritime and River Transport | Directive 2014/90/EU | Marine equipment. |
| | Directive 2013/53/EU | Recreational craft and personal watercraft. |
| Cableway Transport | Regulation (EU) 2016/424 | Cableway installations. |
| Healthcare | Regulation (EU) 2017/745 | Medical devices. |
| | Regulation (EU) 2017/746 | In vitro diagnostic medical devices. |
| Toys | Directive 2009/48/EC | Safety of toys. |
| Personal Protective Equipment | Regulation (EU) 2016/425 | Personal protective equipment (PPE). |
| Industrial Components | Regulation (EU) 2016/426 | Gas appliances. |
| | Directive 2014/68/EU | Pressure equipment. |
| | Directive 2014/34/EU | Equipment for potentially explosive atmospheres (ATEX). |

In summary, if an AI system is developed with the intention of being used as a safety component in any of the listed products, or if the AI system itself is a product subject to such legislation, it is automatically classified as high-risk. This obliges developers to integrate the EU AI Act's requirements into the existing compliance frameworks for these products. Thus, the vast majority of medical AI products—from diagnostic imaging analysis software to clinical decision support systems—already covered by the MDR or IVDR regulations would fall into this category. For their developers, this means that the CE marking and certification process for their products must now incorporate and demonstrate compliance with the demanding requirements of the AI Act.

AI Systems Designed to Operate in Critical Areas (Annex III)

The second category focuses on the intended use of the system. An AI system is presumed to be high-risk if it is intended to operate in any of the following areas and specific use cases:

| Critical Area | Specific Use Cases |
| --- | --- |
| Biometric identification | "Real-time" and "post" remote biometric identification systems. |
| Critical infrastructure | AI systems as safety components in traffic management and in the supply of water, gas, heating, and electricity. |
| Education and training | Systems to determine access to educational institutions or to assess students. |
| Employment and worker management | Systems for selecting candidates, assigning tasks, or evaluating performance. |
| Access to essential services | Systems for assessing creditworthiness, eligibility for public benefits, or prioritizing emergency services. |
| Law enforcement | Systems for assessing recidivism risk, polygraphs, evidence analysis, or crime prediction. |
| Migration, asylum, and borders | Systems for assessing security risks, verifying travel documents, or assisting in asylum applications. |
| Justice and democratic processes | Systems to assist judicial authorities or to influence voting behavior. |

Classification in this second category therefore depends fundamentally on the purpose for which the AI system has been designed and marketed. If its use case is among those listed in Annex III, or is analogous in nature to those described, the high-risk presumption is triggered. The assessment must consider whether the system, by its purpose and context, fits the spirit of the identified critical areas, which in turn requires evaluating whether the exception in Article 6 applies.


The Key Exception in Article 6: Relevance of the System's Influence

The regulation introduces a relevant exception in Article 6(3): a system operating in the aforementioned critical areas might not be considered high-risk if its output is purely accessory to the corresponding action and therefore does not substantially influence its outcome.

The relevance analysis is key: if the AI system only performs a preparatory task and the final decision depends on a human who can easily verify or dismiss the recommendation, the system may not be classified as high-risk.

If a system meets the criteria of the first category, or those of the second without this exception applying, it is considered high-risk, and its provider must confront a series of engineering and governance obligations. The overall decision flow can be summarized as in the sketch below.
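To make that flow concrete, here is a minimal, hypothetical Python sketch of the Article 6 triage described above. The class and field names are our own illustrative assumptions, and the booleans deliberately compress legal questions (such as what counts as "purely accessory") that demand case-by-case analysis; treat it as a sketch of the reasoning, not a compliance tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemProfile:
    """Illustrative inputs for the Article 6 triage (field names are assumptions)."""
    annex1_safety_component: bool   # safety component of (or is) a product under Annex I legislation
    annex3_use_case: Optional[str]  # matching Annex III use case, or None
    output_purely_accessory: bool   # output is preparatory/accessory and does not
                                    # substantially influence the outcome

def is_high_risk(profile: AISystemProfile) -> bool:
    # Pathway 1: safety component in a product covered by Annex I legislation.
    if profile.annex1_safety_component:
        return True
    # Pathway 2: an intended use listed in Annex III triggers the high-risk
    # presumption, unless the Article 6 exception for accessory outputs applies.
    # NOTE: Article 6(3) contains nuances not modeled here; for instance,
    # systems that profile natural persons remain high-risk in any case.
    if profile.annex3_use_case is not None:
        return not profile.output_purely_accessory
    # Neither pathway applies.
    return False

# Example: a CV-screening tool (Annex III, employment) whose output drives decisions.
cv_screener = AISystemProfile(
    annex1_safety_component=False,
    annex3_use_case="employment: candidate selection",
    output_purely_accessory=False,
)
print(is_high_risk(cv_screener))  # True
```

In practice, each boolean here stands in for a documented legal assessment; the value of encoding the flow is that it forces teams to record which pathway, if any, triggered the classification.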

Mandatory Requirements for Providers of High-Risk AI Systems

Once a system is classified as high-risk, providers must demonstrate diligence throughout its lifecycle, which translates into specific technical and governance requirements:

  • Risk management system (Article 9): a continuous, iterative process to identify, evaluate, and mitigate risks.

  • Data and data governance (Article 10): quality criteria for training, validation, and testing datasets.

  • Technical documentation (Article 11): documentation demonstrating compliance, kept up to date.

  • Record-keeping (Article 12): automatic logging of events to ensure traceability (see the sketch after this list).

  • Transparency and information for deployers (Article 13): instructions for use that enable correct interpretation of the system's output.

  • Human oversight (Article 14): measures allowing humans to effectively supervise the system.

  • Accuracy, robustness, and cybersecurity (Article 15): appropriate and consistent levels of performance and resilience.

  • Quality management system (Article 17): the provider-level framework that ties all of the above together.
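As a flavor of what one of these obligations can look like in engineering practice, the following is a minimal Python sketch of structured event logging in the spirit of the record-keeping requirement (Article 12). The function and field names are our own illustrative assumptions, not a schema prescribed by the Regulation.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger; in production this would feed durable,
# tamper-evident storage rather than the console.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_system.audit")

def log_inference_event(model_version: str, input_ref: str,
                        output_ref: str, operator_id: str) -> None:
    """Record one inference event as a structured, timestamped log entry."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the output
        "input_ref": input_ref,          # reference to the input, not the raw data
        "output_ref": output_ref,        # reference to the produced output
        "operator_id": operator_id,      # who or what triggered the inference
    }
    audit_log.info(json.dumps(record))

log_inference_event("v1.4.2", "case-001/input", "case-001/output", "triage-service")
```

The design point is traceability: every output can be tied back to a model version, an input, and an operator, which is what later verification of the system's behavior depends on.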

Facing these requirements without a proper framework consumes considerable engineering resources and diverts the team's focus from the product's core innovation.

The Solution: Venturalítica, a Strategic Platform for Regulatory Compliance

Venturalítica offers a SaaS platform designed to translate regulatory complexity into a working framework for development teams, integrating compliance into the software lifecycle.

Our platform provides the following capabilities:

  • Assistance in Risk Classification: The distinction between "substantial influence" and "accessory" is complex. Our intelligent assistant guides teams through this impact analysis so that systems are classified correctly from the earliest phases.

  • Comprehensive Requirements Management: Venturalítica provides a framework and tools to implement each of the EU AI Act's obligations, from risk management to the generation of technical documentation.

  • Optimization of "Time to Compliance": Our key metric is reducing the time needed to achieve compliance. We transform a process that could take months of consulting into an agile, managed workflow, freeing up development resources.

Navigate the regulated AI environment with confidence. Use the regulation as a guide and let Venturalítica be your strategic ally.

Request a demo to learn how Venturalítica can integrate into your development cycle and ensure your system's compliance with the EU AI Act.

In the next post in this series, we will delve into Article 9, concerning risk management systems.
