Annex III of the EU AI Act defines eight areas in which AI systems are classified as high-risk unless a narrow exception applies. If your system falls into any of these categories, registration in the EU database established under Article 71 (required by Article 49) is mandatory before August 2, 2026.
Within each of the eight areas, Annex III names the specific system types that carry the high-risk designation. The European AI Office estimates that between 6,000 and 8,000 high-risk AI systems already operate across EU Member States.
1. Biometrics: remote biometric identification systems (both real-time and post-processing), biometric categorisation systems that infer sensitive attributes such as race, political opinions, or trade union membership, and emotion recognition systems in workplace and education settings. Real-time biometric identification for law enforcement falls under a separate regime in Article 5 (prohibited practices).
US company example: A workforce management platform that uses facial recognition for employee time tracking, deployed by a European subsidiary or available to EU-based clients.
2. Critical infrastructure: AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, or electricity. A 2024 ENISA report found that 38% of critical infrastructure operators in the EU were already using AI for operational monitoring.
US company example: An AI-powered grid management system sold to European energy utilities, or a traffic flow optimization system deployed in EU cities.
3. Education and vocational training: AI systems intended to determine access or admission to educational and vocational training institutions, to evaluate learning outcomes, to assess the appropriate level of education for an individual, or to monitor and detect prohibited behaviour of students during tests.
US company example: An AI-powered admissions screening tool used by a European university, or an automated proctoring system deployed during EU-administered examinations.
4. Employment and workers management: AI systems intended for recruitment and selection, for making decisions affecting the terms of work-related relationships, for task allocation based on individual behaviour or personal traits, and for monitoring and evaluating the performance and behaviour of persons in such relationships.
US company example: An AI hiring tool (resume screening, video interview analysis) used by a company with EU employees or applicants — the most common trigger for US companies under Annex III.
5. Access to essential services: AI systems used to evaluate creditworthiness or establish credit scores, to evaluate and classify emergency calls, to assess eligibility for public assistance benefits, and for risk assessment and pricing in life and health insurance.
US company example: A credit scoring algorithm deployed by a US fintech with European customers, or an insurance underwriting AI model applied to EU policyholders.
6. Law enforcement: AI systems intended for use by law enforcement authorities, or on their behalf, for individual risk assessments, polygraphs and similar tools, evaluating the reliability of evidence, predicting the occurrence or re-occurrence of criminal offences, profiling of natural persons, and crime analytics regarding natural persons.
7. Migration, asylum, and border control: AI systems used as polygraphs or similar tools, for assessing certain risks posed by natural persons entering EU territory, for assisting competent public authorities in examining applications for asylum, visas, or residence permits, and for detecting, recognising, or identifying natural persons in the context of migration, asylum, and border control management.
8. Administration of justice and democratic processes: AI systems intended for use by judicial authorities, or on their behalf, to assist in researching and interpreting facts and the law, and AI systems intended to influence the outcome of an election or referendum or the voting behaviour of natural persons.
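For teams building an AI system inventory as a first compliance step, the eight areas above can be encoded as a simple enumeration and attached to each system. A minimal Python sketch — the enum labels and inventory entries are illustrative shorthand, not official Annex III terminology:

```python
from enum import Enum

class AnnexIIIArea(Enum):
    """Shorthand labels for the eight Annex III areas (illustrative, not official)."""
    BIOMETRICS = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION = 3
    EMPLOYMENT = 4
    ESSENTIAL_SERVICES = 5
    LAW_ENFORCEMENT = 6
    MIGRATION_BORDER = 7
    JUSTICE_DEMOCRACY = 8

# Hypothetical inventory: tag each system with zero or more Annex III areas.
inventory = {
    "resume-screener": [AnnexIIIArea.EMPLOYMENT],
    "chat-support-bot": [],  # no Annex III area -> no high-risk trigger via Annex III
}

def needs_annex_iii_review(system: str) -> bool:
    """A system tagged with any Annex III area needs a high-risk classification review."""
    return bool(inventory[system])
```

Tagging is only a triage step: a non-empty tag list flags the system for the Article 6(3) analysis discussed below, not for an automatic high-risk verdict.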
Not every AI system in these eight areas is automatically high-risk. Article 6(3) provides an exception for systems that perform a narrow procedural task, improve the result of a previously completed human activity, detect decision-making patterns or deviations without replacing or influencing human assessment, or perform a preparatory task for an assessment. However, any system that performs profiling of natural persons is always high-risk, regardless of this exception.
Providers who believe their system qualifies for this exception must document their assessment before placing it on the market (Article 6(4)) and are still subject to registration obligations under Article 49(2).
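The classification logic above can be sketched as a decision function. This is a simplified model for illustration only — field names are hypothetical, and the separate Annex I product-safety route to high-risk status is not modeled:

```python
from dataclasses import dataclass

@dataclass
class SystemAssessment:
    in_annex_iii_area: bool
    performs_profiling: bool
    # Article 6(3) derogation conditions (paraphrased; names are illustrative):
    narrow_procedural_task: bool = False
    improves_prior_human_activity: bool = False
    detects_patterns_without_replacing_human: bool = False
    preparatory_task_only: bool = False

def is_high_risk(s: SystemAssessment) -> bool:
    if not s.in_annex_iii_area:
        return False  # Annex III path does not apply
    if s.performs_profiling:
        return True   # profiling of natural persons is always high-risk
    claims_exception = (
        s.narrow_procedural_task
        or s.improves_prior_human_activity
        or s.detects_patterns_without_replacing_human
        or s.preparatory_task_only
    )
    # If an exception condition applies, the system is not high-risk -- but the
    # provider must still document the assessment (Art. 6(4)) and register (Art. 49(2)).
    return not claims_exception
```

Note the ordering: the profiling check runs before any exception condition is considered, mirroring the rule that profiling overrides the Article 6(3) derogation.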
Annex IV Technical Documentation — what you must document for each high-risk system
Conformity Assessment — self-assessment vs. Notified Body requirements
Registration Step-by-Step Guide — the complete process from inventory to submission
Lexara Advisory guides US companies through every step — from classification to database submission.
Contact Lexara Advisory →
Lexara Advisory LLC is an AI governance consulting firm, not a law firm. This content is for informational purposes only and does not constitute legal advice.
🤖 AI — not a human or lawyer