Article 11 of the EU AI Act requires providers of high-risk AI systems to draw up technical documentation in accordance with Annex IV before placing the system on the market. This is where most companies get stuck.
Technical documentation under Annex IV is not a checkbox exercise. It requires detailed descriptions of your AI system's design, development process, training data, testing methodology, and monitoring capabilities. For US companies unfamiliar with EU regulatory documentation standards, this is typically the most time-consuming compliance step.
The documentation must be kept up to date throughout the system's lifecycle and made available to national competent authorities upon request for a period of 10 years after the system has been placed on the market (Article 18).
The documentation begins with a general description of the AI system: its intended purpose, the provider's name and address, how the system interacts with hardware or software that is not part of the AI system itself, the versions of relevant software, and a description of the forms in which the AI system is placed on the market.
It must also contain a detailed description of the development process: the methods and steps for development, design specifications, system architecture, computational resources used, data requirements, and the general logic of the AI system. For systems involving machine learning, this includes the learning approaches and techniques used, key design choices, and assumptions about the data.
A description of the capabilities and limitations of the system: degrees of accuracy, performance metrics including foreseeable unintended outcomes and sources of risk, human oversight measures pursuant to Article 14, and specifications for input data.
Documentation of the risk management system implemented in accordance with Article 9, including identification of known and foreseeable risks, estimation and evaluation of risks, risk mitigation measures, and residual risk assessment.
Description of training, validation, and testing data sets: their main characteristics, how data was obtained and selected, labelling procedures, data cleaning methodologies, and how the data sets were examined for possible biases. For systems not using training data: details on the input data and logic specifications.
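Bias examination is often documented only in prose; a simple, reproducible check makes the claim auditable. A minimal sketch, using hypothetical record fields (`region`, `decision` are illustrative, not drawn from the Act), that compares label rates across subgroups:

```python
from collections import Counter

def label_distribution_by_group(records, group_key, label_key):
    """Summarize label frequencies per subgroup -- a first-pass check
    of the kind of bias examination Annex IV expects to be documented."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], Counter())[r[label_key]] += 1
    # Normalize counts to rates so subgroups of different sizes are comparable.
    return {
        g: {label: n / sum(c.values()) for label, n in c.items()}
        for g, c in groups.items()
    }

# Hypothetical loan-decision records (field names are illustrative only).
records = [
    {"region": "A", "decision": "approve"},
    {"region": "A", "decision": "approve"},
    {"region": "A", "decision": "deny"},
    {"region": "B", "decision": "deny"},
    {"region": "B", "decision": "deny"},
]
print(label_distribution_by_group(records, "region", "decision"))
```

A large gap in approval rates between subgroups would not by itself prove bias, but it is exactly the kind of finding the documentation should record and explain.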
Where applicable: the methods and techniques used for training the model, key design choices, assumptions, computing infrastructure used, and information about the training data used for fine-tuning.
Information on the testing procedures and results, including the metrics used, the testing data sets, and a description of the testing methodology and results with respect to the requirements set out in Chapter III, Section 2 of the AI Act.
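Test results are only useful to an authority if they can be compared against the accuracy levels the documentation declares. A minimal sketch of such a comparison (metric names and thresholds are illustrative assumptions, not values from the Act):

```python
def evaluate_against_declared(measured: dict, declared: dict) -> dict:
    """Compare measured test metrics with the values declared in the
    technical documentation; flag any metric that falls short."""
    return {
        name: {
            "declared": declared[name],
            "measured": value,
            "meets_declared": value >= declared[name],
        }
        for name, value in measured.items()
    }

declared = {"accuracy": 0.95, "recall": 0.90}   # illustrative thresholds
measured = {"accuracy": 0.961, "recall": 0.884}
report = evaluate_against_declared(measured, declared)
print(report["recall"]["meets_declared"])  # False: 0.884 is below the declared 0.90
```

Running this against every release and archiving the report gives the "testing methodology and results" section a verifiable trail rather than a one-off claim.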
A description of the cybersecurity measures put in place in accordance with Article 15 — resilience, accuracy, robustness, and appropriate measures to address vulnerabilities.
Information about the conformity assessment procedure carried out, and a copy of the EU declaration of conformity (Article 47). For systems subject to third-party conformity assessment: a copy of the certificate issued by the Notified Body.
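One practical way to keep these sections current over the 10-year retention period is to track them as a machine-readable manifest that can be checked in CI. A minimal sketch (the section names and structure below are our own convention, not mandated by the Act):

```python
# Sections paraphrasing the Annex IV elements described above.
REQUIRED_SECTIONS = [
    "general_description",
    "development_process",
    "capabilities_and_limitations",
    "risk_management_system",
    "data_governance",
    "training_methods",
    "testing_procedures",
    "cybersecurity_measures",
    "conformity_assessment",
]

def missing_sections(manifest: dict) -> list[str]:
    """Return sections that are absent or empty in the manifest."""
    return [s for s in REQUIRED_SECTIONS if not manifest.get(s)]

# Hypothetical manifest mapping each section to its document.
manifest = {
    "general_description": "docs/annex-iv/general.md",
    "risk_management_system": "docs/annex-iv/risk.md",
}
print(missing_sections(manifest))
```

Failing a build when `missing_sections` is non-empty turns the "keep up to date throughout the lifecycle" obligation from a policy statement into an enforced process.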
Common mistake: Many US companies attempt to use existing SOC 2 or ISO 27001 documentation as a substitute for Annex IV. These frameworks overlap partially with cybersecurity requirements, but they do not satisfy the specific AI-related documentation requirements — particularly around training data governance, bias assessment, and human oversight specifications.
Annex III Classification — determine which of your systems require this documentation
Conformity Assessment — what happens after documentation is complete
August 2026 Deadline — timeline for completing all requirements
Lexara Advisory guides US companies through every step — from classification to database submission.
Contact Lexara Advisory →

Lexara Advisory LLC is an AI governance consulting firm, not a law firm. This content is for informational purposes only and does not constitute legal advice.