AI Act. Overview and context

Context and regulation
The AI Act is the European 🇪🇺 regulation that governs the development, placing on the market, and use of artificial intelligence systems. It is the first law in the world to regulate AI comprehensively. Its ambition is twofold: to protect fundamental rights, people’s safety, and democratic processes, while fostering innovation and trust in AI technologies. The European Union does not seek to slow AI down, but to prevent its dangerous uses. To do so, it adopts a risk-based approach: the greater a system’s potential impact on people’s lives, the stricter the rules. The regulation applies not only to European companies but also to actors outside the EU as soon as their AI systems are used within European territory, giving it extraterritorial reach comparable to the GDPR.
The core of the text is based on a classification of AI systems into four levels of risk. The first level corresponds to so-called “unacceptable” risks. Certain uses are considered incompatible with European values and are therefore prohibited. This includes, in particular, social scoring of individuals by public authorities, systems that exploit vulnerabilities of certain people (such as children or people with disabilities) to manipulate their behavior, or subliminal manipulation techniques that cause harm. Real-time remote biometric identification in public spaces, such as mass facial recognition, is in principle prohibited, with very narrowly defined exceptions for law enforcement. The idea is clear: certain ethical red lines must not be crossed.
The second level concerns “high-risk” systems, which make up the largest part of the regulation. These systems are not prohibited, but they are heavily regulated because they can significantly affect people’s safety or rights. This includes areas such as recruitment (resume screening or automated candidate evaluation), education (systems that influence student placement or grading), access to essential services such as credit or social benefits, health, critical infrastructure such as energy or transport, and certain uses in justice, law enforcement, or border management. Providers of these systems must put a full risk management system in place across the entire product life cycle. They must ensure the quality of the data used to train models in order to limit bias, produce detailed technical documentation, guarantee traceability through records that enable audits, and provide a sufficient level of transparency. The AI must not operate without human oversight, and the system must be robust, accurate, and protected against cyberattacks. Before being placed on the market, some of these systems must undergo a conformity assessment, comparable to CE marking for other regulated products.
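To make the traceability and human-oversight duties more concrete, here is a minimal Python sketch, assuming a hypothetical resume-screening use case. The class, threshold, and log format are invented for illustration; the regulation requires record-keeping and oversight but prescribes no particular API.

```python
import json
import time
from dataclasses import asdict, dataclass

# Purely illustrative: the Act requires record-keeping and human oversight
# for high-risk systems, but prescribes no particular API or log format.
@dataclass
class Decision:
    score: float      # model output, e.g. a candidate-suitability score
    approved: bool    # final outcome after oversight
    reviewed_by: str  # "auto" or "human"

class AuditedScreeningSystem:
    """Hypothetical resume-screening wrapper: append-only audit log,
    with borderline cases escalated to a human reviewer."""

    def __init__(self, log_path, review_threshold=0.5):
        self.log_path = log_path
        self.review_threshold = review_threshold

    def decide(self, candidate_id, score, human_reviewer):
        if score >= self.review_threshold:
            decision = Decision(score, approved=True, reviewed_by="auto")
        else:
            # Human oversight: low-confidence cases are never auto-rejected.
            decision = Decision(score, approved=human_reviewer(candidate_id, score),
                                reviewed_by="human")
        self._log(candidate_id, decision)
        return decision

    def _log(self, candidate_id, decision):
        # Append-only record so that every decision can be audited later.
        entry = {"ts": time.time(), "candidate": candidate_id, **asdict(decision)}
        with open(self.log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")
```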
The third level corresponds to limited risks. Here the danger is lower, but transparency is essential: users must know they are interacting with a machine. This covers, for example, chatbots or systems that generate images, videos, or artificial voices, such as deepfakes. The obligations mainly involve informing people so they are not deceived. The fourth level, minimal risk, covers the vast majority of AI systems, such as spam filters or movie recommendations. These uses are not subject to heavy constraints, even if the EU encourages the adoption of voluntary codes of conduct.
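Since this four-tier classification drives everything else in the text, it can help to picture it as a simple lookup. The sketch below is illustrative only: the tier assignments mirror the examples above, whereas the Act itself assigns tiers through detailed legal criteria, not a static table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed, but heavily regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative mapping of the use cases mentioned above to their tiers.
# Real classification follows the Act's legal criteria, not a lookup table.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "automated resume screening": RiskTier.HIGH,
    "credit scoring for loan access": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "deepfake image generation": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
    "movie recommendations": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```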
The AI Act also addresses general-purpose AI models, such as large language models that can be integrated into many applications. Their providers must supply technical documentation, publish a summary of the training data, comply with EU copyright law, and put risk management policies in place. For the most powerful models, considered to present “systemic risk” because of their potential large-scale impact, the obligations are reinforced: extensive testing, assessment of systemic risks, stronger cybersecurity measures, and reporting of serious incidents.
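As a rough mental model of these documentation duties, here is a hypothetical Python record. Every field name is invented for illustration; the Act specifies the required content, not any particular data structure.

```python
from dataclasses import dataclass, field

# Hypothetical record of the documentation duties described above.
# Field names are invented for illustration; the Act defines the actual
# content requirements, not this structure.
@dataclass
class GPAIModelDocumentation:
    model_name: str
    training_data_summary: str   # public summary of training content
    copyright_policy: str        # how EU copyright law is complied with
    risk_policy: str             # risk management measures in place
    systemic_risk: bool = False  # True triggers the reinforced obligations
    incident_reports: list = field(default_factory=list)

doc = GPAIModelDocumentation(
    model_name="example-llm",  # invented name
    training_data_summary="Web text and licensed corpora (summary published).",
    copyright_policy="Honors rights-holder opt-outs from text and data mining.",
    risk_policy="Red-teaming before release; misuse monitoring after.",
    systemic_risk=True,  # a very capable model: extra testing and reporting
)
print(doc)
```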
The regulation also establishes a governance system. Each Member State must designate national authorities responsible for supervision, while a European AI Office ensures coordination at Union level. Certain high-risk systems must be registered in a European database, and companies must cooperate with the authorities in case of inspection or investigation.
The penalties are significant and echo those of the GDPR. The most serious violations, including the use of prohibited practices, can result in fines of up to 35 million euros or 7% of annual worldwide turnover, whichever is higher. Other breaches also carry high penalties, which shows the EU’s intent to ensure effective enforcement of the text.
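The ceiling therefore works like the GDPR’s: the higher of a fixed amount and a share of turnover. A quick illustration in Python, using an invented turnover figure:

```python
def max_fine_eur(worldwide_turnover_eur: float) -> float:
    """Ceiling for the most serious violations: the higher of the
    fixed amount (EUR 35 million) and 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * worldwide_turnover_eur)

# For a hypothetical company with EUR 2 billion in annual turnover,
# 7% (EUR 140 million) exceeds the fixed amount, so it sets the ceiling.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```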
Beyond Europe, the AI Act could have a global impact. As with the GDPR, many international companies may apply these rules globally rather than manage different standards by region. This is sometimes called the “Brussels effect,” through which European regulation becomes a de facto international standard. The goal is to strike a balance: avoid uncontrolled AI development without blocking innovation, in order to promote trustworthy, human-centered artificial intelligence.
In summary
The AI Act classifies AI systems by risk level, bans certain uses deemed unacceptable, imposes strict obligations on high-risk systems, provides specific rules for general-purpose AI models, including generative ones, sets up a European oversight structure with heavy penalties, and seeks to shape a framework in which AI remains innovative while respecting people’s rights and safety. It is, in a way, the counterpart of the GDPR for artificial intelligence.