Schalast | The AI regulation

The Artificial Intelligence Act (the AI Act) was adopted by the Council of the European Union on 21 May 2024, marking the world’s first comprehensive legislative measure to regulate artificial intelligence. The AI Act aims to establish a standardised legal framework for both the development and utilisation of AI technologies within the EU. Similar to its pioneering approach with the regulation of crypto assets under MiCAR, the EU is positioning itself at the forefront of AI regulation.

Following approval by the European Parliament on 13 March 2024, the final text has now been adopted by the Council. The Act will enter into force 20 days after its publication in the Official Journal of the EU, which is expected by the end of June. Owing to a complex system of transitional periods, the bulk of the new regulations will not take effect until mid-2026.

Given that prohibited AI practices will be banned just six months after the Act enters into force and stringent compliance requirements will apply to high-risk and other AI systems, it is crucial for market participants and companies planning to enter the AI sector to familiarise themselves with the new regulations early on.

Intensive discussions

The legislative process for the AI Act was marked by intensive discussions, resulting in numerous amendments. These changes addressed various aspects, including the definition of AI systems and the scope of high-risk obligations. The substantial “last-minute” modifications, beyond a corrigendum for linguistic and editorial refinements, highlight the extensive deliberation devoted to individual points.

Definition of AI

Unsurprisingly, the material scope of the AI Act was one of the most contentious aspects of the entire legislative process. Ultimately, a definition based on the OECD standard was adopted, ensuring alignment at the international level.

The text now reads: “[‘AI system’ means] a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

During the legislative process, there were arguments that “pure” software without typical AI elements could also fall under the scope of the AI Act. However, Recital 12 of the AI Act specifies that a key characteristic of AI systems is their capability to “infer”, which seems to contradict this inclusion. It also clarifies that the definition of AI should not cover software systems that are based on rules defined solely by natural persons to automatically execute operations. Nonetheless, practical implementation will present tricky demarcation issues, particularly with deep learning models.

Personal scope of application

The personal scope of application of the AI Act includes both providers and deployers of AI systems. Providers, as defined by the AI Act, are the primary targets of the new regulations. This category not only includes developers and manufacturers but also those who commission the production of AI and subsequently market it under their own name. Thus, companies that commission third-party development and then operate the AI fall under the definition of providers according to the AI Act.

However, the AI Act does not cover the purely private use of AI systems outside of professional activities. This exclusion means that private use of tools like ChatGPT remains outside the Act’s personal scope of application.

Risk-based regulatory approach - four risk levels

The legislator has adopted a “risk-based approach” for the AI Act. Under this approach, AI systems are evaluated based on their potential risk to people’s safety, health and fundamental rights. The AI Act categorises these risks into four levels. The greater the risk, the higher the assigned risk level, and correspondingly, the more stringent the compliance obligations that must be met.

The following four categories of AI systems are governed in detail:

  1. Prohibited AI practices (Article 5 AI Act),
  2. High-risk AI systems (Article 6 AI Act),
  3. AI systems with limited risk (Article 52 AI Act) and
  4. AI systems with minimal risk (Article 69 AI Act).

Prohibited practices

AI technologies with absolutely unacceptable risks are entirely prohibited under an “ultima ratio approach”. This includes, for instance, social scoring systems. Additionally, AI systems designed for subliminal influence that operate outside of a person’s awareness are banned if their significant influence is intended to cause physical or psychological harm to that person or others. However, proving that an AI system is deliberately aimed at manipulating or influencing consciousness will be difficult in practice and will pose considerable challenges for enforcement authorities.

High-risk AI systems

Furthermore, the use of “high-risk AI systems”, particularly in areas that are sensitive to security or fundamental rights, is subject to specific provisions. These systems lack a precise legal definition; instead, the legislator employs a dynamic system with normative points of reference outlined in Articles 6 et seq. AI Act.

A high-risk system is defined as:

  1. a system that is used as a safety component of another product, or
  2. a product that is itself subject to specific EU rules listed in Annex II of the AI Act.

The following compliance obligations apply to high-risk systems and should be emphasised:

  • establishing a risk-management system and implementing market surveillance measures throughout the AI system’s lifecycle
  • implementing comprehensive data governance to avoid bias and achieve representative results
  • creating technical documentation and instructions for the intended purpose and proper use of the AI system
  • introducing human supervision of the AI, along with automatic logging and error analysis
  • implementing measures to ensure the AI system’s cybersecurity
  • performing a conformity assessment and obtaining the EU Declaration of Conformity
  • registering certain high-risk AI systems in an EU database.

AI systems with limited risk

Specific additional transparency obligations apply to AI systems with limited risk. Notably, these obligations require a label for certain AI-generated content, informing users that they are interacting with an AI; this is relevant for chatbots, for instance. Additionally, deep fakes and other artificially generated or manipulated audio or video files must be clearly identified as having been created by an AI.

AI systems with minimal risk

For other AI systems, adherence to a voluntary code of conduct is encouraged. According to EU regulators, this measure aims to bolster general societal trust in AI applications.

Noteworthy modifications made to the text of the AI Act prior to its adoption by the Council of the European Union include the following points:

- Open source AI systems

Contrary to the original plan in the legislative process, which faced substantial criticism, open source AI systems are now generally excluded from the scope of the AI Act. The only exceptions are high-risk AI systems and prohibited AI practices.

- Responsibility of the AI Office

The role of the European Artificial Intelligence Office (AI Office), particularly in supervising general-purpose AI, has been clarified through an expanded definition. Additionally, the European Data Protection Supervisor has been designated as the market surveillance authority as regards the use of AI by EU institutions.

- AI practice of subliminal influence

As previously mentioned, subliminal influencing of human users through AI is generally prohibited. Under the current wording, it is necessary and sufficient that significant harm to persons is reasonably likely. Originally, it was required that the person or group of persons actually or probably suffered significant harm.

- Presumption of conformity

In the final version, the presumption of conformity now also applies to general-purpose AI that complies with harmonised standards. During the legislative process, this presumption was initially intended only for high-risk AI systems, but it has now been expanded.

- The scope of application of high-risk AI obligations

Finally, several changes have been made regarding high-risk AI systems to clarify the scope of the associated obligations.

For instance, the use of AI-based lie detectors or comparable tools in migration and border controls now explicitly falls under stricter requirements. These stringent standards also extend to third parties operating on behalf of public authorities or EU institutions.

Entry into force and applicability

According to general provisions, the AI Act will enter into force 20 days after its publication in the Official Journal of the EU, which is anticipated by the end of June.

It is important to distinguish between the Act’s entry into force and its applicability. The AI Act includes a complex system of staggered transitional rules:

  • Six months after entry into force, the regulations on prohibited AI practices will apply, and their use must be terminated.
  • Twenty-four months after entry into force, the remaining provisions of the AI Act will become applicable.
  • Exceptions apply to certain obligations, such as those related to high-risk AI systems, for which an extended transition period of 36 months after the AI Act’s entry into force is provided.

Sanctions

Market participants face severe sanctions for violations of the AI Act, with administrative fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. Additionally, there is the risk of legal action from competitors. Those affected by a breach of the AI Act can also assert civil claims for damages.