EU AI Act: Overview of the Risk Categories & Go-To Measures!

In the following, you will learn how the EU AI Act classifies risk, how organizations can best prepare for the AI regulation, and how to build AI governance structures in parallel. In addition, you will find a special AI Act offer and a checklist to download.

EU AI Act

A regulatory framework for trustworthy AI WITHIN THE EU

What is the EU AI Act?

The EU AI Act establishes a regulatory framework for lawful, ethical, and robust AI. As the first of its kind globally, it aims to promote trustworthy AI applications within the EU while mitigating potential technological risks. The resulting requirements pose new challenges for businesses striving to ensure compliance and maintain competitiveness. At the same time, the AI Act presents both opportunities and challenges in shaping and utilizing AI systems. You will find more information in the Q&A below.

A risk-based approach regulates AI in Europe.

At the heart of the EU AI law is a risk-based approach. It ensures that higher-risk applications, such as those impacting critical infrastructure or fundamental rights, are subject to stricter controls and compliance requirements, which promotes trust in AI technologies and their safety. The risk-based approach takes into account variables such as your role, the area of application or model, other EU regulations, licence rights, certain exceptions, and special features.

The EU AI Act has been in force since 1st August 2024 and sets out a staggered timetable for enforcement. The first requirements must be met by 2nd February 2025.

#Get EU AI Act ready

Special Offer:

AI Inventory
+ AI Act risk assessment

  • Structured collection of all your AI use cases in Casebase
  • Systematic assessment of your entire AI inventory in accordance with the EU AI Act
  • Identification of regulatory requirements for each use case

from €14,000, incl. Casebase licence.

Regulated Areas of the EU AI Act

On the one hand, the AI Act regulates certain risky areas of application in which an AI system can be used. On the other hand, it regulates the placing on the market and commissioning of certain general-purpose models. In each case, the AI Act defines categorisation criteria for carrying out a risk classification. The challenge is to decipher the complexity of the regulation and derive the specific risk class and compliance requirements.

Prohibited Systems

… means that these AI systems pose an unacceptable risk and may not be operated or distributed on the European market, e.g. real-time biometric identification (Title II, Article 5).

High Risk Systems

The identification of such AI systems is based on clearly defined areas of application in accordance with Article 6.

  • On the one hand, an AI system is high-risk if it is a product safety component or a stand-alone product regulated under one of the EU regulations listed in Annex I and must undergo a third-party conformity assessment under that regulation.
  • On the other hand, an AI system is considered high-risk if its intended purpose corresponds to a use case from Annex III, such as critical infrastructure or the creditworthiness assessment of natural persons.

Although these systems are permitted, they must fulfil various conformity requirements and pass ex-ante conformity checks as well as post-market surveillance (Title III). Irrespective of this classification, they may also be subject to transparency obligations. They can also be based on GPAI models, which are regulated separately.

Limited Risk Systems

As an exception to the rule, a high-risk system according to Annex III is not considered high-risk if it does not significantly influence the outcome of a decision-making process, regardless of whether the decisions are human or automated. The AI Act defines various conditions for this (Art. 6 (3)). Irrespective of this, such systems may still be subject to transparency obligations.

Minimal Risk Systems

Use cases such as chatbots or image manipulation are permitted but subject to information and transparency obligations. These systems form the minimal risk class (Title IV, Article 52).

Not regulated systems

For other AI applications, such as predictive maintenance, no specific compliance requirements or obligations apply. In general, however, voluntary compliance with the Code of Conduct is recommended (Title IX, Article 69).

GPAI models with systemic risk

This category covers GPAI models that are expected to have a significant impact on the European internal market because their capabilities are particularly high or they have a relatively wide range of use (Art. 51).

GPAI models without systemic risk

Even if a GPAI model does not meet the conditions for systemic risk, it is still regulated by the AI Act. For example, certain transparency obligations must be complied with (Art. 51).
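
To make the categories above more tangible, here is a minimal, hypothetical Python sketch that triages a single AI use case into the risk classes just described. The function name, its boolean inputs, and the simplified order of checks are illustrative assumptions of ours, not part of the AI Act or of Casebase; a real classification requires a legal assessment of each criterion.

from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "Unacceptable risk (Title II, Article 5)"
    HIGH_RISK = "High risk (Article 6, Annex I/III)"
    LIMITED_RISK = "Limited risk (Art. 6 (3) exception)"
    MINIMAL_RISK = "Minimal risk (Title IV, Article 52)"
    NOT_REGULATED = "Not regulated (Code of Conduct recommended)"

def classify_use_case(
    prohibited_practice: bool,    # e.g. real-time biometric identification
    annex_i_product: bool,        # safety component/product under an Annex I regulation
    annex_iii_use_case: bool,     # e.g. critical infrastructure, credit scoring
    influences_decisions: bool,   # significantly influences decision outcomes (Art. 6 (3))
    transparency_relevant: bool,  # e.g. chatbot, image manipulation
) -> RiskClass:
    """Hypothetical first-pass triage of one use case; not legal advice."""
    if prohibited_practice:
        return RiskClass.PROHIBITED
    if annex_i_product:
        return RiskClass.HIGH_RISK
    if annex_iii_use_case:
        # Annex III systems that do not significantly influence decision
        # outcomes drop out of the high-risk class (Art. 6 (3)).
        return RiskClass.HIGH_RISK if influences_decisions else RiskClass.LIMITED_RISK
    if transparency_relevant:
        return RiskClass.MINIMAL_RISK
    return RiskClass.NOT_REGULATED

# Example: an Annex III credit-scoring model that drives lending decisions
print(classify_use_case(False, False, True, True, False))  # RiskClass.HIGH_RISK

Note that GPAI models run through a separate track (Art. 51) and are not covered by this per-use-case triage.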


Are you interested in determining the AI Act risk classes for your use cases?

How you get prepared

What businesses should do

As the AI Act shapes the legal framework to foster trust, the challenge now shifts to how organizations can effectively implement and manage these changes. The AI use case is moving to the center of the risk assessment, and governance will be key to becoming AI Act compliant. Sustainable portfolio management of AI use cases is therefore the core of getting AI Act-ready and building AI governance structures.

Download your checklist with 10 points to consider when introducing AI governance and preparing for the AI Act.

Use Case inventory library

Build and maintain a portfolio of AI and ML use cases.

Governance along your AI & ML life cycle

Definition of quality gates and requirements for design and operations based on compliance obligations and trustworthy AI principles.

Management of responsibilities

Clarify responsibilities and make them transparent.

Comprehensive risk management

Identify and mitigate risk potentials systematically and always keep an overview.

Auditable reporting and traceability

Make sure that information about your AI initiatives is quick and easy to find as well as understandable for stakeholders.


Dive deeper into the topic: in the webinar with our dear [at] colleagues, you will find more insights on the AI Act, how to implement it in practice, and how Casebase can support you.


AI Act in a nutshell

What's behind the EU AI Act?

The EU AI Act is a pioneering piece of legislation, developed with insights from the High-Level Expert Group on Artificial Intelligence. This group's recommendations form the foundation of the act, ensuring that AI technologies are lawful, ethical, and robustly designed and operated. These are the principles of trustworthy AI. After all, data and AI can only develop their full potential if users have confidence in the use of the applications. This is why the regulatory focus is on use cases in particular.

What is defined as AI?

AI is defined in a very comprehensive and technology-agnostic way in order to include future developments, such as those in the context of foundation models. In addition, the definition is based on the OECD definition in order to enable a certain interoperability between different legal systems.

Definition of AI systems in the AI Act
An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. (Title I, article 2, 5g (1))

Who is affected?

The harmonization regulations apply across all sectors. However, critical digital infrastructure, such as road traffic and the supply of water, gas, heating, and electricity, is particularly affected (Annex III). Generally affected are system providers [source], distributors & importers [intermediates], and deployers [user entity] of current or future AI systems that are used within the EU.

What is the timeline?

This is the timetable for the EU AI Act on the way to its implementation.

21. April 2021 EU Commission proposes the AI Act
06. December 2022 EU Council unanimously adopts the general approach of the law
09. December 2023 European Parliament negotiators and the Council Presidency agree on the final version
02. February 2024 EU Council of Ministers unanimously approves the draft law on the EU AI Act
13. February 2024 Parliamentary committees approve the draft law
13. March 2024 EU Parliament approves the draft law
12. July 2024 Publication of the law in the Official Journal of the European Union
01. August 2024 Entry into force of the law
02. February 2025 Ban on AI systems with unacceptable risk and provisions related to AI literacy
02. August 2025 Governance rules and obligations for General Purpose AI (GPAI) become applicable
02. August 2026 Start of application of the EU AI Act for AI systems (including high-risk systems according to Annex III)
02. August 2027 Application of the entire EU AI Act for all risk categories (high-risk systems according to Annex I)

Updated 02/08/2024

What about GPAI/foundation models?

General-purpose AI models (foundation models) are considered separately. ‘General-purpose AI system’ means an AI system which is based on a general-purpose AI model and has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems (Title I, Article 2).

In general, providers of GPAI systems are subject to a separate transparency obligation and must therefore pass on information (e.g. technical documentation, compliance with EU copyright law) to downstream operators (Title IV A; article 52c).

If a system can also be classified as high-impact (Title IV A; Articles 52a & d), additional obligations apply:

– Model evaluations
– Assessment and risk mitigation of systemic risks
– Report to the Commission on serious incidents
– Adversarial tests
– Report on energy efficiency
– Cybersecurity

What fines are expected?

Up to 7% of global annual turnover or €35m for prohibited AI violations.

Up to 3% of global annual turnover or €15m for most other violations.

Up to 1.5% of global annual turnover or €7.5m for supplying incorrect information. Caps on fines apply for SMEs and startups.
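
As a worked example of these caps: for most companies the applicable ceiling is the higher of the percentage and the fixed amount, while for SMEs and startups the regulation caps fines at the lower of the two values. The sketch below, with invented tier names, only illustrates this arithmetic and is not legal advice.

def fine_cap(global_turnover_eur: float, violation: str, is_sme: bool = False) -> float:
    """Illustrative upper bound of an AI Act fine (tier names invented)."""
    tiers = {
        "prohibited_practice":   (0.07,  35_000_000),  # 7% or EUR 35m
        "other_violation":       (0.03,  15_000_000),  # 3% or EUR 15m
        "incorrect_information": (0.015,  7_500_000),  # 1.5% or EUR 7.5m
    }
    pct, fixed = tiers[violation]
    candidates = (pct * global_turnover_eur, fixed)
    # SMEs/startups: the lower value caps the fine; otherwise the higher applies.
    return min(candidates) if is_sme else max(candidates)

# Example: EUR 2bn turnover, prohibited-practice violation -> 7% = EUR 140m cap
print(fine_cap(2_000_000_000, "prohibited_practice"))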

What additional efforts are required?

As an organization building or using AI systems, you will be responsible for ensuring compliance with the EU AI Act and should use this time to prepare.

 

In turn, compliance with the regulations causes additional costs, which can be broken down into:

  • One-off costs for establishing compliance (e.g. governance strategy and a quality and risk management program)
  • Recurring costs for the implementation of compliance (e.g. risk assessment, risk mitigation, audits, documentation, etc.)

Compliance obligations depend on the level of risk of an AI system. Most requirements will fall on AI systems classified as “high risk” and on GPAI models determined to be high-impact, posing “systemic risks”.

Depending on the risk threshold of your systems, some of your responsibilities could include (Title III; see the tracking sketch after this list):

1. Conducting a risk assessment

2. Providing a conformity assessment

3. Maintaining appropriate technical documentation and record-keeping

4. Providing transparency and disclosure about your AI system
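
One pragmatic way to operationalise these duties is to track them per use case in your AI inventory. The following minimal sketch shows what such a record could look like; the field names are invented for illustration (a tool like Casebase models this in far more detail):

from dataclasses import dataclass, field

@dataclass
class UseCaseRecord:
    """Illustrative inventory entry for one AI use case (fields invented)."""
    name: str
    risk_class: str
    obligations: dict = field(default_factory=lambda: {
        "risk_assessment": False,
        "conformity_assessment": False,
        "technical_documentation": False,
        "transparency_disclosure": False,
    })

    def open_obligations(self) -> list:
        """Duties still to be closed before go-live."""
        return [k for k, done in self.obligations.items() if not done]

record = UseCaseRecord(name="Credit scoring model", risk_class="high_risk")
record.obligations["risk_assessment"] = True
print(record.open_obligations())  # the three remaining duties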

How Casebase Supports YOU

Understanding the enormity of this task and its significance for the future of your AI lifecycle, we’re here to guide you through this journey with Casebase. In addition to a systematically documented library of your portfolio inventory and central AI & ML lifecycle management, you can look forward to further AI Act-specific features.

AI Act Risk Check

Get ready for the new regulation. Identify and classify your portfolio by risk groups.

EU AI Act Assistant

This smart chatbot supports you with all questions about the EU AI Act.

Quality Gate Checklist

Ensure requirements and compliance standards are met to drive high-quality use cases.

→ See further Casebase features.

Get our free Casebase trial

Free Trial, fast Onboarding

Training & support included

Secure & GDPR compliant

Figure out how the EU AI Act will impact your AI PORTFOLIO.