Get EU AI Act ready!

The EU’s AI Act is just around the corner and sets a framework for lawful, ethical, and robust AI. At the same time, it introduces new challenges and efforts for companies leveraging AI technologies. Casebase supports you in setting the right course today: prepare your company for the AI Act and build AI governance structures in parallel. Have a look at our checklist.

EU AI ACT

A regulatory framework for trustworthy AI WITHIN THE EU

What’s behind the AI Act?

The EU AI Act is a pioneering piece of legislation aimed at regulating artificial intelligence systems within the European Union, developed with insights from the High-Level Expert Group on Artificial Intelligence. This group’s recommendations form the foundation of the act, ensuring that AI technologies are lawful, ethical, and robustly designed and operated. These are the principles of trustworthy AI. After all, data and AI can only develop their full potential if users have confidence in the use of the applications. That is why the regulatory focus is not on technology but on use cases.

Risk-based approach

At the heart of the EU AI Act is a risk-based approach. This ensures that higher-risk applications, such as those impacting critical infrastructure or fundamental rights, undergo stricter controls and compliance requirements, promoting trust and safety in AI technologies.

Risk classes EU AI Act

Unacceptable risk means these AI systems are prohibited from being operated or distributed on the European market, e.g. real-time biometric identification (Title II, Article 5).

AI systems classified as high risk, such as use cases in critical infrastructure or credit scoring of natural persons, are permitted but must undergo extensive compliance requirements, ex-ante conformity assessments, and post-market monitoring (Title III).

Permitted but subject to information and transparency obligations are, for example, chatbot use cases or image manipulation. These systems form the limited risk class (Title IV, Article 52).

For low/minimal-risk AI applications, such as predictive maintenance, no specific compliance requirements or obligations apply. Voluntary adherence to codes of conduct is nevertheless recommended (Title IX, Article 69).
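As a sketch, the four tiers can be expressed as a simple lookup. The mapping below merely mirrors the examples named above; it is an illustration, not a legal classification tool, and all names are hypothetical:

```python
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "prohibited (Title II, Article 5)"
    HIGH = "permitted with conformity assessment (Title III)"
    LIMITED = "permitted with transparency obligations (Title IV, Article 52)"
    MINIMAL = "no specific obligations (Title IX, Article 69)"

# Hypothetical mapping of the example use cases from the text
# to their risk classes -- for illustration only.
EXAMPLE_CLASSIFICATION = {
    "real-time biometric identification": RiskClass.UNACCEPTABLE,
    "credit scoring of natural persons": RiskClass.HIGH,
    "customer service chatbot": RiskClass.LIMITED,
    "predictive maintenance": RiskClass.MINIMAL,
}

def risk_class(use_case: str) -> RiskClass:
    """Look up the risk class of a known example use case."""
    return EXAMPLE_CLASSIFICATION[use_case]

print(risk_class("predictive maintenance").name)  # prints MINIMAL
```

In practice, classification requires a case-by-case legal assessment; a static lookup like this only illustrates how a portfolio inventory might tag use cases by tier.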

Interested in identifying the AI Act risk classes of your use cases?

AI Act in a nutshell

What is defined as AI?

The AI Act defines AI comprehensively and in a technology-agnostic way so that future developments, such as foundation models, are covered. The definition also follows the OECD definition in order to enable a degree of interoperability between different legal systems.

Definition of AI systems in the AI Act
An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. (Title I, article 2, 5g (1))

Who is affected?

The harmonization regulations apply across all sectors. However, critical infrastructure, such as road traffic and the supply of water, gas, heating, and electricity, is particularly affected (Annex III). Generally affected are system providers [source], distributors & importers [intermediaries], and deployers [user entity] of current or future AI systems that are used within the EU.

What is the timeline?

This is the timeline of the EU AI Act on its way to full application.

| Date | Milestone |
| --- | --- |
| 21 April 2021 | EU Commission proposes the AI Act |
| 6 December 2022 | EU Council unanimously adopts the general approach of the law |
| 9 December 2023 | European Parliament negotiators and the Council Presidency agree on the final version |
| 2 February 2024 | EU Council of Ministers unanimously approves the draft law on the EU AI Act |
| 13 February 2024 | Parliamentary committees approve the draft law |
| 13 March 2024 | EU Parliament approves the draft law |
| 20 days after publication in the Official Journal | Entry into force of the law |
| 6 months after entry into force (approx. end of 2024) | Ban on AI systems with unacceptable risk |
| 9 months after entry into force (approx. Q1/Q2 2025) | Codes of conduct are applied |
| 12 months after entry into force (approx. Q2 2025) | Governance rules and obligations for general-purpose AI (GPAI) become applicable |
| 24 months after entry into force (approx. Q2 2026) | Start of application of the EU AI Act for high-risk AI systems under Annex III |
| 36 months after entry into force (approx. Q3/Q4 2026) | Application of the entire EU AI Act for all risk categories, including high-risk AI systems under Annex II |
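Because the transition periods are defined as offsets from entry into force, the concrete deadlines can be derived with simple date arithmetic once that date is known. A minimal sketch — the entry-into-force date below is a placeholder assumption, and `add_months` is a hypothetical helper:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for simplicity)."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, min(d.day, 28))

# Placeholder entry-into-force date -- an assumption for illustration,
# not the official date.
entry_into_force = date(2024, 8, 1)

# Transition periods in months, as listed in the timeline above.
milestones = {
    "Ban on AI systems with unacceptable risk": 6,
    "Codes of conduct are applied": 9,
    "GPAI governance rules become applicable": 12,
    "High-risk AI systems under Annex III": 24,
    "Full application incl. Annex II": 36,
}

for name, months in milestones.items():
    print(f"{add_months(entry_into_force, months)}: {name}")
```

Substituting the actual entry-into-force date yields your organization's real compliance deadlines.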

Updated 13/03/2024

What about GPAI/foundation models?

General-purpose AI (foundation models) is considered separately. ‘General-purpose AI system’ means an AI system which is based on a general-purpose AI model and that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems (Title I, Article 2).

In general, providers of GPAI systems are subject to a separate transparency obligation and must therefore pass on information (e.g. technical documentation, compliance with EU copyright law) to downstream operators (Title IV A; article 52c).

If a system can also be classified as high-impact (Title IV A; Articles 52a & d), additional obligations apply:

– Model evaluations
– Assessment and risk mitigation of systemic risks
– Report to the Commission on serious incidents
– Adversarial tests
– Report on energy efficiency
– Cybersecurity

What fines are expected?

Up to 7% of global annual turnover or €35 million for prohibited AI practices.

Up to 3% of global annual turnover or €15 million for most other violations.

Up to 1.5% of global annual turnover or €7.5 million for supplying incorrect information. Caps on fines apply to SMEs and startups.
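Each tier pairs a turnover percentage with a fixed amount. Assuming a GDPR-style “whichever is higher” rule (an assumption for illustration; check the final legal text for your case), the upper bound of a fine can be sketched as:

```python
def max_fine(turnover_eur: float, pct: float, cap_eur: float) -> float:
    """Upper bound of a fine tier: the higher of a percentage of global
    annual turnover and a fixed amount (assumed 'whichever is higher' rule)."""
    return max(turnover_eur * pct, cap_eur)

# Example: EUR 2bn global annual turnover, prohibited-practice tier
# (7% or EUR 35m, whichever is higher).
print(max_fine(2_000_000_000, 0.07, 35_000_000))  # prints 140000000.0
```

For smaller companies the fixed amount dominates; for large ones the turnover percentage does, which is what makes the penalties scale with company size.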

What additional efforts are required?

As an organization building or using AI systems, you will be responsible for ensuring compliance with the EU AI Act and should use this time to prepare.

 

In turn, compliance with the regulations causes additional costs, which can be broken down into

  • One-off costs for establishing compliance (e.g. governance strategy and a quality and risk management program)
  • Recurring costs for the implementation of compliance (e.g. risk assessment, risk mitigation, audits, documentation, etc.)

Compliance obligations depend on the risk level of an AI system. Most requirements fall on AI systems classified as “high risk” and on GPAI determined to be high-impact, posing “systemic risks”.

Depending on the risk threshold of your systems, some of your responsibilities could include (Title III):

1. Conducting a risk assessment

2. Providing a conformity assessment

3. Maintaining appropriate technical documentation and record-keeping

4. Providing transparency and disclosure about your AI system

How you get prepared

What businesses should do

As the AI Act shapes the legal framework to foster trust, the challenge now shifts to how organizations can effectively implement and manage these changes. The AI use case moves to the center of the risk assessment, and use case management becomes the key to AI Act compliance. Sustainable portfolio management is therefore the basis for getting AI Act-ready and building AI governance structures.

Download here your checklist with 10 points you should consider when introducing AI governance and preparing for the AI Act. 

Use Case inventory library

Build and maintain a portfolio of AI and ML use cases

Governance along your AI & ML life cycle

Definition of quality gates and requirements for design and operations based on compliance obligations and trustworthy AI principles.


Management of responsibilities

Clarify responsibilities and make them transparent.


Comprehensive risk management

Identify and mitigate risk potentials systematically and always keep the overview.


Auditable reporting and traceability

Make sure that information about your AI initiatives is quick and easy to find as well as understandable for stakeholders.


Dive deeper into the topic: in the webinar with our [at] colleagues, you will find more insights on the AI Act, how to implement it in practice, and how Casebase can support you.


How Casebase Supports YOU

Understanding the scale of this task and its significance across the AI lifecycle, we’re here to guide you through this journey with Casebase. In addition to a systematically documented library of your portfolio inventory and central AI & ML lifecycle management, you can look forward to further AI Act-specific features.


AI Act Risk Check

Get ready for the new regulation. Identify and classify your portfolio by risk groups.


EU AI Act Assistant

This smart chatbot supports you with all questions about the EU AI Act.


Quality Gate Checklist

Ensure requirements and compliance standards are met to drive high-quality use cases.

→ See further Casebase features.

Get our free Casebase trial

Free Trial, fast Onboarding

Casebase Tool Training

Training & support included


Secure & GDPR compliant

Find out how the EU AI Act will impact your AI PORTFOLIO.