Inside Workplace Law

The AI Regulation – Can employers still afford to wait?


Introduction

The term “Artificial Intelligence” (AI) may seem new to many, but it was actually coined in the 1950s. However, AI only came to the attention of the general public with the release of ChatGPT at the end of 2022. Since then, not a day has gone by without reports about AI and the rapid changes it will bring to the working world. Whether the hopes of some and the fears of others associated with the spread of AI will actually become reality remains to be seen. It is not far-fetched, however, to predict that AI will increasingly become part of our everyday lives, and the working world will be no exception.

The EU legislator has also reacted. The AI Regulation (Regulation (EU) 2024/1689) came into force on 1 August 2024. However, it will not take full effect until 2 August 2026.

Until then, there still seems to be plenty of time. But do you, as an employer, already know the obligations you will face when using AI? Even if 2026 still seems far away, now is the time to take a closer look.

What is AI?

Which obligations an employer who uses AI is subject to depends first of all on the definition of Artificial Intelligence in the AI Regulation. Art. 3 no. 1 of the AI Regulation defines an AI system as:

“…a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

The definition is intentionally kept broad to encompass as many use cases of AI as possible.

This puts employers in the challenging position of having to assess whether a particular system qualifies as AI under the Regulation. This applies both to the introduction of new AI systems and to existing systems.

The definition contains several undefined legal terms:

“machine-based”

“for explicit or implicit objectives”

“influence physical or virtual environments”

“adaptiveness”

“with varying levels of autonomy”

“infers, from the input it receives”

The Commission has the task of developing “guidelines on the practical implementation of this Regulation” (Art. 96 of the AI Regulation). These guidelines are to relate, among other things, to “the application of the definition of an AI system”.

Unfortunately, the AI Regulation does not specify a deadline by which these guidelines must be developed. Until then, legal practitioners, including employers, will have to grapple with the difficulties of interpreting the term AI.

Additionally, there are AI models, which form the core of an AI system. An AI model can be used in multiple AI systems (e.g. GPT in ChatGPT or in Copilot).

What AI systems and AI models does the AI Regulation distinguish?

The AI Regulation follows a risk-based approach, which we already know from the GDPR. The classification of an AI system determines the requirements that must be adhered to during its operation.

Which risk categories are included in the AI Regulation?

Prohibited AI practices (Chapter II – Art. 5 of the AI Regulation)

High-risk AI systems (Chapter III – Art. 6 – 49 of the AI Regulation)

AI systems with limited risk (Chapter IV – Art. 50 of the AI Regulation)

General-purpose AI models with/without systemic risk (Chapter V – Art. 51 – 55 of the AI Regulation)

Outside of this classification, there are AI systems with low risk. For these, Art. 95 of the AI Regulation provides only a voluntary application of the requirements that apply to high-risk AI systems.
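For a first overview, it can help to keep an internal inventory that records each system in use together with its provisionally assessed risk tier. The following minimal Python sketch illustrates the idea; the tier labels, tool names and assignments are our own invention, and the legal classification itself can of course not be automated:

```python
from enum import Enum

# Risk tiers distinguished by the AI Regulation; the enum names are our
# own shorthand, not terms taken from the Regulation itself.
class RiskTier(Enum):
    PROHIBITED = "Prohibited AI practices (Art. 5)"
    HIGH_RISK = "High-risk AI systems (Art. 6 - 49)"
    LIMITED_RISK = "Limited-risk AI systems (Art. 50)"
    GPAI = "General-purpose AI models (Art. 51 - 55)"
    MINIMAL_RISK = "Low-risk systems, voluntary codes (Art. 95)"

# Hypothetical inventory: tool names and tier assignments are invented;
# the assignment is a legal assessment that code cannot perform.
inventory = {
    "cv-screening-tool": RiskTier.HIGH_RISK,          # cf. Annex III no. 4
    "internal-chat-assistant": RiskTier.LIMITED_RISK,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.value}")
```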

Practical advice

The classification of a system can be difficult. This makes it all the more important for employers to start addressing existing systems now.

Prohibited practices

Starting from 2 February 2025, the use of prohibited practices is banned. Chapter II of the AI Regulation, which governs these prohibited practices, therefore applies ahead of all other provisions concerning AI systems.

Art. 5 (1) of the AI Regulation contains a catalogue of prohibited practices. In working life, the practices in letters (a) and (f) are likely to be particularly relevant.

High-risk AI systems

Annex III lists the AI systems that are classified as high-risk. The Commission has the power to amend this Annex. This is to ensure that the AI Regulation continues to cover the full range of AI applications even in the face of rapid technological development.

For employers, the systems listed in Annex III, no. 4 titled “Employment, workers’ management and access to self-employment” are particularly relevant:

AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;

AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.

What obligations do employers have?

The obligations that employers face in relation to a high-risk AI system depend on whether they develop the AI system in-house or use an external AI solution. The definitions of the various actors can be found in Art. 3 nos. 3 to 7 of the AI Regulation. Since employers will most likely use external AI solutions, they are generally considered the “deployer” of an AI system (cf. Art. 3 no. 4 of the AI Regulation).

The advantage of this is that deployers are subject to significantly fewer obligations than providers.

But what obligations do employers have when deploying a high-risk AI system?

The answer lies in Art. 26 of the AI Regulation.

Art. 26 (1) of the AI Regulation

Deployers shall take appropriate technical and organizational measures to ensure that they use such systems in accordance with the instructions for use accompanying the systems, pursuant to paragraphs 3 and 6. This makes the requirements set by the providers of AI systems crucially important.

Art. 26 (2) of the AI Regulation

High-risk AI systems can only be used under the oversight of natural persons. Employers must assign this oversight to natural persons who have the necessary competence, training and authority and give them the necessary support. These individuals do not necessarily have to be employees; external service providers can also fulfil this role.

The providers must already take this obligation into account when developing high-risk AI systems. They must be designed and developed in such a way that they can be effectively overseen (Art. 14 (4) of the AI Regulation).

The aim of human oversight is to “prevent or minimise the risk to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse” (compare with Art. 14 (2) of the AI Regulation).

Art. 4 of the AI Regulation also requires deployers of AI systems (regardless of their classification) to take measures to ensure that the employees involved in the deployment or use of AI systems have a sufficient level of AI literacy.

Art. 26 (4) of the AI Regulation

Deployers shall further ensure that input data under their control is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.
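What counts as “sufficiently representative” will depend on the individual case. Purely as an illustration, a deployer might run plausibility checks comparing the composition of the input data against a reference population; the categories, shares and tolerance in the following sketch are all invented:

```python
# Hypothetical plausibility check: compare group shares in the input data
# of a high-risk AI system against a reference population (e.g. the
# applicant pool). Categories, shares and the tolerance are invented.
reference = {"group_a": 0.48, "group_b": 0.52}  # expected shares
observed = {"group_a": 0.30, "group_b": 0.70}   # shares found in the input data
TOLERANCE = 0.10                                # arbitrary threshold for review

for group, expected in reference.items():
    deviation = abs(observed.get(group, 0.0) - expected)
    if deviation > TOLERANCE:
        print(f"{group}: deviation {deviation:.2f} exceeds tolerance, review input data")
```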

Art. 26 (5) of the AI Regulation

The deployment of a high-risk AI system must be monitored on the basis of the instructions for use. If certain conditions are met, there is an obligation to inform the provider or the market surveillance authority. The use of the system may have to be suspended.

Art. 26 (6) of the AI Regulation

The deployer must keep logs that are automatically generated by the system for at least six months.
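In practice, this means retention tooling must not delete such logs early. Below is a minimal sketch of a retention check, assuming a hypothetical log directory and reading “six months” generously as 183 days; other law (e.g. data protection) may require or forbid longer storage:

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

LOG_DIR = Path("/var/log/hr-ai-system")  # hypothetical log directory
MIN_RETENTION = timedelta(days=183)      # roughly six months, read generously

def past_minimum_retention(log_file: Path) -> bool:
    """True once a log file is older than the minimum retention period.
    Only then may deletion even be considered; before that it must be kept."""
    mtime = datetime.fromtimestamp(log_file.stat().st_mtime, tz=timezone.utc)
    return datetime.now(tz=timezone.utc) - mtime > MIN_RETENTION

for f in sorted(LOG_DIR.glob("*.log")):
    status = "may be reviewed for deletion" if past_minimum_retention(f) else "must be kept"
    print(f"{f.name}: {status}")
```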

Art. 26 (7) of the AI Regulation

Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives (works councils) and the affected workers that they will be subject to the use of the high-risk AI system. 

Art. 27 of the AI Regulation

Certain deployers listed in Art. 27 of the AI Regulation must perform an assessment of the impact on fundamental rights before using high-risk AI systems.

This includes the following deployers:

bodies governed by public law

private entities providing public services

deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III

Can a deployer become a provider?

Yes!

Art. 25 (1) of the AI Regulation governs the cases in which a deployer is considered a provider of a high-risk AI system and is therefore subject to the provider obligations under Art. 16 of the AI Regulation.

This is the case when

the deployer puts their name or trademark on a high-risk AI system already placed on the market or put into service;

the deployer makes a substantial modification to a high-risk AI system in such a way that it remains a high-risk AI system;

the deployer modifies the AI system in such a way that the AI system concerned becomes a high-risk AI system.

Impending sanctions

As we already know from the GDPR, the AI Regulation also imposes significant sanctions for violations of the Regulation.

For example, the use of prohibited practices can be subject to fines of up to EUR 35 000 000 or up to 7% of the total worldwide annual turnover, whichever is higher (Art. 99 (3) of the AI Regulation). Non-compliance with the deployer’s obligations under Art. 26 of the AI Regulation can likewise be subject to fines of up to EUR 15 000 000 or up to 3% of the total worldwide annual turnover (Art. 99 (4) of the AI Regulation).
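To see what these ceilings mean in practice, consider a small worked example; the turnover figure is invented, and “whichever is higher” is modelled with max:

```python
# Worked example of the fine ceilings; the turnover figure is invented.
turnover = 600_000_000  # total worldwide annual turnover in EUR (hypothetical)

cap_prohibited = max(35_000_000, 0.07 * turnover)  # Art. 99 (3): whichever is higher
cap_deployer = max(15_000_000, 0.03 * turnover)    # Art. 99 (4)

print(f"Prohibited practices: fine of up to EUR {cap_prohibited:,.0f}")  # EUR 42,000,000
print(f"Deployer obligations: fine of up to EUR {cap_deployer:,.0f}")    # EUR 18,000,000
```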

Conclusion

There is still enough time to prepare for the application of the AI Regulation. However, employers should use that time

to check which of the systems they use qualify as AI within the meaning of the Regulation,

to train the employees in the use of AI,

to take into account the co-determination rights of the works council and

to initiate all further technical and organizational measures in good time.

Dr. Michael Witteler

Dr. Michael Witteler specializes in data protection law matters at the interface of employment law and data protection. He is Head of PWWL’s Data & Privacy Practice Group.
