
AI & employment law – what do employers need to consider?

In this episode of our Arbeitsrecht podcast, Isabel Firneis and Sarah Enzi discuss what employers must consider when using AI systems in HR. Under which employment-law conditions is the use of AI permissible? What rights does the works council have? And what should employers bear in mind when their employees use AI at work on their own initiative? Should an AI policy be introduced?

The use of AI in HR management is steadily increasing in Austria as well. AI systems are already being used in almost every area of the employee life cycle – from recruiting and onboarding, through ongoing monitoring, to decisions on promotion or termination. But is this even permissible under employment law?

Do you have questions about this episode or about our Arbeitsrecht podcast in general? We warmly invite you to contact us at arbeitsrecht@wolftheiss.com.


AI for employers

Episode Summary

In this episode of our Arbeitsrecht podcast, Isabel Firneis and Sarah Enzi discuss what employers must consider when using AI systems for HR purposes. Which legal regulations apply? What rights does the works council have? And what should employers be aware of when their employees are in fact using AI at work? Should an AI policy be implemented?

The use of AI in HR management is steadily increasing in Austria. AI systems are already being used in almost every area of the employee life cycle – from recruiting and onboarding, through ongoing monitoring, to decisions on promotion or termination. But is this legally permitted?

AI Act and fundamental rights

The AI Act, which came into force on 1 August 2024, is the world’s first comprehensive legal framework for the use of AI systems. However, the new rules apply only in stages: while most parts of the AI Act will become applicable within 12 to 36 months, provisions such as the ban on AI systems posing an unacceptable risk will apply as of February 2025.

The new rules establish obligations for providers and deployers depending on the level of risk posed by the AI system. These rules will also apply to employers that use AI tools in their business. AI tools used in a human-resources context will often qualify as high-risk AI systems, meaning that significant regulatory obligations will apply.

AI specific regulations on the national level

What is the legal status until all provisions of the AI Act are fully applicable? So far, Austrian employment law contains no AI-specific regulations. However, this does not mean that there are no (employment) rules restricting and regulating the use of AI. On the contrary, anti-discrimination law and data protection law in particular, as well as the rights of the works council, set several boundaries for the use of AI in Austria.

Works council rights

Under Austrian law, works councils have numerous information and monitoring rights, as well as rights of co-determination. The introduction and use of most AI systems for HR purposes will require the prior conclusion of a works council agreement and therefore the consent of the works council. This can pose an obstacle for employers. Even though the works council’s consent can often be replaced by the consent of a conciliation commission, such proceedings take time and can therefore become a competitive disadvantage for the employer.

Anti-discrimination

The Austrian Equal Treatment Act and the Disability Employment Act prohibit direct or indirect discrimination based on gender, ethnicity, religion, ideology, age, sexual orientation, disability, etc. The unconscious algorithmic bias of an AI system making HR decisions may result in discrimination liability for employers, entailing potential compensation for damages, reinstatement and back pay. In addition, the burden of proving non-discrimination is shifted to the employer, who may have difficulty explaining how the AI reached its decision.

Data protection

The GDPR is fully applicable to AI systems, meaning that all principles of data processing apply (e.g. there must be a lawful basis for the processing of personal data). Furthermore, the rights of data subjects, such as the right to erasure, also fully apply. How these rights can be ensured, however, remains unclear, given that, for example, the correction of false data or data erasure may not be possible where a generative AI system is involved. Another important restriction is Art. 22 GDPR, under which automated decision-making, including profiling, without any human oversight is prohibited if the decision produces legal or similarly significant effects concerning an individual.

Use of AI by employees

Besides the legal restrictions and risks that apply to them directly, employers should also be aware of the legal risks that arise when AI systems such as ChatGPT are used by their employees to perform their work. Establishing clear internal rules for the use of AI systems by employees in the performance of their tasks will help minimise these liability risks. Such a policy could, for example, include instructions on which AI tools and which versions may be used (e.g. only the premium version), as well as an obligation to label all work products created using AI.

Practical steps

To ensure compliant use of AI systems within an organisation, whether for HR or general business purposes, companies should first identify the AI systems that are or will be in use. Compliance with the existing legal framework should then be verified, along with the introduction of regular checks of AI tools and adjustments to internal policies and procedures as required.

Contributors