
Many employers are opting to use artificial intelligence (AI) in their companies. The digitalization of work processes is progressing, and in many areas the use of AI is required to remain competitive. Employers can counter the associated legal risks by making their employees aware of them. They should also create clear guidelines for the use of AI in the employment relationship. Such guidelines can address many of these legal risks.
Transparency
Employers should always oblige their employees to document in which work AI was used. This enables employers to check whether the use of AI has led to legal violations. The documentation obligation is also important for employers' rights management: only if it is clear in which work results AI has been used can employers determine to which work results they may not hold rights.
Entering the prompt
In such guidelines, employees should also be made aware of what must be observed when entering prompts. An important requirement is that neither personal data nor trade secrets may be entered into a prompt. It is helpful to clarify for employees which information may fall into these categories.
Furthermore, the guidelines on the use of AI should contain specifications regarding the wording of prompts. The way a prompt is formulated influences the likelihood that the AI's output will infringe the rights of third parties. Here, too, it is important to make employees aware of the importance of careful wording.
Inspection obligations
It is also advisable to impose certain inspection obligations on employees that must always be observed when working with AI.
First, employees should be obliged not to adopt any AI output without first checking it for correctness. This is because most AI tools can still produce incorrect or inaccurate results. So-called hallucinations of the AI cannot be ruled out either.
Second, employees should be obliged to check whether the AI's output infringes the rights of third parties. Even though such an infringement can usually not be ruled out with complete certainty at reasonable effort, such inspection obligations minimize the risk of an infringement occurring. As the wave of lawsuits against AI providers in the USA shows (see the post by Dr. Ursula Feindor-Schmidt on this topic), there is a risk that rights holders could also take action against AI users in the event of infringements.
General guidelines on the use of AI
Guidelines on the use of AI offer employers the opportunity to define general terms of use for dealing with AI. For example, the employer can stipulate that only certain AI tools may be used, or that AI may be used only in certain areas of activity. The private use of AI tools can also be excluded.
The guidelines can also stipulate requirements for training the AI tool where it has been developed internally. With such tools, it is very important that the AI is trained on correct data in order to minimize the risk of incorrect or infringing output.
Authority to issue guidelines on the use of AI
The employer can generally issue such guidelines based on its right to issue instructions in accordance with Section 106 GewO. Since the employer could prohibit the use of AI entirely on the basis of this right, it can a fortiori merely restrict its use. The prerequisite, however, is that neither the employment contract nor the applicable collective agreement contains a provision on this matter. If such a provision exists, the employer can only deviate from it with the consent of the employees or by amending the collective agreement.
Our next article on AI in the employment relationship will explain when and how the employer must involve the works council (“Betriebsrat”) when using AI in the company.