Legal aspects of using AI applications in business

The integration of artificial intelligence (AI) into business processes is gaining momentum across all industries, and the number and quality of applications available on the market for a wide variety of uses are increasing just as rapidly. The use of AI systems offers companies enormous opportunities, particularly in the areas of automation, efficiency gains, and innovation. However, introducing AI technologies is not only a technical and organizational challenge; it is also anything but trivial from a legal perspective. To avert the corresponding risks and avoid severe penalties, the relevant legal issues should be given sufficient consideration as early as the conception phase and the selection of suitable AI applications, as well as during their implementation and, above all, during their use. The following article is intended to serve as an initial guide for companies using AI systems to get started with AI compliance and to provide an overview of the essential legal aspects of using AI applications in business.

1. AI and Data Protection

One of the most important legal aspects of using AI applications is the protection of data that qualifies as so-called input data. Input data refers to data that is fed into an AI system or directly collected by it, and on the basis of which the system produces an output (as also defined in Article 3 No. 33 of the AI Regulation [Regulation (EU) 2024/1689]).

Of particular relevance in this context are, firstly, personal data, i.e., information relating to an identified or identifiable natural person (as defined in Article 4 No. 1 of the General Data Protection Regulation (GDPR, Regulation (EU) 2016/679)), and secondly, data that constitutes trade secrets. In this context, in addition to the user's own trade secrets, particular attention must also be paid to the trade secrets of third parties.

(1) Personal data

Even determining whether the data to be used qualifies as personal data is often not easy, especially with larger datasets, and in many cases the answer only emerges from the structure of the datasets themselves. The selection and definition of the input data must therefore be carefully determined and analyzed with regard to the inclusion of personal data as early as the planning stage of AI applications.

Whenever personal data is used or processed, companies must ensure compliance with the relevant data protection regulations. In addition to any specific legal provisions that may apply depending on the processing purpose, these are primarily the provisions of the GDPR. The following requirements, in particular, must be observed:

  • Legal basis for data processing: Personal data may only be processed if there is a sufficient legal basis for doing so. This means that the processing or use of the data requires either a statutory permission (the grounds of Article 6(1) GDPR are particularly relevant here) or the consent of the data subject.
  • Data minimization and purpose limitation: Furthermore, only data that is strictly necessary for achieving the purpose covered by the respective legal basis may be processed (principle of data minimization). In addition, the data may only be processed for the specific purpose covered by that legal basis and not for any other purpose.
  • Data security: Companies must ensure that organizational and, above all, technical measures are implemented when processing personal data so as to guarantee a level of protection appropriate to the risks associated with the processing. Particular attention must be paid to transfers of personal data to third parties (providers and vendors of the IT systems concerned), especially when the transfer takes place to countries outside the European Economic Area (EEA). This is because European legislators generally assume that these countries (so-called third countries) do not offer a sufficient level of data protection, which is why separate measures must be taken to guarantee it.
  • Documentation, transparency and information obligations: If personal data is processed, this must be documented in accordance with the specific requirements of the GDPR. Furthermore, the data subjects whose personal data is being processed must be informed about the processing in a transparent manner beforehand (e.g., through privacy policies and other notices).
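
The data-minimization principle above can be supported technically. The following sketch strips obvious personal identifiers from free text before it is sent to an external AI system; the patterns, placeholders, and example text are purely illustrative assumptions, and a real deployment would rely on a vetted pseudonymization or anonymization tool rather than simple pattern matching:

```python
import re

# Hypothetical illustration: remove obvious personal identifiers from free
# text before it is passed to an external AI system (data minimization).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d /()-]{6,}\d")

def minimize(text: str) -> str:
    """Replace e-mail addresses and phone numbers with neutral placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(minimize("Complaint from max.mustermann@example.com, tel. +49 30 1234567."))
```

Such a filter would typically sit in the pipeline between the internal data source and the AI provider's interface, so that only minimized text ever leaves the company.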

(2) Other data / trade secrets

Beyond personal data, it is essential to check whether any data intended for use as input is confidential. This applies both to the user's own trade secrets and, in particular, to confidential third-party data, which is protected as a trade secret both by statute and under any non-disclosure agreements (NDAs) and which may therefore be used only to a limited extent, or not at all, in connection with the AI application concerned.

In this respect as well, the selection and tailoring of the input data must be carefully analyzed when designing the use of the respective AI application.

2. Requirements of the AI Regulation

The European Regulation on Artificial Intelligence (Regulation (EU) 2024/1689), or AI Regulation for short, establishes for the first time a uniform legal framework for the development, marketing and use of AI systems in the European Union.

The AI Regulation follows a so-called risk-based approach, classifying AI systems into four categories according to the risks associated with their use. Depending on a system's classification, different obligations apply, particularly to providers and to users of the systems (referred to as "deployers" in the AI Regulation).

The following categories can be distinguished with regard to risk classification:

(1) Systems that represent prohibited practices

AI applications that constitute prohibited practices are regulated in Article 5 of the AI Regulation. This primarily covers applications used by public bodies rather than private companies, such as AI systems for so-called "social scoring" or for predicting criminal offenses on the basis of profiling. Such systems may neither be placed on the market nor used.

(2) High-risk AI systems (strictly regulated)

So-called high-risk AI systems are strictly regulated; they are defined in Article 6 and Annex III of the AI Regulation. These include, for example, AI systems intended for use as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity. Remote biometric identification systems and AI systems used to support decisions in employment matters, among others, are also classified as high-risk AI systems.

The obligations of deployers of such systems are set out in Article 26 of the AI Regulation and include in particular:

  • Technical and organizational measures: Implementing appropriate technical and organizational measures (TOMs) to ensure that the system is used in accordance with the provider's specifications.
  • Human supervision: Ensuring that the system is supervised by trained and competent natural persons who are able to monitor and correct the system's decisions.
  • Monitoring of operations: Ongoing monitoring of system performance in accordance with the provider's specifications and, if necessary, the obligation to inform the provider of any anomalies.
  • Logging: Automatically generated logs must be kept for a reasonable period of time (at least 6 months).
  • Reporting obligation: If the deployer becomes aware of serious incidents or malfunctions of the system, it must immediately inform the provider and the relevant authorities.
  • Checking the input data: Insofar as the deployer has control over the input data, it must ensure that the data is relevant and sufficiently representative in order to minimize erroneous results.
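
Of the obligations listed above, the logging requirement in particular lends itself to a simple technical illustration. The following sketch retains automatically generated log entries and prunes them only once the retention period has expired; the entry format and the 183-day figure (roughly the six-month minimum mentioned above) are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention period: roughly the six-month minimum.
RETENTION = timedelta(days=183)

def prune_logs(logs: list[dict], now: datetime) -> list[dict]:
    """Keep every log entry that is still within the retention period."""
    cutoff = now - RETENTION
    return [entry for entry in logs if entry["timestamp"] >= cutoff]

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
logs = [
    {"timestamp": now - timedelta(days=30), "event": "inference"},
    {"timestamp": now - timedelta(days=400), "event": "inference"},
]
print(len(prune_logs(logs, now)))  # only the 30-day-old entry is kept
```

In practice, such pruning would run as a scheduled job against the system's log store, with the retention period set in line with the legal minimum rather than hard-coded.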

(3) Limited-risk systems

For AI systems with limited risk, in particular AI systems that interact directly with humans or synthetically generate audio, image, video, or text content, specific transparency obligations apply (e.g., indicating the use of AI systems in chatbots, labeling synthetic or deepfake content, etc.). These obligations are set out in Article 50 of the AI Regulation.

(4) Minimal risk (no or voluntary requirements)

Minimal-risk AI systems constitute the majority of AI systems in use today (including, for example, translation tools; research, sorting, and filtering systems; and text and communication tools) and are not subject to any specific requirements under the AI Regulation. However, deployers are encouraged to adopt voluntary codes of conduct and best practices to ensure quality and transparency (see also Article 95 of the AI Regulation).

3. AI and Intellectual Property

Intellectual property aspects, particularly copyright, are also of crucial importance when using AI systems. Corresponding questions arise at various points and in different contexts.

(1) AI-generated content and intellectual property

Intellectual property issues arise, firstly, in relation to the use of content generated for the user by AI systems. This content may infringe the intellectual property rights of third parties, particularly copyrights, if it contains protected materials such as texts or images.

Therefore, when using generative AI systems, appropriate mechanisms for content control of the generated output should be considered as early as the design phase of the corresponding business processes.

Conversely, it should be noted that users generally do not acquire copyright protection of their own in content they generate using AI systems. Content created by AI systems (leaving aside pre-existing third-party rights arising from infringing input and training data) is generally not protected by copyright, as it typically does not constitute a personal intellectual creation. By definition, such a creation requires that a natural person participates in the creative process with an original creative contribution of their own. The lack of copyright protection for the generated content is significant in particular because third parties cannot be prohibited from using this content, at least not on the basis of copyright.

(2) Protected materials as input data

When selecting input data for use in AI systems, it should also be considered that using materials as input data can infringe on the rights of third parties, particularly copyrights. Depending on the type of use and the material employed, the origin of the material and the legal situation regarding third-party rights should be checked beforehand.

In this respect, it is also advisable to regulate the type and structure of the content permitted as input data in a binding manner within the company (e.g., through appropriate guidelines) and to monitor compliance with the requirements within the framework of defined processes.
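
Such an internal guideline can also be backed by a simple technical gate that blocks material the company has not cleared for use as input. The following sketch is a hypothetical illustration; the category names and the allowlist itself are assumptions, and a real guideline would define the permitted categories and review processes in detail:

```python
# Hypothetical allowlist reflecting an internal input-data guideline:
# only material categories that the company has cleared may be submitted.
PERMITTED_CATEGORIES = {"own_marketing_text", "public_domain", "licensed_stock"}

def input_permitted(category: str) -> bool:
    """Return True only for material categories cleared by the guideline."""
    return category in PERMITTED_CATEGORIES

print(input_permitted("public_domain"))         # True
print(input_permitted("third_party_nda_text"))  # False
```

The point of such a gate is less the code itself than the process around it: employees classify material before use, and only cleared categories reach the AI system.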

4. Labor law issues

Depending on the type of AI application and the corresponding area of use, there may be various labor law implications when using AI systems in companies.

In addition to issues of employee data protection – such as questions regarding the lawfulness of processing employee data in AI-supported processing operations or regarding the admissibility of AI-supported automated decisions in the employment context or the requirement of a data protection impact assessment for certain intensive AI-supported processing operations – aspects of employee participation must also be taken into account from an employment law perspective in connection with the implementation of AI systems.

(1) Rights to information and consultation

Before introducing AI systems into a company, the works council's rights to information and consultation must first be observed. In particular, the specific right to information under Section 90 Paragraph 1 No. 3 of the Works Constitution Act (BetrVG) is relevant, according to which the employer must inform the works council in a timely manner about the planning of work processes and procedures, including the use of artificial intelligence, and provide the necessary documentation. Section 90 Paragraph 2 of the Works Constitution Act (BetrVG) additionally provides for a right to consultation. Accordingly, the employer must consult with the works council about the planned measures and their impact on employees, especially on the nature of their work and the resulting demands placed on employees, in such a timely manner that the works council's suggestions and concerns can be taken into account during the planning process.

(2) Co-determination rights

Furthermore, the co-determination rights of the works council pursuant to Section 87 Paragraph 1 Nos. 6 and 7 of the Works Constitution Act are particularly relevant.

Section 87, paragraph 1, number 6 of the German Works Constitution Act (BetrVG) provides for a right of co-determination in cases where an AI system is used to monitor the behavior or performance of employees. It is sufficient if the AI system in question is objectively capable of monitoring employee behavior or performance. An intention on the part of the employer to monitor employees is not required.

Furthermore, according to Section 87 Paragraph 1 No. 7 of the German Works Constitution Act (BetrVG), a right of co-determination also exists if AI systems pose risks to the physical or mental health of employees. However, this is unlikely to be the case when AI systems are used in the form of generative AI merely as an aid for employees.

In rarer cases, co-determination pursuant to Section 87 Paragraph 1 No. 1 of the Works Constitution Act (BetrVG) could also be considered if AI systems have a significant influence on the organization of the company or the behavior of employees within the company.

Finally, further co-determination rights of the works council may arise from Sections 94 and 95 of the Works Constitution Act (BetrVG) if AI systems are used in the context of the company's recruiting measures.

5. Liability and responsibility when using AI systems

Another important aspect related to the use of AI systems is the aspect of liability for damages that occur to third parties as a (causal) result of the use of AI systems.

From the perspective of users and operators of AI systems, which is the decisive perspective here, those constellations are particularly relevant in which the user employs AI systems in the delivery of services to customers. The conceivable application scenarios are manifold: from use as a tool in product and service development, to use in logistics, financial services and payment processing, marketing, IT services, customer communication, or medical services, to name just a few.

If customer damage is attributable to errors in the AI system itself, the user of the AI system will primarily be confronted with the customer's liability claims. These include, on the one hand, contractual claims arising from the user's contractual relationship with its customers and, on the other hand, statutory claims for damages, particularly in tort. At the same time, the user in these cases faces questions of potential recourse against the dealer and/or provider of the defective AI system, as well as of possible insurance coverage for such damage.

From a user's perspective, it is therefore advisable to place particular emphasis on appropriate contractual provisions regarding liability and recourse as early as the procurement stage of an AI system. The same applies, conversely, to contracts with customers when the corresponding AI systems are integrated into the provision of services to the customer. In particular, the liability provisions should be consistent with the recourse options against the AI system provider, and corresponding damages should be considered when purchasing insurance coverage.

Furthermore, it is important to minimize the company's risks in the application of AI systems by implementing appropriate internal guidelines and policies for their use.

6. Compliance Management

To ensure compliance with company-specific legal requirements in connection with the use of AI systems, relevant processes should be considered and integrated into the company's internal compliance management from the outset.

This includes measures for all three phases of relevant AI projects: (1) conception of AI-supported business processes and selection of AI systems, (2) implementation of the AI systems and (3) use of the AI systems.

The establishment of appropriate compliance measures should include – in addition to defining and documenting the processes – the introduction of corresponding internal guidelines, in particular binding employee policies and mechanisms for monitoring compliance. Further measures include topic- and application-specific employee training (insofar as this is not already legally required, as it is, for example, for high-risk AI systems), which should be refreshed and expanded at regular intervals.

Conclusion

The use of artificial intelligence in business offers great potential, but this – depending on the area of application and the type of AI system – is accompanied by sometimes complex legal requirements. However, these challenges and risks can be effectively managed if the appropriate compliance measures are taken in a timely manner.