The integration of artificial intelligence (AI) into business processes continues to gain momentum across all industries, and the number and quality of applications available for a wide variety of use cases are growing just as rapidly. AI systems offer companies enormous opportunities, particularly in automation, efficiency gains, and innovation. Introducing AI technologies, however, is not only a technical and organizational challenge but also, from a legal perspective, anything but trivial. To manage the associated risks and avoid severe sanctions, the relevant legal issues should be considered early: when an AI-supported process is conceived and a suitable AI application is selected, during implementation, and above all during ongoing use. The following article is intended as an initial guide to AI compliance for companies deploying AI systems and provides an overview of the key legal aspects of using AI applications in business.
1. AI and Data Protection
One of the most important legal aspects of using AI applications is the protection of data that qualifies as input data. Input data refers to data that is fed into an AI system or collected directly by it and on the basis of which the system produces an output (as defined in Article 3(33) of the AI Regulation [Regulation (EU) 2024/1689]).
Of particular relevance in this context are, on the one hand, personal data—i.e., information relating to an identified or identifiable natural person (as defined in Article 4(1) of the General Data Protection Regulation (GDPR, Regulation (EU) 2016/679))—and, on the other hand, data constituting trade secrets. In this regard, in addition to the user's own trade secrets, particular attention must also be paid to the trade secrets of third parties.
(1) Personal Data
Whether the data to be used qualifies as personal is often not easy to determine, especially for larger datasets, and in many cases becomes apparent only from the structure of the relevant datasets. The selection and tailoring of input data should therefore be analyzed for the presence of personal data as early as the design phase of an AI application. A simple structural screen of the kind sketched below can serve as a first step.
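By way of illustration, the following Python sketch shows what a first automated screen for personal data in tabular input might look like. The column names and the email pattern are assumptions chosen for the example; such heuristics can flag candidates for review, but they do not replace a legal assessment under the GDPR.

```python
import re

# Heuristic indicators that a column *may* contain personal data. Both the
# column names and the pattern are illustrative assumptions, not a legal test.
SUSPECT_COLUMNS = {"name", "email", "phone", "address", "birthdate", "ip_address"}
EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}")

def flag_personal_data(rows: list[dict]) -> set[str]:
    """Return column names that may contain personal data and need review."""
    flagged = set()
    for row in rows:
        for column, value in row.items():
            if column.lower() in SUSPECT_COLUMNS:
                flagged.add(column)
            elif isinstance(value, str) and EMAIL_RE.search(value):
                flagged.add(column)
    return flagged

sample = [{"customer": "Jane Doe", "contact": "jane@example.com", "turnover": 4200}]
print(flag_personal_data(sample))  # {'contact'} -- names slip through; review stays manual
```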
If personal data is used or processed, companies must ensure that the relevant data protection regulations are complied with. In addition to any specific legal provisions that may apply based on the respective purpose of processing, these are primarily the provisions of the GDPR. In particular, the following requirements must be observed:
- Legal basis for data processing: Personal data may only be processed if there is a sufficient legal basis for doing so. This means that the relevant processing or use of the data requires either a legally prescribed authorization (in particular, the grounds set forth in Article 6(1) of the GDPR) or the explicit consent of the data subject.
- Data minimization and purpose limitation: Only those data that are strictly necessary to achieve the purpose covered by the respective legal basis may be processed (principle of data minimization). Furthermore, the data may be processed only for the specific purpose covered by that legal basis and for no other purpose (principle of purpose limitation). A minimal redaction sketch follows this list.
- Data security: Companies must ensure that, when processing personal data, organizational and, above all, technical measures have been implemented that guarantee an appropriate level of protection in relation to the risks associated with the processing. Particular attention must be paid to the transfer of personal data to third parties (providers and suppliers of relevant IT systems), especially when the transfer takes place to countries outside the European Economic Area (EEA). This is because European lawmakers generally assume that these countries (so-called third countries) do not provide an adequate level of data protection, which is why separate measures must be taken to ensure security in this regard.
- Documentation, transparency and information obligations: If personal data is processed, this must be documented in accordance with the specific requirements of the GDPR. In addition, the data subjects whose personal data is being processed must be informed of the processing in advance in a transparent manner (for example, through privacy policies and other notices).
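To make the data minimization principle concrete, here is a minimal Python sketch of a redaction pass run before free-text input is submitted to an external AI service. The regular expressions are simplified assumptions; production setups typically combine pattern matching with NER-based detection and human review.

```python
import re

# Simplified patterns for likely personal identifiers; real deployments need
# broader detection (names, IDs, addresses) than these two example patterns.
PATTERNS = {
    "EMAIL": re.compile(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}"),
    "PHONE": re.compile(r"\+?\d[\d /()-]{7,}\d"),
}

def minimize(text: str) -> str:
    """Replace likely personal identifiers with placeholders before submission."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Complaint from jane.doe@example.com, call-back +49 170 1234567."
print(minimize(prompt))  # Complaint from [EMAIL], call-back [PHONE].
```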
(2) Other Data and Trade Secrets
Beyond personal data, care must be taken to preserve the confidentiality of data used as input. This applies to the AI system user's own trade secrets, but especially to confidential third-party data that is protected as a trade secret by law and under any confidentiality agreements (NDAs) and may only be used to a limited extent, or not at all, in connection with the relevant AI application.
Here too, the selection and tailoring of the input data must be analyzed carefully when designing the deployment of the respective AI application; a simple safeguard of this kind is sketched below.
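The following Python sketch illustrates one possible safeguard: a pre-submission gate that blocks input referencing internally classified material. The denylist terms are hypothetical placeholders; in practice they would be derived from the company's own classification of protected projects, customers, and documents.

```python
# Hypothetical denylist; in practice sourced from an internal classification.
CONFIDENTIAL_TERMS = {"project aurora", "recipe-7", "acme gmbh"}

def check_input(text: str) -> None:
    """Raise before submission if the input references classified material."""
    lowered = text.lower()
    hits = [term for term in CONFIDENTIAL_TERMS if term in lowered]
    if hits:
        raise ValueError(f"Input blocked, references confidential terms: {hits}")

check_input("Summarize last week's public press release.")      # passes silently
# check_input("Draft a pitch from the Project Aurora figures.") # would raise
```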
2. Requirements of the AI Regulation
The European Regulation on Artificial Intelligence (Regulation (EU) 2024/1689), or AI Regulation for short, establishes for the first time a uniform legal framework for the development, placing on the market, and use of AI systems in the European Union.
The AI Regulation adopts a so-called risk-based approach, classifying AI systems into four categories according to the risk associated with their use. Depending on a system's classification, different obligations apply, particularly to providers and to users of the systems (the latter referred to as "deployers" in the AI Regulation).
The following categories are distinguished based on risk classification:
(1) Systems that represent prohibited practices
AI applications that constitute prohibited practices are regulated in Article 5 of the AI Regulation. These prohibitions primarily concern use cases of relevance to the public sector rather than to private companies, such as AI systems for so-called "social scoring" or for predicting criminal offenses based on profiling. Such systems may neither be placed on the market nor used.
(2) High-risk AI systems (strictly regulated)
So-called high-risk AI systems are strictly regulated; they are defined in Article 6 and Annex III of the AI Regulation. These include, for example, AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity. Remote biometric identification systems and AI systems used for decision support in employment relationships, among others, are also classified as high-risk.
The obligations applicable to deployers of such systems are set forth in Article 26 of the AI Regulation and include, in particular:
- Technical and organizational measures: Implementing appropriate technical and organizational measures (TOMs) to ensure the system is used in accordance with the provider's specifications.
- Human oversight: Ensuring that the system is overseen by trained and competent individuals who are able to monitor and, where necessary, correct the system's decisions.
- Monitoring of operations: Ongoing monitoring of system performance in accordance with the provider's specifications and, where applicable, the obligation to notify the provider in the event of anomalies.
- Logging: Automatically generated logs must be kept for a period appropriate to the system's intended purpose, and in any event for at least six months (a retention sketch follows this list).
- Reporting obligation: Upon becoming aware of a serious incident or a system malfunction, the deployer must immediately inform the provider and the competent authorities.
- Checking the input data: To the extent that the deployer has control over the input data, they must ensure that it is relevant and sufficiently representative in order to minimize erroneous results.
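As an illustration of the logging obligation, the following Python sketch shows a retention job for automatically generated logs. The directory layout and the 400-day window are assumptions; the only requirement taken from Article 26(6) of the AI Regulation is that logs under the deployer's control be kept for at least six months (subject to other applicable retention rules).

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Retention window: an assumed value comfortably above the six-month minimum
# of Article 26(6) AI Regulation for logs under the deployer's control.
RETENTION = timedelta(days=400)

def purge_expired_logs(log_dir: Path, now: datetime | None = None) -> list[Path]:
    """Delete log files older than RETENTION and return the removed paths."""
    now = now or datetime.now(timezone.utc)
    removed = []
    for path in log_dir.glob("*.log"):
        modified = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
        if now - modified > RETENTION:
            path.unlink()
            removed.append(path)
    return removed
```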
(3) Systems with limited risk
Specific transparency obligations apply to AI systems with limited risk, particularly AI systems that interact directly with humans or synthetically generate audio, image, video, or text content (e.g., disclosure of the AI system in the context of chatbots, labeling of synthetic or deepfake content, etc.). These obligations are regulated in Article 50 of the AI Regulation.
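A minimal sketch of how these transparency duties might surface in code, assuming a simple chatbot backend: the disclosure wording and the metadata fields are examples chosen for illustration, not prescribed by the Regulation.

```python
# Example disclosure and provenance labeling; wording and fields are assumed.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def wrap_chat_reply(reply: str, first_turn: bool) -> str:
    """Prefix the AI disclosure on the first turn of a conversation."""
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

def label_generated(content: bytes, model: str) -> dict:
    """Attach machine-readable provenance metadata to synthetic content."""
    return {"content": content, "generator": model, "synthetic": True}

print(wrap_chat_reply("How can I help you today?", first_turn=True))
```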
(4) Minimal risk (no or voluntary requirements)
AI systems with minimal risk constitute the majority of AI systems in use today (for example, translation tools; research, sorting, and filtering systems; and text and communication tools) and are not subject to any specific requirements under the AI Regulation. However, deployers are encouraged to adopt voluntary codes of conduct and best practices to ensure quality and transparency (see also Article 95 of the AI Regulation).
3. AI and Intellectual Property
Intellectual property issues, particularly copyright, are also of significant importance in the use of AI systems. Related questions arise in various places and contexts.
(1) AI-Generated Content and Intellectual Property
Intellectual property issues arise, on the one hand, in relation to the use of content that AI systems generate for the user. Such content may infringe third-party intellectual property rights, particularly copyrights, if it incorporates protected material such as texts or images.
Therefore, when using generative AI systems, appropriate mechanisms for reviewing the generated output should be considered as early as the design phase of the relevant business processes; a simplified example of such an output check follows.
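By way of example, the following Python sketch flags generated text that reproduces a long verbatim run from a reference corpus of known protected material, so that it can be routed to human review before publication. The corpus and the six-word threshold are assumptions; real output controls are considerably more sophisticated.

```python
# Placeholder corpus of known protected texts; an assumption for the example.
KNOWN_PROTECTED = ["the quick brown fox jumps over the lazy dog"]

def needs_review(generated: str, min_overlap: int = 6) -> bool:
    """Flag output that reproduces a long verbatim run from protected material."""
    words = generated.lower().split()
    for reference in KNOWN_PROTECTED:
        for i in range(len(words) - min_overlap + 1):
            window = " ".join(words[i:i + min_overlap])
            if window in reference:
                return True
    return False

print(needs_review("note how the quick brown fox jumps over the fence"))  # True
```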
Conversely, it should be noted that the user generally does not acquire copyright protection in content they generate using AI systems. Leaving aside pre-existing third-party rights that may attach to infringing input or training data, content created by AI systems is generally not protected by copyright, as it typically does not constitute a personal intellectual creation: by definition, such a creation requires that a natural person contribute creatively to the creation process. A practical consequence of this lack of protection is that third parties cannot be prohibited from using the generated content, at least not on copyright grounds.
(2) Protected Materials as Input Data
When selecting input data for AI systems, it should also be borne in mind that the use of materials as input may affect the rights of third parties, in particular copyrights. Depending on the nature of the use and of the material, its origin and the legal situation regarding third-party rights should be examined in advance.
In this regard as well, it is advisable to establish binding internal company rules regarding the type and structure of content permitted as input data (e.g., through appropriate guidelines) and to monitor compliance with these requirements within the framework of defined processes.
4. Employment Law Issues
Depending on the type of AI application and the corresponding area of use, various employment law implications may arise from the use of AI systems in companies.
In addition to issues of employee data protection—such as questions regarding the lawfulness of processing employee data in AI-supported processing procedures, the permissibility of AI-supported automated decisions in an employment context, or the requirement for a data protection impact assessment for certain intensive AI-supported processing procedures—aspects of employee participation in the workplace must also be taken into account from an employment law perspective in connection with the implementation of AI systems.
(1) Rights to Information and Consultation
Prior to the introduction of AI systems in the company, the works council's rights to information and consultation must first be respected. Of particular note here is the specific right to information under Section 90(1)(3) of the Works Constitution Act (BetrVG), according to which the employer must inform the works council in a timely manner about the planning of work procedures and workflows, including the use of artificial intelligence, and provide the necessary documentation. Section 90(2) of the Works Constitution Act (BetrVG) additionally provides for a right to consultation. Accordingly, the employer must consult with the works council regarding the planned measures and their effects on employees—in particular on the nature of their work and the resulting demands placed on employees—in a timely manner so that the works council's suggestions and concerns can be taken into account during the planning process.
(2) Co-Determination Rights
Furthermore, the works council's rights of co-determination pursuant to Section 87(1)(6) and (7) of the Works Constitution Act (BetrVG) are particularly relevant.
Section 87(1)(6) of the BetrVG provides for a right of co-determination in cases where an AI system is used to monitor the behavior or performance of employees. In this regard, it is sufficient if the relevant AI system is objectively suitable for monitoring the behavior or performance of employees. An intention on the part of the employer to monitor is not required in this respect.
Furthermore, pursuant to Section 87(1)(7) of the Works Constitution Act (BetrVG), a right to co-determination also applies if AI systems pose risks to the physical or mental health of employees. However, when AI systems in the form of generative AI are used solely as tools to assist employees, this is generally not the case.
In rarer cases, co-determination under Section 87(1)(1) of the BetrVG might also apply if AI systems have a significant influence on the organization of the workplace or the behavior of employees at the workplace.
Finally, further co-determination rights of the works council may arise from Sections 94 and 95 of the Works Constitution Act (BetrVG) if AI systems are used as part of the company's recruitment measures.
5. Liability and Responsibility in the Use of AI Systems
Another important aspect in connection with the use of AI systems is the issue of liability for damages incurred by third parties as a direct causal result of the use of AI systems.
From the perspective of users or operators of AI systems, which is the relevant one here, situations in which the user deploys AI systems in the course of providing services to customers are particularly significant. The conceivable use cases are diverse: as a tool in product and service development, in logistics, in financial services and payment processing, in marketing, in IT services, in customer communication, or in medical services, to name just a few.
If the customer's damage is attributable to errors in the AI system itself, the user of the AI system will primarily face liability claims from the customer. Contractual claims arising from the user's relationship with the customer come into play here, as do statutory claims for damages, particularly in tort. At the same time, the user must consider potential recourse against the distributor and/or provider of the defective AI system, as well as whether such damages are covered by appropriate insurance policies.
From the user's perspective, it is therefore advisable to pay particular attention to appropriate contractual provisions regarding liability and recourse already during the procurement of an AI system. The same applies, conversely, to contracts with customers if the relevant AI systems are integrated into the provision of services to the customer. In particular, the liability provisions should be consistent with the recourse options against the provider of the AI systems, and such damages should be taken into account when purchasing insurance coverage.
Furthermore, it is important to minimize the company's risks associated with the use of AI systems through appropriate internal corporate policies and guidelines for the deployment of AI systems.
6. Compliance Management
To ensure compliance with company-specific legal requirements related to the use of AI systems, appropriate processes should be considered and integrated from the outset within the framework of the company's internal compliance management.
This includes measures for all three phases of relevant AI projects: (1) design of AI-supported business processes and selection of AI systems, (2) implementation of AI systems, and (3) use of AI systems.
The establishment of appropriate compliance measures should, in addition to defining and documenting the relevant processes, include the introduction of corresponding internal guidelines, in particular binding employee guidelines, together with mechanisms for monitoring compliance with them. Further measures include topic- and application-specific employee training, refreshed and expanded at regular intervals, insofar as such training is not in any case legally mandated, as it is for high-risk AI systems.
Conclusion
The use of artificial intelligence in companies offers great potential, but—depending on the area of application and the type of AI system—this comes with legal requirements that may, in some cases, be considerable in scope. However, if the appropriate compliance measures are taken in a timely manner, these challenges and risks can be effectively managed.