The European Parliament has passed the first AI Act: what are the requirements for businesses?

Barbashyn Law Team: Sergiy Barbashyn, attorney and managing partner of Barbashyn Law Firm, and lawyer Anastasia Vladiyeva
4 June 2024 · 6 min read

In a world where artificial intelligence is rapidly becoming a part of our daily lives and a key element in the development of business and technology, regulating its use has become a critical task for lawyers and legislators. In May of this year, the EU Council approved the world’s first legislative act on artificial intelligence—the Artificial Intelligence Regulation, also known as the AI Act.

In an expert article for AIN.UA, managing partner of Barbashyn Law Firm, Sergiy Barbashyn, and lawyer Anastasia Vladiyeva discussed the main provisions of the regulation and provided practical recommendations for businesses on how to prepare for the enactment of this legislation.

The regulation aims to establish standards for developing, implementing, and using AI systems to ensure ethical, fair, and safe use of this technology.

Where and to whom the new law will apply

The adoption of the AI Act in the European Union has far-reaching implications that extend beyond the territorial boundaries of the EU. One of its key features is its extraterritorial effect. It will apply to all individuals or companies that put AI products into operation or place them on the EU market, regardless of where these companies are located. For example, if a company in the US develops an AI chatbot, but some of its users are in the EU, the company must comply with the regulation. This is why the act is sometimes referred to as GDPR 2.0.

It is also important to note that the AI Act includes provisions that apply not only to AI system developers but also to importers and distributors who provide access to such systems on the EU market.

Additionally, the act contains provisions aimed at users who utilize AI in their professional activities. For example, suppliers and users of AI systems must ensure that their personnel and anyone working with these systems have sufficient knowledge of AI.

What businesses should pay attention to

We’ve established who the act will apply to, so now, let’s look at the main innovations it will bring.

The AI Act defines new standards and requirements for AI systems depending on their risk level. Let’s examine individual provisions of the act and understand the specific changes and innovations.

1. Prohibited Systems

The act includes a list of AI practices that will be prohibited because they pose an unacceptable threat to fundamental human rights. For instance, it prohibits AI systems that use manipulative or deceptive techniques to materially impair a person's ability to make informed decisions. Suppose a company releases a mobile application that uses artificial intelligence to personalize advertisements and employs subliminal techniques that push users to spend excessive time in the app or make unnecessary purchases. Such a system can significantly impair users' ability to make informed decisions, causing them financial and emotional harm.

Emotion recognition AI systems are also prohibited in the workplace and in educational institutions, except where these systems are used for medical or safety purposes. Suppose a company uses an AI system that automatically analyzes facial expressions, voice tone, or even movements to determine employees' psychological state. Even though such a system might help the company respond promptly and prevent adverse consequences for health and productivity, it intrudes on employees' personal privacy.

2. High-Risk systems

The act also outlines requirements for high-risk AI systems. These include AI systems used in critical areas: medicine, education, law enforcement, judiciary, and political activities.

The AI Act sets specific requirements for suppliers of these systems. For example, technical documentation must be prepared before a high-risk AI system is marketed or put into operation. Moreover, this documentation must be continuously updated according to the changes made in the system throughout its lifecycle.

The act also imposes obligations on AI system users who use them professionally. Before using a high-risk AI system in the workplace, employers must inform their employees.

3. Transparency in Usage

The act also includes specific transparency requirements, which are crucial for avoiding the negative consequences of AI use.

High-risk AI systems must be designed and developed so that their operation is sufficiently transparent to users. This way, users can correctly interpret the output they receive from the system and use it appropriately.

Furthermore, suppliers of AI systems intended for direct interaction with individuals must ensure that these individuals know they are interacting with AI if it is not apparent. For example, if you have placed a chatbot on your website for customer interaction, you must inform them that AI generates the responses before the conversation starts.
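As an illustration only, not a legal template, such a disclosure could be wired into a chatbot so that the notice always precedes the first reply. The class name, the wording of the notice, and the placeholder model call below are all our own assumptions:

```python
# Hypothetical sketch: prepend an AI-use disclosure to a support chatbot's
# first reply, so visitors know they are talking to AI before the chat starts.
# The class and the notice wording are illustrative assumptions, not legal text.

class SupportChatBot:
    DISCLOSURE = "Please note: responses in this chat are generated by an AI system."

    def __init__(self):
        self.disclosed = False  # tracks whether the notice was already shown

    def reply(self, user_message: str) -> str:
        answer = self._generate_answer(user_message)
        if not self.disclosed:
            self.disclosed = True
            return f"{self.DISCLOSURE}\n\n{answer}"
        return answer

    def _generate_answer(self, user_message: str) -> str:
        # Placeholder for the actual model call.
        return f"Thanks for your question about: {user_message}"


bot = SupportChatBot()
first = bot.reply("delivery times")
second = bot.reply("refunds")
print(first.startswith(SupportChatBot.DISCLOSURE))  # True: notice shown up front
print(SupportChatBot.DISCLOSURE in second)          # False: shown only once
```

The point of the sketch is that the disclosure is enforced by the code path itself rather than left to the operator's memory, which makes it easy to demonstrate compliance.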

The results of AI work, such as images, videos, or text, must be labeled as such so that users know their origin. For example, if a magazine's editorial team uses AI to create articles and images, it must indicate at publication that these materials were AI-generated.
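One simple way to keep such labels consistent is to attach them automatically at publication time. The sketch below is our own illustration; the field names (`generated_by_ai`, `label`) and the label wording are assumptions, not terms taken from the act:

```python
# Hypothetical sketch: label AI-generated materials at publication time.
# Field names ("generated_by_ai", "label") and the notice text are
# illustrative assumptions, not wording prescribed by the AI Act.

AI_LABEL = "This material was generated with the use of artificial intelligence."

def publish(article: dict) -> dict:
    """Return a copy of the article, adding a visible AI label if needed."""
    published = dict(article)
    if published.get("generated_by_ai"):
        published["label"] = AI_LABEL
    return published

piece = {"title": "Market overview", "body": "...", "generated_by_ai": True}
print(publish(piece)["label"])  # prints the AI label
```

Centralizing the label in one publication step, rather than relying on each editor to remember it, makes the labeling policy auditable.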

4. Generative AI

The act also addresses developers and users of generative AI models, such as ChatGPT or Gemini. One key requirement for them is the development and continuous updating of technical documentation, including information on training, testing, and evaluation results of the generative model. Moreover, developers must publish a detailed report on the content used to train the AI generative model.

This approach helps ensure high trust in the products and developers of generative models. It protects the intellectual property rights of individuals who provide materials for training AI models, promoting ethical and responsible AI use in the modern world.

5. Deepfakes

The development of deepfake technology makes it possible to create videos and photos that convincingly mimic reality, making it difficult to distinguish original content from artificially generated content. This poses serious threats, such as violations of personal privacy and other fundamental human rights.

According to the act's provisions, those who distribute AI programs that generate or alter photos, videos, or audio using deepfake technology must indicate that this content is AI-generated. Violating this norm can lead to significant fines. This approach ensures transparency and accountability in the use of deepfake technology, reducing the risk of misuse.

Liability

Under the act, each EU member state independently establishes rules on fines and other measures applied to individuals or organizations that violate the requirements of this document. Such measures may include warnings and non-monetary sanctions, determined based on the specific circumstances of the violation.

However, the act sets upper limits on the fines that can be imposed. Specifically, non-compliance with the prohibitions on certain AI practices may result in fines of up to €35,000,000 or 7% of the company's total worldwide annual turnover, whichever is higher. Meanwhile, non-compliance with the requirements for high-risk systems can lead to fines of up to €15,000,000 or 3% of worldwide annual turnover.

These significant fines aim to ensure a high level of responsibility and stimulate adherence to safety and ethical principles in the development and use of artificial intelligence.

When the norms will apply

On May 21 of this year, the AI Act was approved by the EU Council. After being signed by the presidents of the European Parliament and the EU Council, it will be published in the Official Journal of the EU and will enter into force 20 days after publication.

The act will apply 24 months after it comes into force, with some exceptions. Specifically, obligations for high-risk systems will apply 36 months after it comes into force. This will allow businesses ample time to thoroughly prepare for the new requirements and ensure compliance.

Preparing businesses for the implementation of the AI Act

Before the AI Act comes into force, it is important to determine how prepared your business is for the new regulatory requirements. The earlier you start preparing, the easier it will be to adapt to the new requirements.

Conduct a detailed analysis to identify which aspects of your business may need to adapt to the new rules. Carefully analyze which AI products are used in your business: these could be high-risk systems, generative AI, or systems that will be prohibited once the act comes into force.

Once you identify the AI systems in use, ensure they comply with the new safety and ethical standards. This may include updating operational algorithms to improve system efficiency and safety, increasing the transparency of AI operations for end users, and developing control mechanisms to identify and eliminate potential issues in AI operation in a timely manner. It is also crucial to ensure the ethical use of the system, avoid discrimination, and protect user privacy.

Regular security and compliance audits will help ensure the system meets the established standards, while personnel training will enable effective AI use and management, as well as a timely response to possible problems and risks.
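Such an inventory analysis can be organized as a simple self-assessment record per system. The sketch below is our illustration only: the tier names follow the article's grouping, but the mapping rules are deliberately simplified assumptions and are not legal advice:

```python
# Hypothetical sketch of an internal AI-system inventory for a compliance audit.
# The tiers follow the article's grouping (prohibited, high-risk, generative,
# minimal); the mapping rules are simplified assumptions, not legal advice.

def classify(system: dict) -> str:
    """Assign a review tier to one AI system from a self-assessment record."""
    if system.get("uses_subliminal_techniques") or system.get("workplace_emotion_recognition"):
        return "prohibited"  # practices the act bans outright
    if system.get("domain") in {"medicine", "education", "law enforcement", "judiciary"}:
        return "high-risk"   # critical areas named in the article
    if system.get("generative"):
        return "generative"  # documentation and training-data reporting duties
    return "minimal"

inventory = [
    {"name": "exam scoring tool", "domain": "education"},
    {"name": "marketing copy bot", "generative": True},
]
for s in inventory:
    print(s["name"], "->", classify(s))
```

In practice, the real classification must follow the act's own definitions and a lawyer's assessment; the value of a record like this is that it forces every system in the company to be reviewed and assigned to some tier.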

Additionally, for the transparent and safe use of AI systems, it is important to develop terms, policies, and instructions that clearly define the principles of system operation, its functions, capabilities, limitations, and other aspects.

For example, suppliers of high-risk AI systems must develop a quality management system, documented in the form of written policies, procedures, and instructions that are structured and easy to understand. These documents should describe the methods and procedures used for the development, quality control, and assurance of the AI system. Moreover, suppliers of high-risk AI systems must provide users with instructions that contain brief, complete, correct, and clear information that is relevant, accessible, and understandable. These instructions should include the developer's contact information and the characteristics, capabilities, and limitations of the AI system.

When creating AI systems, it is also important to consider not only compliance with requirements but also self-protection. Since the development and use of such systems involve many participants, various legal issues need to be resolved. For example, it is essential to determine the extent to which the developer and the user own the rights to the AI system or to the content it generates, and who is responsible for errors in the AI's operation or the incorrect interpretation of its results. These and other aspects can be regulated by written agreements, which help ensure stability in the relationships between all parties and avoid potential conflicts in the future.

Conclusion

The AI Act is an essential step in the development of artificial intelligence regulation. It establishes requirements for the safety, transparency, and ethical use of AI, including prohibitions on certain AI practices that violate human rights, requirements for high-risk systems used in critical areas, and rules for general-purpose AI. The new norms will apply to developers, users, importers, and distributors that put AI products into operation or place them on the EU market, regardless of their location.

Given the complexity and scope of the new requirements established by the AI Act, the adaptation process may be lengthy. Therefore, it is essential to begin adapting to the new rules now. This may include analyzing the readiness for new standards, reviewing and updating internal work processes, policies, instructions, staff training, and continuous monitoring of changes. Such an approach will allow businesses to maximize the benefits that the changes can bring.

Although the introduction of new regulations may present a challenge for businesses, it also opens up opportunities for the safe development of innovations and increased consumer trust in technology.

Published on AIN.UA
