The European Parliament has passed the first AI Act: what are the requirements for businesses?
In a world where artificial intelligence is rapidly becoming a part of our daily lives and a key element in the development of business and technology, regulating its use has become a critical task for lawyers and legislators. In May of this year, the EU Council approved the world’s first legislative act on artificial intelligence—the Artificial Intelligence Regulation, also known as the AI Act.
In an expert article for AIN.UA, Sergiy Barbashyn, managing partner of Barbashyn Law Firm, and lawyer Anastasia Vladiyeva discuss the regulation’s main provisions and offer practical recommendations on how businesses can prepare for the new legislation.
The regulation aims to establish standards for developing, implementing, and using AI systems to ensure ethical, fair, and safe use of this technology.
Where and to whom the new law will apply
The adoption of the AI Act in the European Union has far-reaching implications that extend beyond the territorial boundaries of the EU. One of its key features is its extraterritorial effect. It will apply to all individuals or companies that put AI products into operation or place them on the EU market, regardless of where these companies are located. For example, if a company in the US develops an AI chatbot, but some of its users are in the EU, the company must comply with the regulation. This is why the act is sometimes referred to as GDPR 2.0.
It is also important to note that the AI Act includes provisions that apply not only to AI system developers but also to importers and distributors who provide access to such systems on the EU market.
Additionally, the act contains provisions aimed at those who use AI in their professional activities. For example, suppliers and users of AI systems must ensure that their personnel, and anyone else working with these systems, have a sufficient level of knowledge about AI.
What businesses should pay attention to
Now that we’ve established who the act will apply to, let’s look at the main innovations it will bring.
The AI Act defines new standards and requirements for AI systems depending on their risk level. Let’s examine its individual provisions and the specific changes they introduce.
1. Prohibited Systems
The act includes a list of AI practices that will be prohibited because they pose an unacceptable threat to fundamental human rights. For instance, it prohibits AI systems that use manipulative or deceptive techniques to materially distort a person’s ability to make informed decisions. Suppose a company releases a mobile application that uses artificial intelligence to personalize advertisements and employs subliminal techniques that push users to spend excessive time in the app or make unnecessary purchases. Such practices significantly impair users’ ability to make informed decisions and can cause them financial and emotional harm.
Emotion recognition AI systems are also prohibited in the workplace and in educational institutions, except where they are used for medical or safety purposes. Suppose a company uses an AI system that automatically analyzes facial expressions, voice tone, or even movements to determine employees’ psychological state. While such a system may help the company respond promptly and prevent adverse effects on health and productivity, it also intrudes on employees’ personal lives and violates their right to privacy.
2. High-Risk Systems
The act also outlines requirements for high-risk AI systems. These include AI systems used in critical areas: medicine, education, law enforcement, judiciary, and political activities.
The AI Act sets specific requirements for suppliers of these systems. For example, technical documentation must be prepared before a high-risk AI system is placed on the market or put into operation, and it must be kept up to date as changes are made to the system throughout its lifecycle.
The act also imposes obligations on those who use AI systems professionally. For example, before deploying a high-risk AI system in the workplace, employers must inform their employees.
3. Transparency in Usage
The act also includes specific transparency requirements, which are crucial for avoiding the negative consequences of AI use.
High-risk AI systems must be designed and developed so that their operation is understandable to users. This way, users can adequately understand the data they receive from the system and use it appropriately.
Furthermore, suppliers of AI systems intended for direct interaction with individuals must ensure that these individuals know they are interacting with AI if it is not apparent. For example, if you have placed a chatbot on your website for customer interaction, you must inform them that AI generates the responses before the conversation starts.
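As a minimal sketch of what such a disclosure might look like in practice (the interface, function name, and message text below are illustrative assumptions, not a format prescribed by the act):

```typescript
// Hypothetical sketch: send an explicit AI disclosure as the very first
// message, so the visitor knows they are talking to a machine before
// the dialogue begins.
interface ChatMessage {
  role: "assistant" | "user";
  text: string;
}

function startChatSession(): ChatMessage[] {
  return [
    {
      role: "assistant",
      text: "Hello! I am an AI assistant; my replies are generated automatically.",
    },
  ];
}

const session = startChatSession();
console.log(session[0].text);
```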
The outputs of AI systems, such as images, videos, or text, must be labeled as AI-generated so that users know their origin. For example, if a magazine’s editorial team uses AI to create articles and images, it must indicate at publication that the materials were generated by AI.
4. Generative AI
The act also addresses developers and users of generative AI models, such as ChatGPT or Gemini. One key requirement is to prepare and continuously update technical documentation, including information on the training, testing, and evaluation of the model. Developers must also publish a sufficiently detailed summary of the content used to train the generative model.
This approach helps ensure high trust in the products and developers of generative models. It protects the intellectual property rights of individuals who provide materials for training AI models, promoting ethical and responsible AI use in the modern world.
5. Deepfakes
Deepfake technology makes it possible to create videos and photos that convincingly mimic reality, making it difficult to distinguish original content from artificially generated content. This poses serious threats, such as violations of personal privacy and other fundamental human rights.
Under the act’s provisions, those who use AI systems to generate or alter photos, videos, or audio (deepfakes) must disclose that the content has been artificially generated or manipulated. Violating this rule can lead to significant fines. This approach ensures transparency and accountability in the use of deepfake technology and reduces the risk of misuse.
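A minimal sketch of how a publisher might attach such a label, assuming a hypothetical labeling scheme of its own design (the act requires disclosure but does not prescribe a specific data format):

```typescript
// Hypothetical sketch: wrap generated media with a provenance label
// so downstream viewers can tell the content is AI-generated.
interface GeneratedMedia {
  url: string;
  aiGenerated: true;    // machine-readable marker of artificial origin
  generator: string;    // which model produced or altered the content
  visibleLabel: string; // caption displayed next to the content
}

function labelAsAiGenerated(url: string, generator: string): GeneratedMedia {
  return {
    url,
    aiGenerated: true,
    generator,
    visibleLabel: "This content was generated or altered by artificial intelligence.",
  };
}

const labeled = labelAsAiGenerated("https://example.com/video.mp4", "video-model-x");
console.log(`${labeled.url}: ${labeled.visibleLabel}`);
```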
Liability
Under the act, each EU member state independently establishes the rules on fines and other measures applied to individuals or organizations that violate the regulation. Such measures may include warnings and non-monetary sanctions, determined according to the specific circumstances of the violation.
However, the act sets ceilings for these fines. Non-compliance with the prohibitions on certain AI practices may result in fines of up to €35,000,000 or 7% of a company’s total worldwide annual turnover, whichever is higher, while non-compliance with the requirements for high-risk systems can lead to fines of up to €15,000,000 or 3% of turnover.
These significant fines aim to ensure a high level of responsibility and stimulate adherence to safety and ethical principles in the development and use of artificial intelligence.
When the norms will apply
The regulation enters into force twenty days after its publication in the Official Journal of the EU and will apply in stages: the prohibitions on certain AI practices take effect six months after entry into force, the rules for general-purpose AI models after twelve months, and most of the remaining provisions, including the requirements for high-risk systems, after 24 to 36 months.
Conclusion
The AI Act is an essential step in the development of artificial intelligence regulation. It establishes requirements for the safety, transparency, and ethical use of AI, including prohibitions on certain AI practices that violate human rights, requirements for high-risk systems used in critical areas, and obligations for general-purpose AI models. The new norms will apply to developers, users, importers, and distributors who put AI products into operation or place them on the EU market, regardless of their location.
Given the complexity and scope of the new requirements established by the AI Act, the adaptation process may be lengthy, so it is worth beginning now. Preparation may include assessing readiness for the new standards, reviewing and updating internal processes, policies, and instructions, training staff, and continuously monitoring regulatory changes. Such an approach will allow businesses to get the most out of the coming changes.
Although the introduction of new regulations may present a challenge for businesses, it also opens up opportunities for the safe development of innovations and increased consumer trust in technology.
Published on AIN.UA