Legal aspects of using artificial intelligence in gaming

Sergiy Barbashyn Attorney, managing partner of Barbashyn Law Firm
11 June, 2025 · 7 min read

The issue of intellectual property protection covers all elements of game development: from code and images to soundtrack and design. As the use of AI intensifies, the issue of forming regulatory frameworks and market regulation tools is also becoming more relevant.

In this interview, we talked to attorney Sergiy Barbashyn, CEO of the AI Ethics and Integrity International Association (AIEI), about current global practices of AI regulation, Ukrainian policy in this area, and how it directly affects the game industry.

The impact of professional activity on the game development industry

Well, to start with the personal: video games have always attracted me, and this is unlikely to change. From nightly trips to computer clubs and a “life in Lineage” to professional Dota play: a team, tournaments, and constant training. I even considered esports through the prism of a professional career. But the desire to stay close to the video game industry led me first to programming and then to IP law.

I am currently focused on two areas of activity. The first is legal: I am the managing partner of the international law firm Barbashyn Law Firm. We specialize in the tech sector, where we actively intersect with the gaming industry. Our niche expertise has been recognized: the firm is included in the top 100 leading law firms in Ukraine according to the Yuridicheskaya Praktika 2025 rating, as well as in the top 10 in IT and intellectual property.

The second area of my activity is the AI Ethics and Integrity International Association (AIEI), where I am the CEO. AIEI is dedicated to the responsible and ethical use of artificial intelligence. Our projects include international AI principles, an AI guide (articles, magazine, conference, and rating), and in the future, an AI risk assessment and training platform. I am interested in technological development, especially AI, and its impact on various spheres of life.

AI in gaming: practices and legal challenges for Ukrainian developers

Games Gathering is always cool, and I am always happy to be a part of their events. My mission there is to explain complex legal structures in simple language. I am pleased that the industry is taking a proactive approach and is already actively preparing for new requirements, in particular in the field of AI.

Legal issues in gaming are often complex. The focus should be on intellectual property (IP), corporate structure, data protection, interaction with marketplaces, and so on. Recently, artificial intelligence has become a separate area. It is no longer just a part of the technology but a full-fledged regulatory field that is actively developing across jurisdictions.

My main advice for developers: determine your target markets at the outset, then assess each market’s requirements for the use of AI, from strict regulation in the EU to flexible principles in the UK, Japan, and other countries.

A few examples can be cited. Ukraine follows the European model: there is a roadmap from the Ministry of Digital Transformation that provides for transparency, regulatory requirements, integration of EU rules, and the subsequent development of its own. It remains to be seen whether this will end up as strict regulation or flexible principles.

The EU has the AI Act, one of the world’s first comprehensive AI regulations that categorizes systems by risk levels. As a general rule, video games are among the lowest-risk systems, but it’s not that simple. For example, if a game uses generative AI to create faces that can be identified or perceived as real, or if NPCs demonstrate behavior that simulates emotionally sensitive or personalized interaction with the user, such a system may fall into the high-risk category under the AI Act.

In such cases, there are obligations to be transparent, properly labeled, maintain technical documentation, and ensure the possibility of external audit. The way out of the situation is high-quality documentation, disclaimers, and user agreements. I also hope that the regulator will soon provide clarifications on the scope and conflict issues. Because the rules have been issued, but it is not entirely clear how to work with them and what to do with gray areas. Prohibiting everything or eliminating features that could potentially improve the system to a higher level is not always the right way out.

If we talk about operations in the US or the UK, the approach there is currently more focused on flexible principles. This does not mean complete freedom of action, and there are certain industry regulations that should be taken into account. For example, when a user interacts with a chatbot or AI assistant in a gaming context, it is very important to ensure that such interaction is not interpreted by regulators as providing medical advice or having an impact on mental health, as this may fall under separate regulation.

Features of protection of various development elements

Code and artistic works, such as images, music, or animations, are protected by copyright in most countries without the need for registration, but the use of these works in AI datasets can be considered an infringement if the author has not given his or her consent. Artists have the right to demand compensation or a ban on the use of their works in such systems.

Generative AI systems, such as DALL-E or Midjourney, use large data sets for training, which often include works by artists. This raises the problem of possible copyright infringement, as works can be used without the permission of their authors. Proof of copyright infringement in AI generative systems is difficult, as the generated product (for example, an image) often does not have a clear resemblance to the original. This requires the creation of new legal instruments to regulate such cases.

To facilitate the protection of their rights, artists can register their works with national institutions – in Ukraine with UKRNOVI, in the United States with the U.S. Copyright Office (part of the Library of Congress). Other options are protection technologies such as watermarks, digital signatures, or blockchain (e.g., NFT), which help to track the originality of works.

Let’s look at when these rights, or a monopoly on the results, arise. The crucial question is whether the result was created by a human or by AI. In most countries, for the result to be protected by copyright, a person must play a significant role in the creation process.

In other words, if the code is entirely created by a human, protection is granted; if human participation is significant, copyright applies; if the code is entirely created by AI, copyright will not apply in most countries. In such cases, special rights may apply, such as sui generis rights in Ukraine. Alternatively, you can operate under the laws of a country where AI-generated objects can be protected by copyright, such as the UK.
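The authorship logic described above can be sketched as a small decision function. This is a simplification for illustration only, not legal advice: the category names and the notion of “significant” human participation are assumptions, and real outcomes depend on the jurisdiction and the facts.

```python
def copyright_status(human_contribution: str) -> str:
    """Rough sketch of the authorship logic described in the text.

    human_contribution: "full", "significant", or "none" (illustrative
    categories, not legal terms). Returns the likely protection regime
    in most jurisdictions.
    """
    if human_contribution in ("full", "significant"):
        # entirely human-made or with a substantial human role: protected
        return "copyright"
    if human_contribution == "none":
        # fully AI-generated: no copyright in most countries; special
        # regimes may apply (sui generis in Ukraine, computer-generated
        # works under UK law)
        return "no copyright (check sui generis / UK-style regimes)"
    raise ValueError("expected 'full', 'significant' or 'none'")
```

In practice this “function” is a contract review and a jurisdiction check, but the branching mirrors the three situations described above.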

Another important question is who owns the result and can prohibit its use. If the code is written by a human, the rights belong to the author or the client company or employer. As a rule, the contract contains a separate IP section that provides for the full transfer of rights and how it happens (automatically, after payment or signing an act).

In the case of full AI generation, you should carefully read the terms of use of the AI services. It is quite common for rights to be transferred to the user under a paid subscription. If it is an open-source solution, read the terms of its license: it is often not unrestricted free use, and there may be limits on commercial use of the results, required disclaimers, and so on. AI in studio pipelines can also create logos, branding, or interfaces that are protected as trademarks or industrial designs.

Trademarks, unlike copyrights, are subject to territorial protection, which requires their registration in each country, for example, a game logo created by AI should be registered in the countries where it is planned to distribute it. Before that, it is necessary to check whether the rights to the result of such development have been fully obtained and there are no restrictions.

To summarize, intellectual property protection in the AI era requires a comprehensive approach that combines legal, technological, and ethical tools. This allows studios not only to protect their assets but also to effectively use AI in pipelines while maintaining the trust of the creative community.

How to determine who owns IP rights?

A game, as an object of intellectual property, is mostly owned by the developer or publisher, depending on the terms of the contract. A player who only plays the game does not create separate IP content, but uses the license granted through the EULA (end user license agreement for software). However, if a player creates mods, maps, video reviews, or other content, they may have rights to that content, but those rights are limited by agreements with the developer or platform.

Let’s take a look at some of the agreements. For example, clause 6 (in particular, section 6.A) of the Steam Subscriber Agreement states that user-generated content (Workshop Contributions) is considered part of the “Subscriptions”.

Users who create such content grant Valve (the American company that develops and owns the Steam platform) and, if necessary, the game developers a license to use, modify and distribute this content. This includes mods, maps, or other content created for games that support the Steam Workshop. Valve has the right to modify content only to ensure platform compatibility or to improve gameplay if the content is accepted for use in the game.

In accordance with Section 6.2 of the Playstation™ Network Terms of Service and User Agreement, when a user creates and publishes content such as mods, maps, video reviews, or other materials (User Generated Content, or UGC), the user grants Sony Interactive Entertainment (SIE) a royalty-free, perpetual, worldwide license to use, distribute, copy, modify, display, and publish their content for any purpose, without further notice or compensation to themselves or to third parties.

Generative AI in SDKs: classification, approaches, and risks

The use of general-purpose models in games, in particular through SDKs in popular engines, creates not only technical possibilities but also specific legal obligations.

GPAIs are universal AI models capable of generating content, modeling behavior, and adapting logic. Their integration into gaming products is increasingly taking place through ready-made SDKs, for example, in Unity or Unreal.

The AI Act distinguishes GPAI as a separate object of regulation. If a model is trained on large computing capacities of more than 10²⁵ FLOPs or has a systemic impact on users, it falls under the enhanced control regime. This includes mandatory documentation, risk assessment, transparency, and cybersecurity.
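The compute threshold mentioned above can be expressed as a simple check. The constant mirrors the 10²⁵ FLOPs figure from the text; note that systemic impact on users can trigger the enhanced regime independently of compute, so this sketch covers only the compute-based presumption, and the example figures are hypothetical.

```python
# 10**25 floating-point operations: the training-compute threshold the
# AI Act uses to presume a GPAI model poses systemic risk (see text above).
GPAI_SYSTEMIC_RISK_FLOPS = 10**25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if training compute alone triggers the enhanced-control
    presumption; systemic impact on users is a separate trigger."""
    return training_flops > GPAI_SYSTEMIC_RISK_FLOPS

# Illustrative, hypothetical figures:
print(presumed_systemic_risk(5e24))  # below the threshold
print(presumed_systemic_risk(3e25))  # above the threshold
```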

Therefore, even if the model is integrated through the SDK, the developer is still responsible for its operation within the game. If AI affects the plot, NPC behavior, or adaptive complexity, the studio must understand how the model works, its limitations, and how to control the results of its use.

The future of AI in games: regulation, forecasts, and insights

Such forecasts only confirm that the need for clear and viable rules for AI is growing every day. But the key question is not how many regulations we will write, but what approaches we will take.

I advocate a position based on Global AI Culture Beyond Regulation. It means that before formally regulating something, one should pass at least a basic test: whether it needs to be regulated, whether it is necessary to regulate it now, whether it is necessary to regulate it in this area. In my opinion, there should be an internal understanding and culture of the market and its consumption. Just like in society, ethical norms exist and work even when they are not explicitly stated.

At the same time, there are critical areas that can have a significant impact on human rights, security, and critical infrastructure. In such areas, I support the idea of partial regulation and legislative guides. For example, security algorithms, personnel qualifications, internal and external interaction processes, etc. are described.

In the case of AI, this means certain frameworks that allow us to move forward without compromising the development of the field, but with stricter requirements for critical areas of our activity. It is also important that these soft requirements do not turn into overregulation of each process.

“Regulation should not harm development. That’s why the idea of self-regulation is close to me.”

In Ukraine, this is realized, for example, in the format of discussing and developing AI principles, as well as sandboxes. This is a practical tool that allows teams to test their solutions in a controlled environment, see potential risks, and receive feedback before the emergence of binding rules. I am involved in this process and consider it effective.

In a situation where regulation does not always keep pace with the pace of AI development, it is important to have guidelines that allow us to act responsibly now. General approaches and declarative principles are one such approach. I can cite the AIEI Declarative Principles as an example. These principles were developed with the participation of representatives of eight countries and are based on more than 12 authoritative sources of AI regulation and recommendations (UNESCO, OECD, EU AI Act, US Blueprint for an AI Bill of Rights, AI Action Summit Declaration 2025, etc.).

The above and similar principles are not a regulatory act, but this is why they remain flexible, adaptive, and available for wide application. Many companies already use them as a framework for internal policies, user agreements, content moderation, and risk assessment processes. This allows developers to form their own responsible position regardless of formal requirements that may still change or not cover the specifics of a particular industry.

About the AI Act and four-level risk classification

The AI Act introduces the classification of AI systems by risk level. This is a key tool for anyone creating or integrating AI solutions into products, including the gaming industry.

Unacceptable risk
Characteristics: systems that violate fundamental rights or pose a serious public risk (e.g., social scoring, manipulative interfaces, biometric categorization).
Requirements: February 2, 2025 – complete ban on use. This includes AI systems that manipulate, socially score, or biometrically group individuals without legal grounds.

High risk
Characteristics: systems that have a significant impact on people’s lives, in particular in education, employment, healthcare, justice, and law enforcement, or that are safety components of products covered by other EU legislation and may significantly affect security or human rights (in particular due to scale, automation of decisions, or user vulnerability).
Requirements: August 2, 2026 – the requirements for high-risk systems under Article 6(2) and Annex III come into force; documentation, monitoring, and risk assessment are required. August 2, 2027 – application of Article 6(1) to systems that are safety components (e.g., medical devices or transport).

Limited risk
Characteristics: systems that interact directly with the user (e.g., chatbots, text or image generators) and can give the impression that the user is dealing with a human.
Requirements: August 2, 2025 – the obligation to clearly inform users that they are dealing with AI comes into force; basic transparency requirements, notification of state authorities, and fines also apply.

Minimal risk
Characteristics: simple tools or auxiliary functions without a significant impact on the rights or safety of individuals.
Requirements: the Regulation sets no mandatory requirements, but compliance with voluntary ethical standards or guidelines is expected.

For a game developer, this can mean a different level of risk depending on the specific scenario of using AI. If the model is used to improve graphics or optimization, then it is usually a minimal risk. However, if AI generates content, interacts with the user, or influences the behavior of NPCs in the game environment, then, as we have already discussed, such cases can be considered limited or high risk. It is worth paying attention to the potential impact on the user and checking the compliance of each case separately in terms of AI compliance.
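The mapping from game scenarios to risk levels can be sketched as a toy classifier. The feature flags below are assumptions invented for illustration; an actual AI Act assessment is a case-by-case legal analysis, not a lookup.

```python
def ai_act_risk_level(feature: dict) -> str:
    """Toy classifier mirroring the four-level scheme described above.

    The boolean flags are illustrative assumptions, not AI Act terms;
    the first matching (most severe) level wins.
    """
    if feature.get("social_scoring") or feature.get("manipulative_interface"):
        return "unacceptable"  # banned outright since February 2, 2025
    if feature.get("identifiable_faces") or feature.get("emotionally_sensitive_npc"):
        return "high"          # documentation, monitoring, risk assessment
    if feature.get("user_facing_generation"):
        return "limited"       # must disclose that the user deals with AI
    return "minimal"           # e.g., graphics upscaling or optimization

# A chatbot-style generator that talks directly to players:
print(ai_act_risk_level({"user_facing_generation": True}))  # limited
```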

The AI Act came into force in 2024 and is being implemented in stages. Restrictions on prohibited systems have been in effect since February 2025, and requirements for General Purpose AI systems come into force in August 2025. In 2026-2027, the main obligations for high-risk systems will gradually take effect; developers will be primarily responsible for them, although a number of obligations fall on other market participants.

It is not only about technical compliance, but also about the ability to explain what your model does, where it interacts with the player, and what impact it can have. That is why it is worth not waiting for official requirements but checking risks before the release, removing functionality that is difficult to justify, and preparing supporting documentation (separately or as part of internal policies, agreements, or game descriptions).

Although the AI Act is already in force, there is no practice of its application and no practical guidelines yet. For example, if a game uses AI functionality to emotionally test players and influence their behavior, is it considered high risk, and when? On the one hand, it is a game whose ultimate goal is to immerse players in the game process; on the other hand, such functionality may trigger separate requirements at the level of a fitness tracker or even mental health screening.

To better deal with these and similar issues, the EU is introducing an AI desk, a platform to help businesses. A code of conduct on AI is also being developed. I am included in this project of the European Commission as one of the experts, but there are no final documents or recommendations yet. I hope that it will be implemented by the end of the year.

The Copyright Law and the Roadmap of the Ministry of Digital Transformation: Impact on AI Regulation

Ukraine has a Roadmap for AI regulation. This is not a regulatory act, but it performs an important function. Businesses get a clear idea of what rules may appear, what principles will be basic and what they should prepare for.

The AI team of the Ministry of Digital Transformation is focused on practice. It is not about general approaches, but about specific initiatives. Ukraine is involved in global processes, and this allows us to build regulation not in isolation, but taking into account what is already being formed at the level of leading countries.

One example is the launch of regulatory sandboxes. This is a tool that allows companies, especially startups, to test AI solutions in an environment where there is feedback but no risk of immediate sanctions for non-compliance. Within the sandbox, you can see weaknesses, adjust the product, and set the right framework even before the emergence of mandatory regulations. The format works and I am happy to help all startups that participate in it.

As for the copyright law, it does not cover most cases related to generative AI. But this is not only a matter of Ukrainian legislation. The basic construction of copyright does not correspond to what new models create. It is now important for developers to realize these gaps and close them through contractual frameworks, IP strategies, licensing, and internal policies. Obviously, there will be updates, but now it is worth working within the limits of what is actually controlled.

Secure game release: risk assessment and key specialists

The release of a game is a complex task. Many decisions here have legal implications that affect the product even before it appears on the market. If I were to formulate a basic list of areas to pay attention to, I would single out several main blocks.

Block 1: intellectual property
This is the basis of the product. The team must clearly understand who owns the rights to the code, graphics, sound, story, or design. It is necessary to review all agreements with development participants, check licenses, secure rights in contracts, fix authorship, and take care of brand registration and protection in the relevant jurisdictions.

Block 2: personal data protection
If a project works with users from the EU or the US, you need to comply with the requirements of GDPR, COPPA, or other local regulations. The formal existence of a privacy policy is only part of the task. It is important that the product truly meets the requirements for data processing.

Block 3: AI compliance
If AI is used in a game, it is necessary to assess the level of risk created by a particular solution. It all depends on how AI works, what it affects, and whether it interacts with users. Each case should be analyzed separately.

Block 4: corporate structure
It should be clearly defined who owns the company, where the revenues are directed, on what terms investors are attracted, and who is responsible for the final decision. This affects both the tax burden and contractual security.

Block 5 (optional): work with marketplaces
If a game is released on Google Play or the App Store, you need to take into account the platforms’ requirements regarding placement, content, access to data, and monetization.
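The five blocks above can be kept as a living pre-release checklist. A minimal sketch follows; the block names come from the text, while the items inside each block are illustrative assumptions, not an exhaustive legal list.

```python
# Block names follow the five areas in the text; items are illustrative.
RELEASE_CHECKLIST = {
    "intellectual property": [
        "rights to code, art, sound, and story secured by contract",
        "brand registered in target jurisdictions",
    ],
    "personal data protection": [
        "GDPR / COPPA requirements mapped to actual data processing",
    ],
    "AI compliance": [
        "each AI feature assessed for its AI Act risk level",
    ],
    "corporate structure": [
        "ownership, revenue flows, and investor terms documented",
    ],
    "marketplaces (optional)": [
        "Google Play / App Store content and data rules reviewed",
    ],
}

def unreviewed_blocks(signed_off: set) -> list:
    """Blocks that still lack a specialist sign-off before release."""
    return [block for block in RELEASE_CHECKLIST if block not in signed_off]

print(unreviewed_blocks({"intellectual property", "AI compliance"}))
```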

Thus, the formula for a safe release is not a universal instruction but a clear elaboration of each area taking into account a specific game model. All key risk areas should be covered by relevant specialists. These are lawyers with experience in IP, data protection, AI regulation, contract law, as well as those who understand the specifics of game development and distribution platforms. Without this, it is difficult to talk about high-quality preparation of the product for release.

Published by gamedev DOU
