Legal Challenges in Developing Artificial Intelligence–Based Products: What Businesses Need to Consider Today
- 1. Who owns the copyright in AI-generated content?
- 2. Use of protected data for model training
- 3. Personal data, confidentiality and AI security
- 4. Liability for AI actions
- 5. Algorithmic bias and discrimination
- 6. Ethical challenges and behavioural manipulation
- 7. Specific risks in the games industry
- 8. Quality and operational failures
- How REVERA can help
- Contact a lawyer for further information
Artificial Intelligence (AI, hereinafter “AI”) is rapidly transforming business models and opening up new opportunities. However, alongside technological advantages, companies face significant legal, regulatory and reputational risks.
From copyright in AI-generated content to data protection and liability for algorithmic decisions, each of these topics requires close attention, and the absence of a well-considered legal strategy may lead to litigation, fines and product blocking.
Below we consider the key challenges in developing AI-based products and practical recommendations for business.
1. Who owns the copyright in AI-generated content?
Today, no legislation recognises AI as the author of a work. This creates legal uncertainty: who owns the rights to the content created — the model developer, the user, or the company?
The absence of clear rules creates risks for business:
- inability to effectively protect AI content from copying;
- disputes with users and partners over rights to the results of generation;
- reduced investment attractiveness of the product.
In such conditions of uncertainty, where protecting content from copying is virtually impossible, competitors can freely use the created materials, and judicial protection proves difficult.
Practice:
To minimise risks, a company is advised to:
- document the process of creating AI content;
- develop and implement an internal AI usage policy;
- set out the allocation of rights in contracts, licences and Terms of Use.
2. Use of protected data for model training
Training AI models on copyright-protected materials without permission remains one of the most acute issues.
Legislation in different jurisdictions offers different approaches: in the EU, an opt-out mechanism applies for rightholders, but its application to commercial AI remains contentious. In the US, dozens of lawsuits against AI developers show that using content without a licence can lead to multi-million claims.
Recent cases in global practice confirm this:
The New York Times v. OpenAI and Microsoft — the publisher alleges that millions of its articles were used to train ChatGPT without permission, which infringes copyright and threatens the business model of quality journalism.
UMG Recordings v. Suno — major music labels accuse the AI service Suno of using protected sound recordings to train a generative model, which, in their view, constitutes a direct copyright infringement.
Conclusion for business:
Ignoring content rights can lead not only to lawsuits but also to reputational losses.
3. Personal data, confidentiality and AI security
AI systems actively use personal data for training and adapting solutions. However, this often occurs without sufficient legal basis, which violates the requirements of the GDPR, the CCPA and similar laws. Moreover, AI creates new security threats:
- Model inversion — the ability to extract the original data from a model;
- Prompt injection — manipulation of inputs to gain access to a company’s internal information.
- The consequences of such breaches can be catastrophic:
- fines;
- data leaks;
- reputational damage.
Recommendations:
To avoid this, it is necessary to:
- carry out a DPIA (Data Protection Impact Assessment);
- restrict AI access to sensitive information;
- update the privacy policy;
- implement technical safeguards.
In addition, global practice shows increased regulation of AI and data protection: the EU has already implemented the AI Act, and a number of countries are developing their own laws, including Belarus. For companies, this means the need to consider cross-border data transfer risks and to adapt internal policies to new requirements in advance.
4. Liability for AI actions
Who is responsible for an algorithm’s error? The law does not yet provide an unambiguous answer.
As a general rule, liability lies with the company providing the end service. For example, if your product is integrated with a neural network API, it is you — not the model provider — who will be liable to the client.
This creates serious risks: it is difficult to prove a causal link between an AI error and harm, and there are no uniform standards yet.
What business should do:
- set out the allocation of liability in contracts and user agreements;
- use liability limitation clauses (disclaimers);
- for systems that fall into the high-risk category, ensure compliance with the AI Act requirements.
5. Algorithmic bias and discrimination
AI learns from data and therefore may reproduce their bias. In HR, healthcare, fintech or education this is particularly critical (for example, it may lead to discrimination based on age, gender or race).
Court cases (for example, Harper v. Sirius) already confirm the reality of these risks. Breaches of anti-discrimination legislation may entail both reputational losses and real fines.
Therefore, it is necessary to test models for bias, document the results, adjust the algorithms, and implement mechanisms for explainability of decisions.
6. Ethical challenges and behavioural manipulation
AI is capable of influencing users’ preferences and shaping their decisions — especially through recommendation algorithms and content generation. This opens up scope for manipulation for commercial and political purposes. A lack of transparency may lead to allegations of unfair practices and breaches of the principles of informed choice.
It is recommended to:
- label AI content;
- inform users about interaction with AI;
- implement internal ethical standards.
7. Specific risks in the games industry
Video games are one of the areas where AI is currently used most actively: generating dialogue, NPC behaviour, visual content. But alongside this, game development faces unique risks: copyright infringement, processing of minors’ data, non-compliance with platform requirements (Steam, Epic, Xbox).
Ignoring these aspects may result in the product being blocked or the game being removed from the store.
Therefore, it is necessary to document data sources, indicate the use of AI in marketing materials, implement content moderation, and comply with platform policies.
8. Quality and operational failures
AI is not perfect: it can produce inaccurate or illogical outputs, especially in critical areas — for example, medicine, finance, compliance. Technical failures can paralyse business processes if fallback scenarios are not provided.
Algorithm errors may lead to financial losses and legal claims.
To minimise these threats, it is necessary to implement mechanisms to verify data accuracy, test models regularly, provide fallback scenarios in the event of system failure, and organise ongoing quality control of AI performance.
Thus, risks associated with the use of AI have already become part of companies’ day-to-day practice. Effective management requires a comprehensive approach.
Solutions:
- development of internal regulations;
- proper structuring of contractual relationships;
- compliance with applicable legal norms;
- implementation of ethical principles;
- regular model testing;
- data quality control;
- fallback mechanisms and human oversight.
This approach ensures not only the legal resilience of the business but also builds trust in the product among clients and partners.
What this means for business
AI legal risks are no longer “future regulation” but the reality of operational activity. Companies that build the legal and ethical architecture of AI products in advance gain a competitive advantage and market trust.
How REVERA can help
The REVERA team supports companies at all stages of developing and implementing AI products:
- legal audit of AI solutions;
- structuring rights to data and AI content;
- GDPR and AI Act compliance;
- drafting an internal AI usage policy and user terms;
- support in disputes and regulatory inspections.
Authors: Darya Gordey, Artem Khandriko.
Contact us to obtain advice on AI legal risks or to conduct a legal audit of your AI product.
Contact a lawyer for further information
Contact a lawyer