How can a developer minimise risks when creating software that generates content on demand for users?

Artificial intelligence (AI) applications capable of generating content on their own have recently become popular among users. AI that automatically creates new images from submitted images is in particularly high demand.

Many such applications and online services already generate income for their developers. However, without adequate legal and technical support, a developer may suffer reputational and financial losses, whether because of errors the AI makes in performing its function or because of an inability to control user requests.

One of the main problems with such applications, which EU and other lawmakers have been fighting for several years, is the generation and subsequent distribution of AI-created images, audio, or video that use a person's likeness without that person's consent (deepfakes). Such content is often of a prohibited nature: for example, pornographic images, or images promoting violence or discrimination on various grounds (race, ethnicity, nationality, religion, disability, etc.).

What are the risks that a developer of image-generating software needs to consider?

Possible violations of the rules of trading platforms

App Store

The App Store's general requirements, stipulated in its user agreement, are that an application must not contain pornographic content or materials (text, graphics, images, photos, sounds, etc.); materials that are defamatory or offensive, promote violence, or violate moral principles; materials that advertise or promote illegal substances or services that are outside the law (child or sexual exploitation); or materials that call for violation of the law.

Google Play

It is forbidden to publish applications that contain or promote material:

  • of a sexual nature (e.g. depicting sexual scenes or suggestive poses);
  • of a discriminatory nature (images advocating violence or inciting hatred towards persons or social groups on the grounds of race, ethnicity, nationality, religion, gender, age, disability, veteran status, sexual orientation, gender identity, or other characteristics associated with systematic discrimination or marginalisation);
  • of a violent nature (e.g. realistic depictions or detailed descriptions of violent acts against a person or animal);
  • of a terrorist nature (images advocating terrorist activities, calling for violence and glorifying terrorist acts);
  • relating to tragic events (e.g. images denying a known tragic event);
  • related to bullying and threats (e.g. images showing bullying of victims of international or religious conflicts, publicly humiliating someone, etc.).

Google Play also imposes a number of requirements on apps with user-generated content, i.e. where that content is made available to other users (for example, where the app or service has a shared library to which images are uploaded). In practice this includes giving users a way to report objectionable content and acting on those reports.
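
For illustration, the following is a minimal sketch in Python (using Flask) of the kind of in-app reporting flow such a requirement implies: users can flag a shared image, and the image is quarantined from the shared library once enough reports arrive, pending human review. All names here (the /reports endpoint, SharedImage, the three-report threshold) are hypothetical examples, not any platform's actual API.

```python
# Minimal sketch of a user-content reporting flow; all names hypothetical.
from dataclasses import dataclass, field
from flask import Flask, jsonify, request

app = Flask(__name__)

REPORT_THRESHOLD = 3  # hypothetical: hide an image after 3 reports

@dataclass
class SharedImage:
    image_id: str
    visible: bool = True
    reports: list = field(default_factory=list)

# In-memory store for illustration; a real app would use a database.
IMAGES = {"img-1": SharedImage("img-1")}

@app.post("/reports")
def report_image():
    payload = request.get_json(force=True)
    image = IMAGES.get(payload.get("image_id"))
    if image is None:
        return jsonify({"error": "unknown image"}), 404
    image.reports.append({
        "reporter": payload.get("user_id"),
        "reason": payload.get("reason"),  # e.g. "sexual", "violence", "hate"
    })
    # Quarantine the image from the shared library once enough
    # reports arrive, pending human review.
    if len(image.reports) >= REPORT_THRESHOLD:
        image.visible = False
    return jsonify({"visible": image.visible, "reports": len(image.reports)})
```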

Claims by persons whose rights may be infringed by the software itself or by its users

In practice, AI applications may face the following categories of claims:

  • a user uploads a photo of a celebrity or another third party without their consent; in this case the claim comes from that third party;
  • the application performs actions with the image that users find offensive (e.g. it enlarges, shrinks, or adds body parts without a corresponding request).

As a rule, in such situations lawsuits are filed against those directly involved in distributing the deepfakes, i.e. against the users of such applications rather than the developers. At the same time, the developers of the AI with which the materials were created may suffer reputational losses.

Possible violations of individual country legislation

EU bodies are currently making efforts to protect the public from the spread of deepfakes. In particular, the EU is improving its legislation on online safety (the Artificial Intelligence Act, the Digital Services Act).

At the moment, all EU proposals in this area are aimed at combating those who directly distribute deepfakes, but in the future new rules may appear for platforms and applications (additional obligations, with liability for failure to fulfil them).

What measures can be taken to minimise the risks?

Legal measures
  • Strengthening the user agreement, e.g. with additional guarantees, tighter user liability, and limits on developer liability;
  • Setting an age limit in the application, either on your own or with the help of specialised rating bodies, e.g. the Entertainment Software Rating Board (ESRB) for the Americas or Pan European Game Information (PEGI) for Europe and the Middle East (a minimal age-gate sketch follows this list).
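
The following is a minimal sketch of the age-gate check itself. The 18+ threshold is an assumption; in a real application the value should follow the rating actually assigned by a body such as ESRB or PEGI, and the date of birth would come from account sign-up.

```python
# Minimal age-gate sketch; the threshold below is a hypothetical value.
from datetime import date

MINIMUM_AGE = 18  # assumption: an 18+ rating (e.g. PEGI 18)

def age_on(birth_date: date, today: date) -> int:
    """Full years elapsed between birth_date and today."""
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)

def may_use_app(birth_date: date) -> bool:
    """Gate access before any image-generation feature is exposed."""
    return age_on(birth_date, date.today()) >= MINIMUM_AGE

# Example: an adult user passes the gate; a 14-year-old would not.
assert may_use_app(date(1990, 1, 1)) is True
```
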
Technical measures

Given that the EU has recently been tightening its regulation of AI applications and placing more and more duties on developers to prevent the distribution of prohibited content, there is a case for taking steps already now: detailed notices, consents, and warnings; screening of user requests; and the ability for users to complain about generated content.
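
The sketch below illustrates these safeguards in Python: an explicit consent and rights confirmation before generation, screening of the request against the content policy, and an audit log that a complaints process can draw on. Every name in it (check_request, BLOCKED_TERMS, GenerationRefused) is hypothetical, and the keyword match merely stands in for a real moderation classifier.

```python
# Minimal sketch of pre-generation safeguards; all names hypothetical.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("generation-safeguards")

BLOCKED_TERMS = {"nude", "gore"}  # placeholder for a real moderation model

class GenerationRefused(Exception):
    pass

def check_request(user_id: str, prompt: str, consent_given: bool) -> None:
    # 1. Notice/consent: the user must confirm they have the right to
    #    use the uploaded likeness and accept the content policy.
    if not consent_given:
        raise GenerationRefused("consent and rights confirmation required")
    # 2. Screening: reject requests for prohibited content. A real app
    #    would call a moderation classifier rather than match keywords.
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        raise GenerationRefused("prompt violates the content policy")
    # 3. Audit trail: retain enough context to investigate complaints.
    log.info("accepted request user=%s at=%s prompt=%r",
             user_id, datetime.now(timezone.utc).isoformat(), prompt)

check_request("user-42", "make my portrait look like a painting", consent_given=True)
```

In a production setting the screening step would typically run on both the text prompt and the uploaded image, so that prohibited deepfake requests are refused before any content is generated rather than moderated after distribution.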

