Artificial intelligence in China: new regulation

China is clearly striving to become a world leader in AI. Large Chinese companies such as Alibaba, Tencent, Baidu and SenseTime are actively adopting AI technologies. Until recently, however, many AI-related issues remained unaddressed by legislation.

Drafted under the guidance of the Cyberspace Administration of China (CAC) and following a preliminary draft published in May 2023, the Interim Measures on the Regulation of Generative AI came into effect on 15 August 2023 (1, 2).

It is the first comprehensive AI regulation in China, adopted both to promote development of the industry and to set rules for generative AI services.

Familiarisation with the Interim Measures will be particularly relevant for technology companies planning to launch AI products in China.

Scope of application    

The Interim Measures apply to the use of generative artificial intelligence technology in the provision of services in the PRC. Non-residents providing such services within the PRC are therefore also subject to the Interim Measures.

A natural question is what counts as generative artificial intelligence. According to Article 2 of the Interim Measures, it is technology capable of generating text, images, audio, video and other content.

If a non-resident provider fails to comply with the Interim Measures, the national cyberspace administration of the PRC will notify the relevant authorities to take appropriate technical and other measures, which in practice may result in the blocking of the service in the PRC.

New requirements 

The Interim Measures impose a wide range of obligations on generative AI service providers, including the following key requirements:

  1. Content moderation 

    AI service providers are responsible for AI-generated content (Article 9). Where illegal content is discovered, providers must take immediate action to stop generating it and remove it, and must notify the competent authority of such incidents (Article 14).
     
  2. Training data for AI

    The data used to train AI models must be obtained from legitimate sources and must not infringe the intellectual property rights of third parties.
     
    Providers are also generally required to take measures to improve the quality of training data and to ensure its veracity, accuracy, objectivity and diversity (Article 7).
     
  3. Labelling of AI-generated content

    Providers must label AI-generated content such as images and videos (Article 12). This mechanism directly addresses the problem of deepfakes.
     
  4. Establishment of a grievance mechanism

    Generative AI providers must also establish a clear and transparent mechanism for submitting complaints and reports, disclose the process for handling them, and set a deadline for providing responses (Article 15).
     
  5. Protection of users' personal data

    AI service providers are obliged to collect only the personal data that is necessary. They are also prohibited from unlawfully retaining users' input data and usage records that can identify individual users, and from unlawfully disclosing such information to third parties (Article 11).

Liability

The May 2023 draft of the Interim Measures provided for fines of up to CNY 100,000 (approximately EUR 13,000) for failure to comply with the competent authority's requirements to remedy violations of the Interim Measures.

However, specific fine amounts were excluded from the final version of the Interim Measures. The general rule on the application of administrative and criminal penalties for relevant offences was nevertheless retained (Article 21).

