AI Act: What an AI developer needs to know
The AI Act is the first comprehensive EU law on artificial intelligence ("AI") and entered into force in August 2024. It regulates the use of AI under a risk-based approach: the higher the risk to people's rights and safety, the stricter the requirements. Non-compliance may result in fines of up to €35 million or 7% of a company's global annual turnover, whichever is higher.
The Act applies to anyone who develops, deploys, or uses AI systems in the EU, regardless of the company’s jurisdiction. It introduces four risk levels:
- Unacceptable risk — fully prohibited (social scoring, behavioural manipulation, real-time remote biometric identification in publicly accessible spaces).
- High risk — subject to strict requirements (healthcare, education, recruitment, critical infrastructure).
- Limited risk — transparency obligations (chatbots, generative AI).
- Minimal risk — no new rules (spam filters).
For high-risk systems, the following are mandatory: registration in an EU database, technical documentation, conformity audits, data quality assurance, human oversight, and explainability of decisions. Providers of generative AI must label AI-generated content, comply with EU copyright law, and publish a summary of the data used for training.
We have prepared a guidance note outlining the key provisions of the AI Act to help developers navigate the new requirements.
AI Act: What an AI developer needs to know (PDF, 1.18 MB)
Authors: Artem Handriko, Daria Gordey.
Contact a lawyer for further information