Starting August 2, 2025, a new set of rules under the European AI Act will come into force. This phase specifically focuses on transparency and responsible use of generative AI models, such as ChatGPT, image generators, and music generators. But what does this actually mean? And why is it important for your organization?
What will change?
From August 2 onward, new obligations will apply to providers of so-called general-purpose AI (GPAI) models. These include:
- Transparency about training data: providers must publish summaries of the datasets used to train their models.
- Copyright compliance: companies must demonstrate that their models comply with copyright requirements.
- Mandatory documentation: including risk assessments and technical specifications; a minimal sketch of such a record follows this list.
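To make this concrete, a documentation record for a GPAI model could be kept in a structured form like the hypothetical sketch below. The schema and field names are our own assumptions for illustration; the AI Act and the official template for training-data summaries define what must actually be included.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class GPAIModelRecord:
    """Illustrative documentation record for a general-purpose AI model.

    All field names are our own assumptions; map them onto the official
    templates and your legal team's requirements.
    """
    model_name: str
    provider: str
    release_date: date
    # Public summary of the training data (content, sources, licensing).
    training_data_summary: str
    # How copyright compliance is ensured, e.g. honouring opt-outs.
    copyright_policy: str
    # References to risk assessments and technical specifications.
    risk_assessments: list[str] = field(default_factory=list)
    technical_docs: list[str] = field(default_factory=list)

record = GPAIModelRecord(
    model_name="example-gpai-1",
    provider="Example B.V.",
    release_date=date(2025, 8, 2),
    training_data_summary="Public web text up to 2024; see published summary.",
    copyright_policy="Respects machine-readable opt-outs.",
    risk_assessments=["risk-assessment-2025-Q2.pdf"],
    technical_docs=["model-card.md", "eval-report.pdf"],
)
```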
For AI models with systemic risk (i.e., those with significant societal impact), even stricter requirements apply, such as incident reporting and ongoing risk evaluation.
In addition, limited-risk AI systems, such as chatbots, must clearly inform users that they are interacting with AI. That means no more ambiguity about whether content was generated by a human or a machine.
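For instance, a chatbot's disclosure duty could be met by attaching a fixed notice to every reply. The snippet below is a minimal sketch: the Act requires that users are informed, but the wording (`AI_DISCLOSURE`) and placement here are our own assumptions, to be adapted to your product's UI.

```python
AI_DISCLOSURE = "This response was generated by AI."

def wrap_chatbot_reply(reply: str) -> str:
    """Append a visible AI disclosure to every chatbot reply.

    A minimal sketch; the exact wording and mechanism are not
    prescribed by the AI Act.
    """
    return f"{reply}\n\n[{AI_DISCLOSURE}]"

print(wrap_chatbot_reply("Our office hours are 9:00-17:00 on weekdays."))
```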
Why this matters
The AI Act is the first comprehensive legislation in the world that regulates AI based on risk. For organizations, this means not only more responsibility but also an opportunity to stand out with transparent, fair, and trustworthy AI applications. It’s a step toward ethical technology and stronger trust among customers, users, and stakeholders.
Ignoring these obligations is not an option. Fines for non-compliance can reach €35 million or 7% of global annual turnover, whichever is higher.
What can you already do?
- Check whether the AI models you use fall under the new rules (see the inventory sketch after this list).
- Start early with preparing the required documentation.
- Involve legal, technical, and compliance teams in your preparation.
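As a starting point, such a check can be driven from a simple internal inventory of your AI systems. The sketch below is purely illustrative: the `AISystemEntry` fields and the `flag_for_review` triage rules are our own assumptions, not legal criteria, and no substitute for legal review.

```python
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    """One row of a hypothetical internal AI inventory."""
    name: str
    vendor_or_model: str  # e.g. an in-house model or a third-party API
    purpose: str          # what the system is used for
    generative: bool      # generative systems attract GPAI transparency duties
    user_facing: bool     # user-facing systems need an AI disclosure

def flag_for_review(entry: AISystemEntry) -> list[str]:
    """Very rough triage -- a starting point for legal review, not legal advice."""
    flags = []
    if entry.generative:
        flags.append("check GPAI transparency/documentation obligations")
    if entry.user_facing:
        flags.append("ensure users are informed they are interacting with AI")
    return flags

inventory = [
    AISystemEntry("support-bot", "third-party LLM API", "customer support chat",
                  generative=True, user_facing=True),
]
for entry in inventory:
    print(entry.name, "->", flag_for_review(entry))
```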
The message is clear: transparency is no longer optional; it is a core condition for responsible AI use. By acting now, you ensure your organization is not only compliant but also future-proof.
Need help assessing the impact on your organization? Let us know; we're happy to help.
Frequently Asked Questions about the AI Act
What exactly is the AI Act?
The AI Act is European legislation that regulates the use of artificial intelligence based on risk. The goal: safe, transparent, and trustworthy AI within the EU.
What happens on August 2, 2025?
From that date, new obligations will apply to developers and providers of generative AI models. Think of transparency about training data, risk assessments, and clear communication with users.
Does this also apply to AI systems that I ‘only use’ as a company?
Yes, in some cases. If you use an AI model that falls under GPAI obligations (e.g., via an API or platform), you as a user must also meet certain transparency and due diligence requirements.
What is meant by ‘limited-risk AI’?
These are AI applications that pose relatively low risks, such as chatbots or AI-generated texts. However, a disclosure obligation still applies: users must be informed that AI is involved, for example with a label like “This text was generated by AI”.
How do I know if my AI system falls under these rules?
That depends on the functionality, purpose, and impact of the system. If you work with generative AI or models used in multiple applications, chances are you fall under the AI Act.
What are the consequences if I don’t comply?
The European Commission can impose steep fines, up to €35 million or 7% of global annual turnover. More importantly, failing to comply can damage your reputation and customer trust.
What can I do now to prepare?
Start by mapping out your AI applications. Gather the necessary documentation. Ensure your legal and technical teams are informed about the upcoming obligations. Starting on time is crucial.