
In a significant step toward enforcing the European Union’s sweeping Artificial Intelligence Act, the European Commission on Friday published a set of recommendations aimed at developers of general-purpose AI models. The guidance is designed to help companies understand and comply with the new law’s obligations, particularly those that apply to models identified as carrying systemic risks.
The move is widely seen as a response to growing concern from technology firms about the potential regulatory burdens of the AI Act and the risk of substantial financial penalties for noncompliance.
The AI Act, adopted in 2024, introduces a tiered system of responsibilities based on risk level; its obligations for general-purpose (foundation) models take effect on August 2, 2025. Fines for violations range from €7.5 million ($8.7 million) or 1.5% of annual turnover up to €35 million or 7% of global revenue, depending on the severity of the offense and the size of the company. The newly issued guidelines aim to offer greater legal clarity to companies working with general-purpose AI systems, often referred to as foundation models, such as those developed by OpenAI, Google, Meta, Anthropic, and Mistral.
According to the Commission, AI models classified as posing systemic risk are those trained with extremely large amounts of computational power – under the Act, a model is presumed to pose systemic risk when its cumulative training compute exceeds 10^25 floating-point operations – and capable of significantly affecting public health, safety, fundamental rights, or democratic institutions. These models will be subject to heightened scrutiny and operational requirements, including rigorous model evaluations, risk-mitigation strategies, adversarial testing, and mandatory reporting of serious incidents. In addition, companies must maintain robust cybersecurity protections to guard against theft, misuse, or manipulation of their AI systems.
Open-Source Models
The guidance lays out transparency requirements for foundation-model providers. These include the creation of comprehensive technical documentation, the enforcement of copyright protections, and detailed disclosure of the training-data sources used in model development. The Commission also clarifies which obligations apply to open-source models released under free licenses, provided they meet certain transparency standards.
The recommendations are closely aligned with the EU’s General-Purpose AI Code of Practice, reinforcing the bloc’s intention to support innovation while safeguarding against abuse. The Code sets non-binding best practices that help operationalize the AI Act in a manner consistent with European values and ethical principles.
Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security, and Democracy, emphasized the Commission’s intent to foster both innovation and responsibility. “We are supporting AI actors, from start-ups to large developers, to innovate with confidence while making sure their models are safe, transparent, and consistent with European values,” she said.
The published guidelines are intended to serve as a reference for all stakeholders in the AI value chain, defining key legal terms such as ‘provider’ and ‘placing on the market’ and outlining the responsibilities of those developing the most powerful AI models. By offering this clarification ahead of the enforcement deadline, the Commission hopes to facilitate smoother implementation and reduce legal uncertainty across the European tech ecosystem.