AI Regulation in the European Union: Analyzing the AI Act’s Impact on Foundation Model Development and Deployment

Introduction: A Regulatory Watershed for Artificial Intelligence

In April 2024, the European Parliament formally adopted the Artificial Intelligence Act (AI Act), establishing the world’s first comprehensive, horizontal legal framework for AI.1 This landmark legislation will become applicable in stages after its entry into force, from six months for prohibited practices to 36 months for certain high-risk systems, and represents a paradigm shift from voluntary ethical guidelines to enforceable legal obligations. Its risk-based, tiered approach aims to foster trustworthy AI while mitigating societal harms. For the AI research and development community, particularly those working on foundation models (FMs)—large-scale, general-purpose AI systems trained on vast datasets—the Act introduces a novel and complex regulatory regime. This article analyzes the AI Act’s specific provisions for foundation models and general-purpose AI (GPAI), examining their potential impact on innovation, compliance burdens, and the global trajectory of advanced AI development.

The AI Act’s Risk-Based Architecture and Its Novelty

The AI Act categorizes AI systems based on the level of risk they pose, from unacceptable practices (e.g., social scoring) to high-risk applications (e.g., in critical infrastructure, education, or employment), with limited and minimal risk tiers below.2 While this structure primarily targets specific downstream applications, the Act’s most innovative—and debated—component is its separate, horizontal regulation of the models that power them. Recognizing that the capabilities and potential risks of AI are increasingly concentrated at the foundational model layer, the EU legislator introduced specific rules for GPAI models and, within that category, an even stricter regime for models deemed to carry “systemic risk.”

Defining General-Purpose and Foundation Models

The Act defines a GPAI model as an AI model that displays significant generality, is capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications.3 This encompasses the broad class of foundation models, such as large language models (LLMs) and multimodal models, which are characterized by their extensive training data and emergent capabilities. The legal text deliberately avoids the term “foundation model,” opting for the functional definition of GPAI, but the intent to regulate this technological layer is unambiguous.

The Two-Tiered Obligations for GPAI and Systemic-Risk Models

The regulatory requirements for GPAI providers are bifurcated, creating a tiered system of obligations.

Tier 1: Obligations for All GPAI Model Providers

All providers placing GPAI models on the EU market, regardless of their origin, must comply with a baseline set of requirements:4

  • Technical Documentation & Transparency: Create and maintain detailed technical documentation covering the model’s architecture, capabilities, limitations, and the data used for training, to be provided to downstream deployers.
  • Copyright Compliance: Implement a policy to respect EU copyright law, including publishing a sufficiently detailed summary of the training data subject to copyright.
  • Information Provision: Supply downstream providers with the necessary information to enable their compliance with the AI Act’s high-risk system requirements.
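The Tier 1 duties above amount to maintaining a structured transparency record that travels with the model. A minimal sketch of such a record follows; the field names and completeness check are illustrative assumptions, not terminology from the Act itself:

```python
from dataclasses import dataclass, field

@dataclass
class GPAIModelDocumentation:
    """Illustrative record of the baseline (Tier 1) transparency items.

    Field names are hypothetical and do not come from the legal text.
    """
    model_name: str
    architecture: str                        # e.g. "decoder-only transformer"
    capabilities: list = field(default_factory=list)       # documented capabilities
    known_limitations: list = field(default_factory=list)  # documented limitations
    training_data_summary: str = ""          # copyright-relevant training data summary
    downstream_integration_notes: str = ""   # info enabling deployers' own compliance

    def is_complete(self) -> bool:
        # Crude check that the core items exist before the documentation
        # is handed to downstream deployers.
        return bool(self.architecture
                    and self.capabilities
                    and self.training_data_summary)
```

A record like this could be serialized and versioned alongside model releases, so that each checkpoint ships with its corresponding documentation state.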

Tier 2: The “Systemic Risk” Regime

A subset of GPAI models is subject to far more stringent rules. A model is presumed to pose systemic risk if it is trained with a total computing power exceeding 10^25 FLOPs (floating-point operations).5 The European Commission can also designate models as posing systemic risk based on other criteria, such as the number of business users or the number of model parameters. For these models, providers must adhere to additional obligations:

  • Conduct Model Evaluations: Perform and document adversarial testing (e.g., “red-teaming”) to identify and mitigate systemic risks.
  • Assess and Mitigate Systemic Risks: Proactively evaluate and manage potential risks at the EU level, including those related to public health, safety, fundamental rights, society, and democratic processes.
  • Ensure Robust Cybersecurity: Protect the model’s integrity against unauthorized access or manipulation.
  • Report Serious Incidents: Notify the European Commission and relevant national authorities of any significant malfunctions or breaches.
  • Ensure Energy Efficiency: Report on the model’s energy consumption and overall efficiency.

These models will also be subject to direct oversight by a newly established AI Office within the European Commission, which will have the authority to request information, conduct evaluations, and ensure compliance.6
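The compute threshold that triggers this regime can be estimated before training begins. A common rule of thumb for dense transformers puts training compute at roughly 6 × N × D FLOPs (parameters × training tokens, forward plus backward pass); the sketch below applies that heuristic against the Act's 10^25 figure. The heuristic and the example numbers are illustrative assumptions, not a method prescribed by the regulation:

```python
# AI Act, Article 51: presumption of systemic risk above this training compute.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the ~6 * N * D heuristic
    (forward + backward passes for a dense transformer)."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training compute meets the Act's threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 70B-parameter model trained on 15T tokens
# comes out at 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the threshold.
```

Because the threshold keys on raw compute rather than measured capability, estimates like this also illustrate the gaming concern discussed later: more compute-efficient training can deliver comparable capability while staying under 10^25.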

Impact on Foundation Model Development: Compliance as a Design Constraint

The AI Act fundamentally alters the development environment for foundation models intended for the EU market. Compliance is no longer a post-hoc consideration but a design constraint integrated into the model lifecycle.

Increased Overhead and the “Brussels Effect”

The documentation, evaluation, and reporting requirements will impose significant administrative and operational costs on FM developers. For smaller research labs and open-source initiatives, these fixed compliance costs could create a barrier to entry, potentially consolidating the market around well-resourced, incumbent firms that can absorb the regulatory overhead.7 However, the global reach of the AI Act—through its extraterritorial application to providers outside the EU—may trigger a “Brussels Effect,” whereby global companies align their worldwide practices with the EU standard to streamline operations.8 This could diffuse the Act’s transparency and evaluation norms globally.

The Open-Source Conundrum

The Act’s treatment of open-source GPAI models has been a point of intense debate. While the final text provides limited exemptions for free and open-source models released under a license allowing for “use, modification, and distribution,” these exemptions do not apply to models deemed to pose systemic risk.9 Furthermore, the obligation to provide a detailed training data summary remains. This creates uncertainty for open-source developers, who may lack the legal and administrative resources to ensure full copyright compliance, potentially chilling the open dissemination of powerful models.

Impact on Deployment and the Downstream Ecosystem

The Act’s influence extends beyond developers to the vast ecosystem of companies that fine-tune and deploy FMs.

Liability and the Value Chain

The AI Act clarifies liability within the AI value chain. Downstream providers who significantly modify a GPAI model or integrate it into a high-risk AI system assume the responsibilities of a “provider” under the Act.10 This incentivizes FM developers to provide comprehensive technical documentation to their enterprise customers. Conversely, deployers of high-risk AI systems that incorporate GPAI models must conduct a fundamental rights impact assessment, a process that will be heavily dependent on the transparency information flowing upstream from the model provider.11

Standardization and the Market for Conformity

The Act will spur the creation of harmonized standards for model evaluation, cybersecurity, and energy reporting. Compliance with these standards will provide a presumption of conformity with the law. This is likely to create a new market for auditing, testing, and certification services, similar to the ecosystem that emerged around the General Data Protection Regulation (GDPR).12 The quality and interoperability of these standards will be critical for ensuring a coherent single market and avoiding fragmentation.

Unresolved Challenges and Future Outlook

While the AI Act provides a comprehensive framework, significant implementation challenges remain.

  • The FLOPs Threshold: The 10^25 FLOPs threshold for systemic risk is a crude, easily gameable proxy for capability and risk. As algorithmic efficiency improves, models with lower compute budgets may achieve similar capabilities, potentially evading the stricter tier.
  • Enforcement Capacity: The nascent AI Office and national competent authorities must build immense technical expertise to effectively oversee fast-evolving foundation models, a task for which traditional regulatory bodies are not inherently equipped.
  • International Alignment: The EU’s approach diverges from the more sectoral, voluntary frameworks emerging in the United States and the United Kingdom.13 This regulatory divergence could complicate global research collaborations and market access, though it may also establish the EU as a de facto global standard-setter.

Conclusion: Balancing Innovation and Trust in the Age of Foundation Models

The EU AI Act represents a bold and unprecedented attempt to govern the foundational layer of the AI stack. By imposing legally binding obligations on GPAI and foundation model providers, it seeks to internalize the societal externalities of advanced AI development, promoting transparency, accountability, and risk mitigation. The impact on the industry will be profound, shifting resources toward compliance, formalizing evaluation practices, and potentially reshaping the competitive landscape. The success of this regulatory experiment will hinge on its flexible and proportionate implementation. If the standards are pragmatic and the oversight is technically competent, the Act could foster a European ecosystem of trustworthy, human-centric AI. If it is applied rigidly or creates insurmountable barriers for open innovation, it may stifle European competitiveness while pushing cutting-edge development elsewhere. The AI Act is not the final word on AI governance, but it is undoubtedly the first major chapter in the era of regulated foundation models.


1 European Parliament. (2024). Artificial Intelligence Act: MEPs adopt landmark law. Press Release.
2 European Parliament, Council of the European Union. (2024). Artificial Intelligence Act. Article 5-7.
3 Ibid. Article 3(63).
4 Ibid. Article 53.
5 Ibid. Article 51.
6 Ibid. Article 64.
7 Veale, M., & Borgesius, F. Z. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law & Security Review.
8 Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.
9 AI Act, Article 2(5a).
10 Ibid. Article 25.
11 Ibid. Article 27.
12 Selbst, A. D. (2021). An Institutional View of Algorithmic Impact Assessments. Harvard Journal of Law & Technology.
13 White House. (2023). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.; UK Government. (2023). A pro-innovation approach to AI regulation.