Introduction: The Imperative of Governing Intelligence
The rapid ascent of artificial intelligence, particularly foundation models and generative AI, presents a profound governance challenge for democratic societies. These technologies promise significant economic and social benefits but concurrently introduce risks to fundamental rights, market competition, and democratic integrity. How states choose to regulate this technological force reflects deeper constitutional values, legal traditions, and economic philosophies. The regulatory approaches emerging from the European Union and the United States represent two distinct, influential paradigms. The EU, with its ex ante risk-based regulatory model, contrasts sharply with the U.S.’s sectoral, ex post enforcement framework. This comparative analysis examines the philosophical underpinnings, key legislative instruments, and potential global implications of these divergent paths in AI governance.
Philosophical and Constitutional Foundations
The regulatory divergence between the EU and U.S. is not incidental but rooted in foundational legal and political principles. The European approach is fundamentally precautionary and rights-based. It emanates from the EU’s constitutional commitment to a “high level of protection” for health, safety, and fundamental rights as enshrined in its treaties¹. This tradition prioritizes collective security and the prevention of harm, viewing stringent regulation as a prerequisite for building public trust and a stable single market. Conversely, the American model is anchored in principles of innovation primacy, federalism, and a First Amendment culture skeptical of prior restraints on speech and technology². The U.S. system often treats technological development as a form of protected expression and economic activity, preferring to address harms after they materialize through litigation and targeted enforcement rather than preemptive, horizontal rulemaking.

The European Union: The Risk-Based, Ex Ante Model
The EU’s strategy crystallizes in the Artificial Intelligence Act (AI Act), the world’s first comprehensive horizontal regulatory framework for AI. Approved by the European Parliament in March 2024 and in force since August 2024, it establishes a classification system that prohibits certain AI practices deemed unacceptable, strictly regulates high-risk AI systems, and imposes lighter transparency obligations on limited-risk systems such as chatbots³.
Key characteristics of the EU model include:

- Risk-Based Tiering: Legal obligations are directly proportional to the perceived risk level of an AI application. High-risk systems (e.g., in critical infrastructure, employment, or law enforcement) face stringent requirements for conformity assessments, data governance, and human oversight.
- Ex Ante Compliance: Providers of high-risk AI must demonstrate compliance before their systems enter the EU market, involving technical documentation, quality management systems, and in some cases, third-party conformity assessment.
- General-Purpose AI (GPAI) Rules: A landmark feature is the specific regulation of foundation models and GPAI. Providers of models with “high-impact capabilities” face additional obligations, including model evaluations, systemic risk assessments, and incident reporting to the newly established AI Office⁴.
- Centralized Enforcement: While implementation involves member states, the European Commission’s AI Office plays a central role in coordinating enforcement, particularly for GPAI, ensuring a unified regulatory front.
The United States: The Sectoral, Ex Post Model
The U.S. approach is decentralized, relying on a patchwork of existing authorities, voluntary frameworks, and state-level initiatives. There is no federal AI law analogous to the AI Act. Instead, governance is primarily driven by executive action, agency guidance, and the adaptation of sector-specific laws.
Key characteristics of the U.S. model include:
- Executive Order 14110: The Biden Administration’s October 2023 Executive Order on “Safe, Secure, and Trustworthy AI” is the cornerstone of federal policy. It directs federal agencies to use existing authorities to manage AI risks, setting standards for safety, security, and equity⁵. Its binding force, however, extends mainly to the federal government’s own procurement and grant-making activities, supplemented by Defense Production Act reporting requirements for developers of the most powerful dual-use foundation models.
- Agency-Led Guidance: Sectoral regulators like the Federal Trade Commission (FTC), Equal Employment Opportunity Commission (EEOC), and Consumer Financial Protection Bureau (CFPB) have issued guidance and initiated enforcement actions applying existing consumer protection, anti-discrimination, and fair lending laws to AI systems⁶.
- Voluntary Commitments: The White House has secured voluntary safety and security commitments from leading AI companies, emphasizing a cooperative, non-regulatory approach to frontier model development.
- State-Level Innovation: States like California, Colorado, and Illinois are enacting their own laws, particularly concerning AI in hiring, biometric privacy, and automated decision-making, creating a complex regulatory landscape for national operators.
Comparative Analysis: Strengths and Criticisms
Each model presents distinct advantages and faces significant critiques, often mirroring the weaknesses of the other.
The EU AI Act: Comprehensive but Complex
Strengths: The AI Act provides legal certainty and a clear compliance roadmap for companies operating in the EU. Its focus on fundamental rights offers robust protections against algorithmic discrimination and surveillance. By regulating GPAI, it attempts to govern the technological layer driving downstream applications, a novel and ambitious regulatory target.
Criticisms: Critics argue the regulation is overly bureaucratic, potentially stifling innovation, especially for startups lacking resources for compliance. The risk classification system may be too rigid to adapt to rapidly evolving technologies. Furthermore, the Act’s extraterritorial application (affecting any provider placing AI systems on the EU market) sets a de facto global standard, raising questions of regulatory imperialism⁷.
The U.S. Approach: Agile but Fragmented
Strengths: The U.S. framework offers flexibility, allowing regulators to adapt existing laws to new technological contexts without awaiting slow-moving legislation. This can foster a dynamic environment for innovation. The focus on sector-specific application allows for nuanced rules tailored to distinct contexts like healthcare or finance.
Criticisms: The lack of a comprehensive federal law creates a patchwork of state regulations, complicating compliance for national firms. Reliance on ex post enforcement means harms may occur before regulatory intervention. Voluntary commitments lack legal enforceability, creating a reliance on corporate goodwill. This fragmentation may also undermine the U.S.’s ability to shape global norms compared to the EU’s unified voice.
Global Implications and the “Brussels Effect”
The EU’s first-mover advantage with the AI Act is likely to trigger a “Brussels Effect,” whereby multinational corporations standardize their global operations to the most stringent regulation—in this case, the EU’s⁸. Countries from Canada to Brazil are drafting AI laws that borrow heavily from the AI Act’s risk-based architecture. The U.S., while promoting its vision through forums like the G7 Hiroshima AI Process and the U.S.-EU Trade and Technology Council, risks ceding normative influence if it cannot present a coherent, legislatively backed alternative model. The interplay between these two approaches will define the emerging global governance landscape, with many jurisdictions likely adopting hybrid models.
Conclusion: Converging on Outcomes, Diverging on Methods
The comparative analysis reveals that while the European Union and the United States share broad goals—promoting trustworthy, safe, and rights-respecting AI—their methodologies reflect deep-seated institutional and cultural differences. The EU’s comprehensive, ex ante regulation seeks to architect a controlled ecosystem for AI development. The U.S.’s adaptive, sectoral, and ex post approach aims to govern through market discipline and legal precedent. Neither model is without peril: the EU risks regulatory overreach and innovation chill, while the U.S. risks regulatory gaps and inconsistent protection. As AI capabilities advance, pressure may mount for greater convergence, perhaps with the U.S. adopting more targeted federal legislation and the EU streamlining its compliance mechanisms. The ultimate test for both democracies will be whether their chosen frameworks can successfully mitigate the societal risks of AI without extinguishing its transformative potential for human progress.
¹ Consolidated Version of the Treaty on the Functioning of the European Union, Article 169, and Charter of Fundamental Rights of the European Union, 2012 O.J. C 326/391.
² Citron, D. K., & Pasquale, F. (2014). The Scored Society: Due Process for Automated Predictions. Washington Law Review, 89, 1–33.
³ European Parliament and Council. Regulation (EU) 2024/… laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). (2024).
⁴ Veale, M., & Zuiderveen Borgesius, F. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International, 22(4), 97–112.
⁵ The White House. Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. (October 30, 2023).
⁶ Federal Trade Commission. (2021). Aiming for truth, fairness, and equity in your company’s use of AI. FTC Business Blog.
⁷ Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.
⁸ Ibid.
