
EU AI Act

EU law that classifies AI systems by risk level (unacceptable, high, limited, minimal) and sets obligations for each tier.

Level 1

The EU AI Act became law in 2024 with a phased implementation through 2027. It bans certain AI uses (social scoring, untargeted biometric scraping), imposes strict obligations on "high-risk" AI (medical devices, hiring, law enforcement), and requires transparency for generative AI (disclosure of AI-generated content, training data summaries). General-Purpose AI (GPAI) models with systemic risk (training FLOPs above 10^25) face extra obligations: model evaluations, cybersecurity, and incident reporting.
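
The systemic-risk threshold can be sketched numerically. This is a hedged illustration, assuming the widely used 6ND approximation (FLOPs ≈ 6 × parameters × training tokens); the model sizes below are hypothetical, not official figures for any named model.

```python
# Illustrative sketch: estimate training compute with the common 6*N*D
# heuristic and compare it against the AI Act's 10^25 FLOPs
# systemic-risk presumption threshold. Numbers are hypothetical.

SYSTEMIC_RISK_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Rough compute estimate: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

def is_systemic_risk(params: float, tokens: float) -> bool:
    """A model is presumed systemic-risk above the FLOPs threshold."""
    return training_flops(params, tokens) >= SYSTEMIC_RISK_FLOPS

# A hypothetical 400B-parameter model trained on 15T tokens:
print(f"{training_flops(400e9, 15e12):.2e}")   # 3.60e+25 -> over threshold
print(is_systemic_risk(400e9, 15e12))          # True
print(is_systemic_risk(7e9, 2e12))             # False: ~8.4e22 FLOPs
```

Under this heuristic, frontier-scale training runs land comfortably above 10^25 FLOPs, while small open models stay well below it.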

Level 2

The AI Act has four risk categories. Unacceptable risk: banned outright. High risk: conformity assessment, a risk management system, data governance, documentation, transparency, human oversight, and accuracy/robustness/cybersecurity requirements. Limited risk: transparency obligations (disclose AI interaction). Minimal risk: no specific obligations. GPAI providers (OpenAI, Anthropic, Google, Mistral) must provide technical documentation, copyright compliance summaries, and downstream integration guidance. Systemic-risk GPAI (GPT-4 class, Claude, Gemini Ultra, Llama 3.1 405B+) faces additional evaluation and reporting. Fines reach €35M or 7% of global revenue, higher than GDPR's maximums.

Level 3

Article 6 defines high-risk AI via two pathways: Annex I (regulated product) or Annex III (specific use cases). Article 53 covers GPAI provider obligations; Article 55 adds systemic-risk tier requirements including model evaluations and adversarial testing. Codes of practice and harmonized standards are being developed through 2026. Pre-market conformity assessment for high-risk systems may involve notified bodies. Enforcement is by national competent authorities coordinated by the new EU AI Office. Early cases will test the 10^25 FLOPs threshold and how "substantial modification" triggers re-assessment.
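
The two-pathway logic of Article 6 can be sketched as a simple check. This is a simplified illustration: the category sets below are a few illustrative examples, not the full legal annexes, and real classification turns on detailed legal criteria.

```python
# Hedged sketch of Article 6's two high-risk pathways:
# Annex I (AI as a safety component of a regulated product) and
# Annex III (listed sensitive use cases). Example sets only.

ANNEX_I_PRODUCT_AREAS = {"medical_device", "machinery", "aviation"}
ANNEX_III_USE_CASES = {"hiring", "credit_scoring", "law_enforcement", "education"}

def is_high_risk(product_area: str = "", use_case: str = "") -> bool:
    """High risk if either pathway applies."""
    return product_area in ANNEX_I_PRODUCT_AREAS or use_case in ANNEX_III_USE_CASES

print(is_high_risk(use_case="hiring"))              # True: Annex III pathway
print(is_high_risk(product_area="medical_device"))  # True: Annex I pathway
print(is_high_risk(use_case="email_autocomplete"))  # False: neither pathway
```

Either pathway alone is sufficient; a system need not satisfy both to be high-risk.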

The takeaway for you
If you are a
Researcher
  • Four tiers: unacceptable (banned), high, limited, minimal
  • GPAI with FLOPs > 10^25 = systemic risk tier
  • Fines up to €35M or 7% global revenue
If you are a
Builder
  • Most B2B AI tools fall in limited/minimal risk, with manageable obligations
  • Hiring, credit, and medical AI = high risk, with a significant compliance burden
  • Model providers handle GPAI obligations; downstream users inherit some
If you are a
Investor
  • Creates significant compliance cost but also a moat for well-funded providers
  • EU-native AI labs (Mistral, Aleph Alpha) benefit from home-court advantage
  • Enforcement cases in 2026-27 will clarify actual risk
If you are a
Curious Normie
  • Europe's rulebook for AI
  • Bans the worst uses, regulates risky ones, leaves most alone
  • Coming into full force gradually through 2027
Gecko's take

The AI Act is the first real stress test for AI compliance. Expect at least one frontier lab to face enforcement by 2027.

Phased rollout through 2027. Bans on unacceptable-risk AI took effect Feb 2025. GPAI obligations from Aug 2025. High-risk provisions from Aug 2026. Full application Aug 2027.