... and justice for AIl — Martina Guttau-Zielke

Discover how the EU AI Act impacts developers and businesses through risk-based regulations, compliance requirements, and enforcement measures for ethical AI development.

Key takeaways
  • The EU AI Act is pioneering legislation that aims to balance innovation against risk while protecting citizens' fundamental rights

  • AI systems are categorized into four risk levels:

    • Unacceptable risk — prohibited AI practices (manipulative systems, social scoring)
    • High risk (healthcare, critical infrastructure, law enforcement)
    • Limited risk — transparency obligations (e.g., chatbots must disclose that users are interacting with AI)
    • Minimal risk (most other AI applications, such as spam filters)
  • Key obligations for high-risk AI providers:

    • Technical documentation
    • Risk management systems
    • Data governance
    • Human oversight
    • Conformity assessment and CE marking
    • Regular audits
  • Maximum fines of €35M or 7% of global annual turnover, whichever is higher, for violations, with lower caps for startups/SMEs

  • Staggered transition periods after entry into force:

    • 6 months for prohibited practices
    • 1 year for general-purpose AI (GPAI) model compliance
    • 2 years for high-risk systems
  • National supervisory authorities will be established in each EU member state to enforce compliance

  • Regulatory sandboxes will allow AI systems to be tested under real-world conditions before market entry

  • The Act applies to both EU and non-EU companies that place AI systems on the EU market

  • AI systems used exclusively for military and defense purposes are exempt from the regulations

  • Concerns remain about enforcement capacity and potential loopholes arising from vague definitions

  • The legislation will likely evolve over time through delegated and implementing acts as well as court rulings