This article highlights 10 key takeaways for US companies regarding the EU AI Act, which establishes a comprehensive regulatory framework for AI systems. Because of its extraterritorial scope, the EU AI Act may reach US companies that place AI systems on the EU market or whose AI systems' outputs are used within the EU.
Structurally, the EU AI Act takes a tiered, risk-based approach. It prohibits AI systems deemed to pose "unacceptable risk," such as government social scoring and real-time remote biometric identification in public spaces, while high-risk AI systems face strict compliance requirements, including risk management, record keeping, and data governance.
For companies developing or deploying general-purpose AI models (often called foundation models), such as large language models, additional obligations apply, with heightened requirements for models deemed to pose "systemic risk."
Overall, the article emphasizes the importance of aligning US AI governance strategies with the EU AI Act, as emerging regulations in the US and globally are drawing on its risk-based framework. Enforcement phases in on a rolling schedule beginning in 2025, with prohibitions on unacceptable-risk systems applying first and most high-risk obligations following later, giving US companies time to adjust their AI compliance programs and practices.