EU Delays High-Risk AI Regulation to August 2028

Key Points
- EU extends regulation deadline for high-risk AI systems by 16 months
- Establishes standards for accountability and safety in AI deployment
- Regulatory changes aim to reduce dependency on foreign technology
The European Council has approved an extension of the deadline for regulating high-risk AI systems by 16 months, moving the target date to August 2, 2028. This adjustment is part of the Omnibus AI Act, which aims to simplify existing laws while developing necessary standards and tools for the oversight of high-risk AI applications, such as facial recognition and AI in critical infrastructure. As part of this legislative package, new measures will also prohibit the generation of explicit content involving minors and non-consensual acts.
The strategic implications of this delay are significant: it signals the EU's commitment to building comprehensive frameworks for AI safety and accountability before enforcement begins. The longer timeline may also reduce dependency on existing foreign AI solutions while encouraging national governance of AI technologies. This could foster an environment for in-house technological advancement, ultimately strengthening data sovereignty and supporting the EU's ambitions for leadership in the global AI landscape.