A startup with 400 employees building an AI-powered hiring tool in Berlin just got two extra years before the European Union’s strictest compliance rules apply to its product. On May 7, 2026, negotiators from the EU Council presidency and the European Parliament reached a provisional political agreement that delays the deadline for high-risk AI system obligations to December 2, 2027. The agreement also extends lighter compliance treatment to thousands of additional companies and introduces an explicit ban on AI-generated intimate imagery without consent.
The deal is part of the broader Omnibus VII simplification package and represents the most significant revision to the AI Act since it was adopted. It does not gut the law’s risk-based framework. Instead, it recalibrates the timeline and paperwork so that smaller firms are not crushed by rules whose technical benchmarks have not even been finalized yet.
Delayed Deadlines and Who Benefits
Under the original AI Act, developers and deployers of high-risk AI systems (those used in biometric identification, critical infrastructure management, employment screening, and similar sensitive domains) faced the heaviest compliance load: technical documentation, conformity assessments, post-market monitoring, and registration in an EU-wide database.
The provisional agreement pushes back those obligations by roughly two years. According to the Council’s press communication, high-risk AI systems integrated into regulated products like medical devices or industrial machinery get an even longer runway, with their deadline extended to August 2, 2028. The reasoning is straightforward: companies cannot comply with technical requirements when the harmonized standards defining those requirements are still being written.
The agreement also expands who qualifies for reduced compliance burdens. The European Commission’s original legislative proposal (COM(2025) 836 final) had already proposed extending simplifications beyond small and medium-sized enterprises to include small mid-cap firms, a category sitting just above the SME threshold. Both the Council’s negotiating mandate and the Parliament’s pre-trilogue position endorsed that extension, and the final agreement carries it through. For a startup with 400 employees that previously fell outside the SME definition, this means access to lighter documentation and registration requirements that could cut weeks off a product launch cycle.
The Nudification Ban
Tucked into the same package is a provision that addresses one of the fastest-growing forms of AI-enabled abuse: non-consensual intimate imagery. The agreement adds an explicit prohibition on AI systems that generate or manipulate sexual images of real people without their consent, commonly known as “nudification” apps.
The European Commission confirmed the ban as part of the compromise, framing it as a targeted safeguard that does not restrict legitimate generative AI applications. However, the published details stop there. The final text has not been released, and critical enforcement questions remain open: whether the prohibition targets app developers, hosting providers, app store operators, or end users; what penalties apply; and how the ban will interact with existing obligations under the Digital Services Act and EU data protection law, particularly when content is generated outside the bloc but accessed within it.
What the Council’s Negotiating Mandate Reveals
The Council’s negotiating mandate, adopted earlier in 2026, offers the most granular public record of what was on the table. It reframed the AI literacy obligation so that companies must ensure practical understanding of AI systems among their staff rather than meeting formal training requirements. It modified definitions around SMEs and small mid-caps to align with the broader eligibility expansion. And it added a requirement for the Commission to issue guidance that minimizes compliance overlap between AI Act obligations and existing sectoral product safety directives.
That last point addresses a persistent frustration among manufacturers: the risk that AI-specific rules and pre-existing product safety legislation create duplicative or contradictory requirements. A medical device maker already subject to the EU’s Medical Device Regulation, for example, could face two parallel conformity assessment processes for the same AI component. The agreement signals an intent to streamline that overlap, though the specifics will depend on Commission guidance that has not yet been published.
The legislative file, tracked as procedure 2025/0359(COD), also includes provisions for post-market monitoring flexibility, reduced registration burdens, and expanded oversight by the EU’s AI Office. All three institutions describe a package that preserves the AI Act’s core architecture while smoothing its rougher edges for the businesses that must implement it.
What Companies Still Do Not Know
The agreement is provisional. No consolidated legal text has been published, and the details available come from institutional press releases and pre-trilogue positions rather than from the final regulation itself. Several practical gaps will matter to companies planning compliance strategies right now.
First, the agreement ties high-risk compliance timelines to the readiness of harmonized standards and compliance tools, but no public schedule exists for when those standards will be finalized. If they lag behind the December 2027 deadline, it is unclear whether the timeline shifts again or whether companies face a compliance cliff with incomplete guidance.
Second, budget details for SME and small mid-cap support measures remain opaque. The Parliament referenced support programs, advisory services, and regulatory sandboxes in its position, but no quantified funding allocations have appeared in publicly available documents. Without clear figures, it is impossible to assess whether promised assistance will reach the thousands of companies now brought under the simplified regime.
Third, the AI Office’s expanded role in post-market monitoring has been described only in general terms. No operational plan or impact assessment has been released explaining how that flexibility will work in practice or how it will interact with national market surveillance authorities across the 27 member states.
Where This Leaves the AI Act’s Compliance Timeline
The provisional agreement does not rewrite the AI Act so much as adjust its implementation schedule and widen the on-ramp for smaller companies. The risk-based framework remains intact. Prohibited AI practices, now including nudification, stay banned. High-risk systems still face conformity assessments, technical documentation, and post-market monitoring. What changes is when those obligations bite and how much paperwork the smallest players must produce.
For the agreement to become law, both the Council and Parliament must formally adopt the final text, which then needs publication in the Official Journal of the European Union. Until that happens, companies should treat the current descriptions as a reliable outline of political intent rather than a finished compliance roadmap. The consolidated regulation, along with implementing guidance from the Commission, the AI Office, and national regulators, will determine exactly how much breathing room the new timelines and simplifications actually provide.
*This article was researched with the help of AI, with human editors creating the final content.