The Omnibus Might’ve Parked The EU AI Act For Now, But Regulation Is Still En Route

Blog
Digital Transformation Leaders
20 Mar, 2026

The largest tranche of EU AI Act articles was due to take effect on August 2, 2026, covering the remainder of the Act except Article 6(1). This followed the August 2, 2025 wave that activated governance, penalties, notified bodies and the framework for general‑purpose AI (GPAI) models to govern AI use in the EU. However, the European Commission's 'Omnibus VII' simplification package, signed on March 13, 2026, has pushed back key deadlines. High‑risk rules move to December 2, 2027, for standalone systems and August 2, 2028, for AI embedded in regulated products. The package also extends micro‑firm reliefs to small and medium enterprises, permits limited processing of sensitive data for bias detection, reinforces AI Office powers and requires providers claiming a "not high‑risk" exemption to register in the EU high‑risk database. For vendors, the practical effect is modestly greater flexibility and additional time, but not a change in direction.

Much of the regulatory roadmap remains intact. Once the articles apply, providers of high‑risk systems must complete a third‑party conformity assessment and register in the EU database before market entry for categories such as critical infrastructure, biometrics and education. Compliance will come at a cost: industry group DIGITALEUROPE estimated that certifying a single high‑risk system will cost over €200,000, with approximately €100,000 in annual compliance personnel costs. But non‑compliance is costlier still. Fines can reach €15 million or 3% of global turnover for high‑risk breaches, and €35 million or 7% of turnover for use of prohibited AI.
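To make that exposure concrete: under the Act's penalty provisions, the applicable ceiling for larger firms is generally the higher of the fixed cap and the percentage of worldwide annual turnover. A minimal sketch of that calculation (the function name and the €2bn turnover figure are illustrative, not from the source):

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Upper bound of an EU AI Act fine for a larger firm:
    the higher of the fixed cap and a percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, pct_cap * turnover_eur)

# Hypothetical company with €2bn global annual turnover:
turnover = 2_000_000_000
high_risk_cap = fine_ceiling(turnover, 15_000_000, 0.03)   # 3% of turnover = €60m > €15m
prohibited_cap = fine_ceiling(turnover, 35_000_000, 0.07)  # 7% of turnover = €140m > €35m
```

For a business of this size, the percentage cap dominates, which is why exposure scales with revenue rather than stopping at the headline fixed amounts.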

Data quality expectations are also stringent, with training and testing datasets required to be representative and, to the greatest extent possible, free of errors. Buyers are already increasingly demanding data security, with the Verdantix AI corporate survey finding that 65% of respondents cite this as a significant factor slowing AI adoption. Vendors should expect due diligence to scrutinize lineage, controls and testing evidence. Detailed technical documentation and clear instructions for use will be essential for deployer compliance; black‑box approaches will not clear assessments or enterprise procurement.

In addition, downstream vendors that substantially modify AI models, for example through fine‑tuning or distillation, will inherit full provider obligations. Meanwhile, 'deployers' (enterprise buyers) are legally responsible for using AI in accordance with provider instructions and for ensuring human oversight. Contracts should reflect this allocation of responsibility: terms of service need joint input from legal, risk and technical teams to mitigate liability from misuse.

Ultimately, vendors should treat the EU AI Act delay as contingency time, not a reprieve: the legislation is significant and requires preparation now. Providers must solidify procurement and delivery processes, as buyer RFIs will probe model robustness, bias testing and cyber security. Regulatory readiness will be a competitive edge; vendors that build trustworthy AI will face fewer obstacles in sales cycles and will be better positioned as the Act's final deadlines take effect.

To stay up to date with our AI research, including upcoming reports on selecting agent solutions and governance strategies to mitigate AI risk, visit the AI Applied insights page.


About The Author

Aleksander Milligan

Analyst
