Abstract:
The EU Artificial Intelligence Act (AI Act) establishes a unified regulatory framework for AI systems through a risk-based classification scheme, regulatory sandboxes and a multi-level enforcement mechanism. It is often viewed as a key reference point for global AI governance. As the Act enters its phased implementation stage, however, several operational pressures have become apparent. The regime’s technical complexity places significant demands on the administrative capacity of regulators and on the compliance capabilities of stakeholders. The multi-level governance structure shared between the EU and its Member States raises coordination and compliance costs. While phased implementation mitigates abrupt regulatory disruption, it may also produce uneven enforcement timelines and greater uncertainty in practice. In addition, overlaps between the AI Act and existing EU digital and product regulatory frameworks further complicate regulatory coordination. Against the background of Europe’s comparatively limited competitiveness and market attractiveness in the AI industry, a regulatory framework that is complex and relies heavily on robust enforcement conditions may face constraints in diffusing as a broadly exportable governance template. For China, the value of the EU experience lies not in replicating the institutional architecture as a whole, but in adapting selected regulatory techniques. Elements such as a risk-based structure, phased implementation, regulatory sandboxes, and the involvement of multiple stakeholders remain instructive for developing responsive governance strategies across sectors with different industrial dynamics and regulatory constraints.