EconoScope | China outlines AI governance framework balancing development and security

【CMHnews】Geoffrey Hinton, widely regarded as the godfather of artificial intelligence, warned at the 2025 World Artificial Intelligence Conference in Shanghai that AI could become dangerous, comparing humans' relationship with the technology to keeping a tiger cub as a pet that might turn deadly when fully grown.

 

    AI development is advancing at an unpredictable pace, while governance mechanisms and international consensus lag behind. In response, China is focusing not only on developing AI but also on ensuring its safe advancement and promoting cooperation amid global competition.

 

Humanoid robots perform in a mixed martial arts exhibition match in Daxing District, Beijing, on Aug. 11, 2025. (Photo/China News Service)

    China's AI strategy rests on three key priorities. The first is to ensure the healthy and orderly development of AI.

 

    Shen Weixing, dean of Tsinghua University's School of Law, said at the China Internet Conference that governance should not stifle industrial growth, but AI must follow clear rules to protect development and public interests. A recent State Council meeting called for fully implementing the "AI+" initiative, accelerating large-scale AI applications, and promoting deeper integration across sectors.

 

    The second priority is building a diversified, collaborative governance framework. Governance aims to prevent risks while unleashing technology's full potential within a controllable scope.

 

    The State Council called for enhanced security capabilities and a multi-stakeholder AI governance system.

 

    Liu Dian, a researcher at Fudan University, said China should reject "small yard, high fence" policies and promote practical AI cooperation through initiatives like the Belt and Road.

 

    Yao Jia, a law professor at the Chinese Academy of Social Sciences, added that an inclusive and diverse framework should guide future international collaboration.

 

    The third priority is applying advanced technology to address AI's new risks.

 

    Traditional, fragmented security measures are no longer sufficient. At WAIC 2025, Hinton urged humanity to explore general AI training methods to guide AI toward benevolence.

 

    Zhou Bowen, director of the Shanghai AI Laboratory, said managing agents that may surpass human intelligence requires "intrinsic safety," shifting the focus from making AI safe to making safe AI.

 

    China's approach integrates development, governance, and technological safety, aiming to balance innovation with regulatory oversight in an era of global competition.

 

By Gong Weiwei

Reposted from ECNS.cn
