
Following the release of China’s open-weight DeepSeek model, democracies have grown increasingly concerned about China’s advances in artificial intelligence. Kai-Shen Huang, Director of the Democratic Governance Program at DSET, recently published an op-ed in The Diplomat, “What Democracies Get Wrong About Chinese AI,” urging democratic nations to strengthen their capacity to govern inbound Chinese AI systems. He calls for institutional resilience: the ability to review where AI systems originate, oversee how they are applied, and reject high-risk systems when necessary.
Huang argues that DeepSeek not only demonstrates China’s progress in language model technology but also reflects its growing ability to sustain a commercially viable AI ecosystem offering services that are globally competitive on both performance and price. In contrast, many democracies remain overly focused on keeping pace with China in model development while overlooking a more urgent challenge: the absence of regulatory frameworks governing the entry of Chinese AI systems into their domestic markets.
The article points out that current international AI policy focuses primarily on preventing the export of domestic technologies, for example through restrictions on outbound investment and technology transfer, while paying little attention to regulating the “import” of Chinese models. Chinese AI tools often enter democracies through open-source releases, enterprise partnerships, or third-party resellers. Some Chinese open-weight models may even be rebranded and distributed by local vendors, making their origins difficult to trace and leaving them outside existing policy mechanisms. Even the EU AI Act, despite being a major legislative milestone, has yet to explicitly address such geopolitical risks.
The article concludes that if democracies are to respond effectively to the global expansion of Chinese AI, they must shift their focus from simply catching up in technology to strengthening data governance, institutional resilience, and cross-border risk assessment. Establishing a sustainable regulatory framework, including source-of-origin review, application risk oversight, and deployment standards, will help safeguard the stability and value alignment of democratic systems amid AI’s rapid evolution.