DSET’s non-resident research fellow You-Hao Lai recently published an op-ed with the Global Taiwan Institute (GTI), a Washington-based think tank specializing in Taiwan policy. His article, titled “Preventing AI-Enhanced Foreign Interference in U.S. Elections: Lessons from Taiwan’s 4C Strategy,” examines Taiwan’s experience in combating AI-driven disinformation from China. Lai calls on U.S. policymakers to learn from Taiwan’s strategies to safeguard American democracy against foreign influence operations.
GTI, Washington’s only think tank dedicated to Taiwan policy, regularly publishes analyses on issues affecting Taiwan and its relationship with the international community. In his article, Lai draws from Taiwan’s electoral experience to illustrate how the Taiwanese government’s “4C Strategy” for managing disinformation can serve as a model for the U.S. and other democracies. He also references DSET’s recent policy report, “Generative AI and Democracy,” which explores the intersection of emerging technology and democratic governance.
Lai emphasizes the growing threat of AI-generated disinformation, which authoritarian states such as Iran, Russia, and China weaponize to manipulate public opinion and interfere with democratic elections. Citing research from Varieties of Democracy (V-Dem)—a global research institute focused on measuring democratic resilience—Lai notes that Taiwan has been the world’s most targeted country for disinformation for 11 consecutive years. Despite this, Taiwan remains ranked 10th globally and 1st in Asia for democratic resilience, and it successfully conducted its 2024 presidential election without significant disruption.
Lai highlights two major AI-powered disinformation campaigns that targeted Taiwan during the 2024 election cycle, illustrating the sophistication of foreign influence operations. The first case involves a YouTube channel named “TrueTJL,” which, three weeks before the election, released an AI-generated voice recording impersonating Lin Chao-Lun, a former Taipei Investigation Bureau officer. The fabricated audio falsely accused then-presidential candidate Lai Ching-Te of being an informant during Taiwan’s martial law era. The video spread rapidly across social media platforms, including PTT (Taiwan’s Reddit equivalent), YouTube, and other political forums, where it was amplified by multiple accounts. Investigations revealed that many of these amplifying accounts were recently created or previously inactive, indicating a coordinated foreign disinformation effort.
The second case occurred on January 2, when an anonymous user uploaded a 318-page document titled “The Secret History of Tsai Ing-wen” to the international open-access platform Zenodo. This document, which contained fabricated allegations about Taiwan’s president, quickly circulated across X (formerly Twitter), Facebook, TikTok, Wikipedia, and Taiwanese content platforms like Matters and Mirror Fiction. In the following days, a wave of newly created YouTube accounts published over 490 AI-generated videos amplifying the document’s false claims. According to findings from the Australian Strategic Policy Institute (ASPI), the document was produced using Chinese-developed software widely associated with Beijing-linked influence operations.
To counter these ongoing threats, Taiwan has developed a comprehensive “4C Strategy” to combat foreign disinformation. Lai outlines the four pillars of this strategy: Cutting production (disrupting the creation of false information), Clarifying falsehoods (quickly debunking misinformation), Curbing dissemination (limiting the spread of disinformation), and Cultivating digital media literacy (educating the public to recognize and resist fake news). This multi-pronged approach, involving cooperation between the government, tech companies, and civil society, has enabled Taiwan to mitigate the impact of AI-generated disinformation. Lai argues that the 4C strategy offers a valuable framework for the U.S. and other democracies seeking to protect their elections from AI-enhanced foreign interference.
However, Lai emphasizes that there is no single solution for countering AI-driven disinformation. Legal penalties against foreign actors are often slow and ineffective, given the cross-border nature of these operations. Fact-checking initiatives, while valuable, cannot keep pace with the explosion of AI-generated content. Moreover, over-reliance on citizen media literacy shifts responsibility away from governments and media organizations, placing an unfair burden on the public.
Instead, Lai advocates for a comprehensive strategy that engages government agencies, civil society, and digital infrastructure stakeholders to counter disinformation at every stage—from its creation to its dissemination and public reception. He argues that Taiwan’s 4C Strategy provides a proactive model that the U.S. and other democracies should seriously consider adopting to defend their political systems against AI-enhanced foreign influence operations.