The democratic process worldwide is facing a new threat: the use of AI to create and disseminate disinformation campaigns. The concern stems from a recent report by Microsoft’s Threat Intelligence team, which highlights China’s alleged use of AI-generated content to influence Taiwan’s presidential election in January and warns that similar tactics could be deployed in upcoming high-profile elections in India, South Korea, and the US.
Microsoft’s report sheds light on China’s activities during the Taiwan presidential election, suggesting a potential blueprint for future AI-driven disinformation campaigns. A Beijing-backed group tracked as Storm-1376, also known as Spamouflage and Dragonbridge, allegedly played a central role in the operation. The group is accused of uploading a fake audio recording to YouTube in which a candidate who had previously withdrawn from the race appears to endorse another candidate. Microsoft believes the audio was likely generated using AI, highlighting how entirely fabricated narratives could be created to sway voters.
“China will create and amplify AI-generated content to benefit its interests. Despite the chances of such content affecting election results remaining low, China’s increasing experimentation in augmenting memes, videos, and audio will likely continue, and may prove more effective down the line,” the report said.
Furthermore, the group allegedly circulated AI-generated memes targeting specific candidates, particularly the ultimately victorious William Lai. These memes, designed to spread quickly on social media platforms, likely contained unsubstantiated accusations and aimed to damage Lai’s reputation and sway voters’ opinions. Mimicking a tactic employed by Iran, Storm-1376 also reportedly used AI-generated news anchors to deliver fabricated stories about Lai’s personal life. These anchors, created with tools such as CapCut, developed by TikTok parent ByteDance, added a layer of seeming legitimacy to the disinformation campaign.
“Storm-1376 has promoted a series of AI-generated memes of Taiwan’s then-Democratic Progressive Party (DPP) presidential candidate William Lai, and other Taiwanese officials as well as Chinese dissidents around the world. These have included an increasing use of AI-generated TV news anchors that Storm-1376 has deployed since at least February 2023,” Microsoft said.
Microsoft warns that China’s experimentation with AI-generated content for election interference is likely to continue and may become more effective. The report expects Chinese state-backed cyber groups, possibly working alongside North Korea, to target the upcoming elections in India (the Lok Sabha polls), South Korea, and the US. The concerns are valid, given the number of countries holding elections this year; China may create and amplify AI-generated content via social media platforms to advance its geopolitical interests during these periods. For those who are unaware, India’s seven-phase Lok Sabha election will commence on Friday, April 19, and continue till June 1, with results to be declared on June 4.
Microsoft’s fears are not unfounded, given AI’s potential to influence electoral outcomes. Malicious actors, including state-sponsored entities and cybercriminals, can leverage AI, especially generative AI, to create and disseminate deceptive content such as deepfakes and fabricated news stories, with the aim of manipulating voter perceptions and behavior. By exploiting vulnerabilities in digital platforms and social media networks, perpetrators can also spread false information at an unprecedented scale.