China's use of AI to create propaganda in elections

April 8, 2024

Microsoft has issued a warning about how China is using AI to create propaganda in the US, South Korea, and India. The notion is that it’s now possible to use generative AI tools to scale what people have been doing for years: characterizing events that are happening in the world with a particular political slant to shift people's thinking. We’ve already seen this with Russian propaganda related to its invasion of Ukraine. Now we’re seeing more targeted propaganda that is specifically tailored to your identified point of view. That’s what’s scary.

We've entered an era that is well captured by "The Library of Babel," a short story by Jorge Luis Borges about a library that contains every book that could possibly be written. Some of them are true. Some of them are false. And somewhere, there's an index that tells you where all the true books are. But right next to it is another index that claims to do the same, yet points you to books that are actually false. In our current era, this is happening so fast that regulating against it before it has an impact is going to be incredibly difficult.

We’re in a place where it becomes incumbent upon us, the people ingesting information, to cast a more critical eye on what we see online. We must become skeptics and use our critical thinking skills. That won't work for everybody because some people don't want to have a critical eye. But we need to be aware of the things we're looking at and the things we're reading. A good rule of thumb is: if it looks too good to be true, it’s probably not true. For example, if you see an appealing, short snippet of information about someone you dislike politically, it was likely crafted to appeal to you so that you would share it. That’s a dangerous piece of content.

We certainly want the social media companies to pay attention to where their content is coming from and to try to get rid of bot-based content. But the reality is that well-directed content is going to be as powerful a dissemination mechanism as hundreds of thousands of bots. It could be that we'll enter an era of not trusting anything that's online: not just text, but also video that looks professional and credible. It's a difficult world, and we need to think about how we might regulate against it without impinging upon freedom of speech.

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program