AI-generated propaganda: A growing threat
Artificial intelligence-written articles are nearly as effective as human-written ones, making propaganda harder to detect
With Google's Gemini AI (artificial intelligence) chatbot recently coming under fire for allegedly being "biased" against PM Narendra Modi, researchers at Stanford University and Georgetown University in the US have revealed that AI-generated propaganda is almost as effective and persuasive as real propaganda, based on a study involving more than 8,000 American adults.
It was claimed that when asked a question about fascism, the Gemini tool gave a direct (and allegedly derogatory) reply about PM Modi, but refused to give a clear answer when the same question was asked about former US President Donald Trump and Ukraine's President Volodymyr Zelenskyy, the Mint reported.
According to an Indian Express report, a screenshot shared by a user on social media platform X said Gemini was asked whether PM Modi is a ‘fascist’, to which the platform responded that he has been “accused of implementing policies some experts have characterised as fascist”, based on factors like the “BJP’s Hindu nationalist ideology, its crackdown on dissent, and its use of violence against religious minorities”.
The researchers also warned that propagandists could use AI to expose internet users to far larger numbers of such articles, increasing the volume of propaganda and making it harder to detect.
The original propaganda articles used in the study made claims about US foreign relations, such as the false claim that Saudi Arabia had committed to help fund the US-Mexico border wall or that the US had fabricated reports showing the Syrian government had used chemical weapons, the researchers explained.
For each of these articles, the research team fed one or two sentences from the original propaganda to GPT-3, a predecessor of the large language models that power ChatGPT. Such models, trained on vast amounts of text, can generate fluent responses in the natural language humans use to communicate. Three other propaganda articles on unrelated topics were also fed to GPT-3 as templates for style and structure.
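What the researchers describe is a standard "few-shot" prompting pattern. The sketch below is a minimal illustration of that pattern using the legacy OpenAI completions endpoint that was current in the GPT-3 era; the model name, placeholder snippets and generation parameters are illustrative assumptions, not the study's actual prompts or settings.

```python
# Minimal sketch of few-shot prompting a GPT-3-era completion model.
# Model name, snippets and parameters are illustrative assumptions only;
# they are not the materials used in the Stanford/Georgetown study.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Three unrelated propaganda articles serve as style/structure templates.
template_articles = [
    "HEADLINE: ...\nARTICLE: <full text of unrelated article 1>",
    "HEADLINE: ...\nARTICLE: <full text of unrelated article 2>",
    "HEADLINE: ...\nARTICLE: <full text of unrelated article 3>",
]

# One or two sentences from the original propaganda article seed the
# new, AI-written article making the same claim.
seed_sentences = "<one or two sentences from the original article>"

prompt = (
    "\n\n".join(template_articles)
    + "\n\nHEADLINE: ...\nARTICLE: "
    + seed_sentences
)

# Legacy completions call (pre-chat API), as used with GPT-3 models.
response = openai.Completion.create(
    model="davinci",      # GPT-3 base model of that era (assumption)
    prompt=prompt,
    max_tokens=500,       # enough for a short article
    temperature=0.7,      # moderate randomness
)

generated_article = seed_sentences + response["choices"][0]["text"]
print(generated_article)
```

The significance, in the researchers' framing, is how cheap this loop is: once the seed sentences and templates are chosen, many variations of the same claim can be generated with no further human writing.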
In December 2021, the researchers presented both the original and the AI-generated propaganda articles to 8,221 US adults, recruited through the survey company Lucid.
They clarified that, after the study concluded, participants were informed that the articles came from propaganda sources and may have contained false information.
Reading the AI-generated material proved nearly as effective as reading the originals: roughly 44 per cent of participants agreed with its claims, suggesting that many AI-written articles were as persuasive as those written by humans, the researchers said.
"We expect that these improved models, and others in the pipeline, would produce propaganda at least as persuasive as the text we administered," said researchers.
With PTI inputs