Reuters:
India’s election was in full swing when hundreds of social media users shared a video that appeared to show Home Minister Amit Shah saying the ruling party wanted to scrap a quota system aimed at undoing centuries of caste discrimination.
The controversial comments caused a brief furore before fact-checkers stepped in and declared the video a fake, made from old footage doctored with basic editing tools – a so-called cheapfake.
In the run-up to the ongoing election, the results of which are due on June 4, politicians and digital rights groups voiced concern that voters could be swayed by misinformation contained in AI-driven “deepfake” videos.
But fact-checkers say most of the falsified pictures and videos posted online during the six-week election have not been made using artificial intelligence (AI), but with relatively cheap and simple techniques, such as editing footage or mislabelling it to present content in a misleading context.
“Maybe 1% of the content we have seen is AI-generated,” said Kiran Garimella, an assistant professor at Rutgers University who researches WhatsApp in India. “From what we can tell, it’s still a very small percentage of misinformation.”
Whether cheapfakes or deepfakes, the result can be equally convincing, fact-checkers say, putting the onus on social media companies to do more to root out all forms of misinformation being spread on their platforms.
“You can resurrect dead leaders using AI but people realize it’s propaganda… However, if you mislabel a video or clip it out of context, people are more likely to believe it,” said Pratik Sinha from Alt News, an Indian nonprofit fact-checking website.
“Rather than getting into the binary of deepfakes and cheapfakes, there is a need for finding a way to tackle misinformation more effectively,” Sinha told the Thomson Reuters Foundation.
Both Meta Platforms Inc, which owns Facebook and Instagram, and X introduced new policies to crack down on different forms of misinformation in a big year for global elections, but fact-checking groups say the results have been disappointing.
Responding to criticism from its oversight board, Meta updated its guidelines in April to add prominent labels to all forms of misinformation. Its earlier policy had applied only to content altered or created using AI.