Automated accounts or "bots" played a "disproportionate" role in spreading misinformation online during the 2016 US presidential election, according to an analysis of 14 million messages and 400,000 articles shared on Twitter.
Researchers from Indiana University in the US found that a mere six per cent of Twitter accounts identified as bots were enough to spread 31 per cent of the "low credibility" information on the network.
These accounts were also responsible for 34 per cent of all articles shared from "low credibility" sources.
"This study finds that bots significantly contribute to the spread of misinformation online as well as shows how quickly these messages can spread," said lead author Filippo Menczer, a professor at the university.
The analysis, published in the journal Nature Communications, also revealed that although bots represent only a small fraction of the accounts spreading viral messages, they amplify a message's volume and visibility until it is more likely to be shared broadly.
"People tend to put greater trust in messages that appear to originate from many people," added co-author Giovanni Luca Ciampaglia, an assistant research scientist at the university.
"Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them," he noted.
Other tactics for spreading misinformation included amplifying a single tweet (potentially controlled by a human operator) across hundreds of automated retweets; repeating links in recurring posts; and targeting highly influential accounts.
The team also ran an experiment on a simulated version of Twitter and found that deleting the 10 per cent of accounts that appeared to be bots resulted in a major drop in the number of stories from low-credibility sources on the network.
"This experiment suggests that the elimination of bots from social networks would significantly reduce the amount of misinformation on these networks," Menczer said.
Although their analysis focused on Twitter, the researchers stressed that other social networks such as Snapchat and WhatsApp are also vulnerable to manipulation.
To combat misinformation, the researchers said, companies should improve their algorithms to automatically detect bots and add a "human in the loop" to reduce the volume of automated messages in the system.
For example, users might be required to complete a 'Captcha' to send a message, they suggested.