Last November we heard a lot about ‘fake news’ during the US elections. Twitter bots could have played a role, and it appears both political parties used bots – but one party won this bot war.
To begin with, what is a bot? A bot automates many tasks for you – for example, it picks a tweet, understands the context, and automates the response. The bot's algorithm has to be very good, especially the self-learning part of it.
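The pick / understand-context / respond loop above can be sketched in a few lines. This is a minimal illustration only – `Tweet`, `matches_topic`, and `compose_reply` are hypothetical stand-ins; a real bot would read its timeline through a platform API client and would use far more sophisticated (often self-learning) context models than a keyword check:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    text: str

def matches_topic(tweet: Tweet, keywords: set[str]) -> bool:
    """'Understands the context' – here reduced to a crude keyword check."""
    words = {w.lower().strip("#@.,!?") for w in tweet.text.split()}
    return bool(words & keywords)

def compose_reply(tweet: Tweet) -> str:
    """'Automates the response' – here a canned reply template."""
    return f"@{tweet.author} Interesting point about the election!"

def run_bot(timeline: list[Tweet], keywords: set[str]) -> list[str]:
    # Pick tweets from the timeline, filter by topic, generate replies.
    return [compose_reply(t) for t in timeline if matches_topic(t, keywords)]
```

Even this toy version shows why bots scale so cheaply: the whole loop is a filter plus a template, and running it across millions of tweets costs almost nothing.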
A few highlights from the USC report on Twitter bots used during the elections are listed below:
- Fake accounts flood Twitter.
- Twitter has become the main source of political news for many (hence it is the best platform to target to sway voters).
- 20 million election-related tweets were analysed for this study.
- Using the “Bot or Not” algorithm, nearly 3.8 million of the 20 million tweets (about 20%) were identified as having been generated by bots.
- Of the 2.8 million Twitter handles tweeting about politics, approximately 400,000 (14%) were identified as bots.
- “Political tweets” were those classified as matching any of 23 political keywords or hashtags, including #NeverHillary and #NeverTrump.
- Both human and bot tweets about Trump were almost uniformly positive compared with those about Clinton.
- Bots on social media influenced what people were learning about the candidates, particularly Clinton (source: Oxford).
- During the presidential debates, pro-Trump bots generated 7 tweets for every 1 pro-Clinton bot tweet; Trump clearly had a significantly higher number of bot supporters.
- By contrast, in Germany during its elections, the ratio of professional news to junk news shared by German Twitter users was 4 to 1 (source: Oxford).
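The study's keyword-and-hashtag labelling of "political tweets" is straightforward to sketch. The tag list below is illustrative only (the study used 23 keywords, not reproduced here), but the mechanism – tokenize the tweet and test for any overlap with the tag set – is what such a classifier amounts to:

```python
# Illustrative subset; the actual study matched 23 keywords/hashtags.
POLITICAL_TAGS = {"#neverhillary", "#nevertrump", "trump", "clinton"}

def is_political(tweet_text: str) -> bool:
    """Label a tweet 'political' if any token matches the tag set."""
    tokens = {tok.lower().rstrip(".,!?") for tok in tweet_text.split()}
    return not POLITICAL_TAGS.isdisjoint(tokens)
```

The simplicity cuts both ways: it scales to 20 million tweets trivially, but it also misses political content that avoids the chosen keywords – one reason bot and misinformation counts from such studies are lower bounds.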
Validating posted information has become the biggest challenge for Facebook and Twitter. The main aim of Twitter bots is not just to influence the reader but also to get mainstream media (print, TV) to pick up the (mis)information.
My Take On Misinformation On Social Media
It is important for social media platforms to weed out misinformation. Relying on crowdsourcing to identify fake news, bots, and misinformation is not a scalable model.
We saw during India's 2014 General Elections that tweets were very polarized; there is nothing wrong with taking sides, but the spread of misinformation was a big problem. Even well-read individuals at times cannot differentiate between real tweets, tweets by bots, and fake news.
I am not sure whether government policies will help control misinformation, or whether hefty fines will force social networks to take this problem seriously.
For now, if influential and trusted users can validate a few important pieces of news circulating on these social networks, it could help – but this is surely not a scalable model.
Established media houses publish news without checking the facts with the government or the related political party. This, I believe, is the biggest danger we face today. Before the media house apologises for the error (if it does at all), the damage is done. Possibly hefty government fines for spreading false news would be a deterrent.
- Russian propaganda generated 39,000 signatures (mostly from bots) on a petition demanding Alaska be given back to Russia.
- Political Bots – Project at Oxford on Algorithms, Computational Propaganda, and Digital Politics
- Dealing with Fake News
- Fact Check now available in Google Search and News around the world
- An update from Facebook on dealing with misinformation on their platform.