Twitter admits its algorithm disproportionately amplifies right-wing political content
Twitter’s own research has found that its algorithm disproportionately amplifies right-wing political content and news sources, in an analysis involving millions of its users across seven countries.
The social media company’s researchers are unsure why that is happening. India was not among the seven countries studied.
The social media platform examined tweets from elected officials in seven countries – the UK, US, Canada, France, Germany, Spain and Japan. According to a 27-page document, Twitter found a “statistically significant difference favouring the political right wing” in all the countries except Germany. In the study, a value of 0% meant tweets reached the same number of users on the algorithm-tailored timeline as on its chronological counterpart, whereas a value of 100% meant tweets achieved double the reach.
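The amplification metric described above is simple enough to express in code. The sketch below is illustrative, not Twitter’s actual methodology; the function name and the reach figures are hypothetical.

```python
def amplification_pct(algo_reach: int, chrono_reach: int) -> float:
    """Percent amplification as defined in the article:
    0% = same reach on the algorithmic timeline as on the
    chronological one; 100% = double the reach."""
    return (algo_reach / chrono_reach - 1) * 100

# Hypothetical example: a tweet seen by 3,000 users on the
# algorithmic timeline but 2,000 on the chronological one
# works out to 50% amplification.
print(amplification_pct(3000, 2000))  # 50.0
```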
Twitter’s new study, co-authored by six researchers and released by the company this week, found that its algorithm amplifies right-wing political narratives. “Our results reveal a remarkably consistent trend: In 6 out of 7 countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left. Consistent with this overall trend, our second set of findings studying the US media landscape revealed that algorithmic amplification favours right-leaning news sources,” said the study, which involved academics from the University of Cambridge, UC Berkeley and University College London.
The study looked at millions of tweets from politicians across seven countries, as well as hundreds of millions of tweets containing links to news outlets – not tweets from the outlets themselves.
In a series of tweets detailing the findings, Rumman Chowdhury, Twitter’s director of software engineering, said: “Twitter is a sociotechnical system — our algos are responsive to what’s happening.” “What’s next is a root cause analysis — is this unintended model bias? Or is this a function of what and how people tweet, and things that are happening in the world? Or both?” she added in another tweet.
A blog post (https://twitter.com/ruchowdh/status/1451248122605146112?s=20) by Chowdhury and Luca Belli, a Twitter researcher, said the findings could be “problematic” and that more study needed to be done. The post acknowledged that it would be problematic if certain kinds of tweets received preferential treatment as a result of the way users interacted with the algorithm tailoring their timeline.
“Algorithmic amplification is problematic if there is preferential treatment as a function of how the algorithm is constructed versus the interactions people have with it. Further root cause analysis is required in order to determine what, if any, changes are required to reduce adverse impacts by our Home timeline algorithm,” the post said.
The company also looked at whether algorithms amplify far-left and far-right political groups more than moderate ones: contrary to prevailing public belief, it found no evidence to support this hypothesis. Twitter used definitions from researchers outside the company and did not decide for itself which news outlets and politicians are “right-leaning” or belong to right-wing political parties.
Chowdhury added that tweets about political content from elected officials, regardless of party or whether the party is in power, do see algorithmic amplification when compared to political content on the reverse chronological timeline.
Twitter doesn't yet know what causes certain content to be amplified. "When algorithms get put out into the world, what happens when people interact with it, we can't model for that. We can't model for how individuals or groups of people will use Twitter, what will happen in the world in a way that will impact how people use Twitter," emphasised Chowdhury. Twitter’s algorithms cannot simply be opened up and examined for biases, and the home feed isn't run by just one algorithm; it's a system of algorithms working together, creating “system-level complexity”.
Twitter's META team – made up of Rumman Chowdhury; Luca Belli, a machine learning engineer at Twitter; Conor O’Brien, research engineer at Twitter; Jutta Williams, Twitter product manager; Sofia Ira Ktena, engineer formerly with Twitter and currently with DeepMind; Andrew Schlaikjer, engineer at Twitter; Kristian Lum, a University of Pennsylvania professor; Sarah Roberts, co-director of the Center for Critical Internet Inquiry at UCLA; Ferenc Huszar, associate professor in machine learning at Cambridge; and Moritz Hardt, assistant professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley – promised to share its research with the public as one of Twitter's 2021 priorities.
In an interview with Hindustan Times, Huszar said the patterns observed could be the result of several contributing factors. Differences may arise, for example, from people with different political interests simply using Twitter differently: some communities might use the retweet, like or reply functions more, or attach a slightly different relevance to each of these actions.
Experts have recently called on social media companies such as Twitter and Facebook to share access with academic researchers to get to the bottom of how much harm their technologies may be causing. These concerns stemmed from a series of controversies, most prominently involving Facebook.