Facebook Slammed for New System to Rate Users' Trustworthiness
Facebook has revealed that it will start rating its users on a hidden trustworthiness scale, as part of what it says is its battle against "fake news."
Facebook product manager Tessa Lyons told The Washington Post in an interview on Tuesday that the ratings system will seek to identify users who attempt to game the system by repeatedly reporting content they disagree with as untrue.
It's "not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they're intentionally trying to target a particular publisher," said Lyons.
Lyons insisted that the trustworthiness score is not meant to be an absolute indicator of a person's credibility, but rather a behavioral signal that helps the Facebook team monitor users who continually flag content as problematic.
"For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person's future false news feedback more than someone who indiscriminately provides false news feedback on lots of articles, including ones that end up being rated as true," Lyons explained.
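Lyons's description suggests a simple reliability weighting: users whose past reports were confirmed by fact-checkers count for more than indiscriminate flaggers. As a rough, hypothetical sketch of that idea (Facebook's actual model is undisclosed; every name and formula below is an illustrative assumption, not the company's method):

```python
# Hypothetical illustration of reliability-weighted flagging.
# This is NOT Facebook's actual algorithm, which is not public;
# the function names and the smoothing formula are assumptions.

def report_weight(confirmed_reports: int, total_reports: int) -> float:
    """Weight a user's future 'false news' reports by how often their
    past reports were confirmed false by fact-checkers
    (Laplace-smoothed so new users start near 0.5)."""
    return (confirmed_reports + 1) / (total_reports + 2)

def flag_score(reporters: list[tuple[int, int]]) -> float:
    """Total weighted signal for an article, summing the weight of
    each user who flagged it. Each reporter is a pair of
    (confirmed_reports, total_reports)."""
    return sum(report_weight(c, t) for c, t in reporters)

# A user whose reports are usually confirmed (9 of 10) carries far
# more weight than one who flags everything (5 of 100 confirmed):
careful = report_weight(9, 10)          # ~0.83
indiscriminate = report_weight(5, 100)  # ~0.06
```

Under a scheme like this, the "score" is never shown to users; it only scales how much their future reports move an article toward fact-checker review, which matches Lyons's framing of the tool as an internal signal rather than a public rating.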
Several experts have spoken out about their unease at this new tool, given that users will have no way of knowing their score or disputing it.
"Not knowing how [Facebook is] judging us is what makes us uncomfortable," said Claire Wardle, director of fact-checking company First Draft. "But the irony is that they can't tell us how they are judging us — because if they do the algorithms that they built will be gamed."
In a follow-up statement to BBC News on Tuesday, the social media giant took issue with the characterization of its new tool as a "reputation" checker, however.
"The idea that we have a centralized 'reputation' score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading," a spokeswoman insisted.
"What we're actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system.
"The reason we do this is to make sure that our fight against misinformation is as effective as possible."
Bernie Hogan of the Oxford Internet Institute countered that Facebook's methodology still raises concerns.
"Consider the analogy of one's credit score. You can check your credit score for free in many countries — by contrast, Facebook's trustworthiness [score] is unregulated and we have no way to know either what our score is or how to dispute it," Hogan argued.
"Facebook is not a neutral actor and despite any diplomatic press materials to the contrary, it is intent on managing a population for profit."
Ailidh Callander, a solicitor and civil rights campaigner at Privacy International, added that this is "yet another example of Facebook using people's data in ways they would not expect their data to be used, which further undermines people's trust in Facebook."
"Facebook simply must learn some hard lessons, and start being transparent and accountable about how they use people's data to profile and take decisions," Callander added.
When it comes to accusations of bias, conservatives have repeatedly argued that Facebook targets them by shutting down their pages, though often only temporarily and with the company later admitting error.
In April, Facebook CEO Mark Zuckerberg was grilled during a joint hearing of the Senate Commerce and Judiciary committees about what Republican Senator Ted Cruz of Texas called "a pervasive pattern of bias and political censorship."
Zuckerberg admitted at the time that Silicon Valley's tech industry "is an extremely left-leaning place," and added:
"This is actually a concern that I have and that I try to root out in the company, is making sure that we do not have any bias in the work that we do, and I think it is a fair concern that people would at least wonder about."