Saturday, 20 January 2018

Facebook to fight fake news by boosting “trustworthy” sites

AdWeek

In a big step to tackle 'fake news', Facebook announced on Friday that it will survey its users in order to identify news sources that are "trustworthy, informative and local." Sites that receive positive endorsements will be more likely to appear in users' News Feeds, while those deemed unreliable, uninformative or non-local will appear less often. 

The change will be rolled out in the US next week. It comes a year after Facebook launched a five-step programme to crack down on misinformation. 

Meanwhile, no changes were announced to Facebook's use of algorithms, which will continue to filter news sources for its users to some extent. It also announced no plans to tackle other sources of false or misleading information on the site, such as memes or user-generated content. 

"There's too much sensationalism, misinformation and polarization in the world today," Facebook founder Mark Zuckerberg said. "Social media enables people to spread information faster than ever before, and if we don't specifically tackle these problems, then we end up amplifying them."

Last week, Twitter took steps to combat fake news, informing over 600,000 users that they had been sharing or consuming false information that was generated by Russian bots. 

Facebook's latest development follows an announcement last week that it plans to reduce the overall amount of news that users see on the site. "After this change, we expect news to make up roughly 4% of News Feed -- down from roughly 5% today," Zuckerberg said. 

Giving users the power to decide which news is 'fake' and which is not is an important safeguard for democratic choice, Zuckerberg said. He also argued that it was the "most objective" way to solve the problem, and better than handing the decision to unaccountable experts or Facebook executives. In the UK, most Facebook users are over the voting age of 18, so the people being surveyed largely overlap with the electorate.  

The move is likely to benefit established news outlets with strong reputations, such as Reuters and the Economist. According to the Trusted News Project, people tend not to trust partisan sources like Breitbart and Occupy Democrats, so such outlets are likely to see their visibility reduced. 

Trusted News Project survey 2017 results. Source: Visual Capitalist

There have always been dangers in allowing users to consciously filter content. The main one is that it could exacerbate the "echo chamber" effect, which came into popular parlance during the 2016 US election. 

Yet Zuckerberg claimed that there were plans in the survey initiative to stop this from happening. "The idea is that some news organizations are only trusted by their readers or watchers, and others are broadly trusted across society even by those who don't follow them directly," he said. "We eliminate from the sample those who aren't familiar with a source, so the output is a ratio of those who trust the source to those who are familiar with it." 

There is also the possibility that partisan groups could organise to discredit sources that are actually reliable, much as an open online poll was gamed in 2016 when the public voted to name a British research vessel 'Boaty McBoatface'. 

The term 'fake news' gained popularity during the 2016 election, when it was used chiefly to describe Russian-linked sites and accounts that were allegedly spreading false information favouring Trump. The term was later adopted by Trump himself, who used it to dismiss mainstream news services such as CNN and the BBC. A well-known example of genuinely fabricated news is the so-called 'Pizzagate' affair, in which a false story claimed that Hillary Clinton was linked to a child sex-trafficking ring run from a Washington, DC pizza restaurant; the story led an armed man to open fire inside the restaurant while trying to "investigate" the claims. 
