We are doing everything to curb misinformation and fake news: Facebook

Since the 2016 US presidential election, social networking sites have acknowledged the problem of fake news, as well as their role in spreading it. Companies like Facebook and Twitter have instituted a number of measures aimed at stemming the spread of misinformation and disincentivizing those who spread it.

“Reducing the distribution of misinformation—rather than removing it outright—strikes the right balance between free expression and a safe and authentic community,” a Facebook spokesperson said in a statement to CNBC. “There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down. We will begin implementing the policy during the coming months.”

With over 290 million users, India, which goes to the polls in just a few months, is one of Facebook’s biggest user bases.

So the company now wants to explain why certain accounts and posts are deleted, sometimes seemingly at random. In May this year, Facebook published a 27-page document explaining how it determines whether a post should be taken down.

Sheen Handoo, public policy manager at Facebook, spoke to Quartz about the complexities of identifying misinformation, and why fake news remains a steep mountain to climb:

Which important areas do Facebook’s community standards cover? How do you determine whether a post should be taken down?

We don’t allow any form of violence on our platform. Among other things, we remove references to dangerous organisations such as terrorist groups. Any reference to or depiction of suicide and self-injury is also not allowed. Similarly, we remove objectionable content such as hate speech, nudity, pornography, and graphic violence. Our policies are consistent across regions because we don’t want to muzzle speech in one region and allow it in another.

Do you depend on technology to identify the nature of content online?

We use a combination of technology and an internal review team to identify violating content on the platform. We have teams in New Delhi, Singapore, Dublin, Washington, and San Francisco, and we are ramping up to build teams in other parts of the world as well. At the end of 2017, we had 10,000 people working on these teams, and we want to have at least 20,000 by the end of 2018. We use artificial intelligence (AI) and machine-learning tools to enhance human performance.

What would you do in a scenario where a public figure is found violating your content policy?

If a public figure posts violating content, such as a violent threat, on their profile, we will take it down. But mostly, these statements appear as part of news reports on what the person has said. That is fine because the content is being used in context, for reportage. We have taken down the profiles of a few politically prominent people in the past, such as Myanmar military accounts.
