Facebook’s hidden offensive: What happens when you have to make exceptions to your anti-hate policy

Facebook has helped bring the news to the masses, and now it’s turning to some of its users’ personal data to feed an algorithm.

A new, leaked document revealed Thursday by The New York Times shows the lengths to which Facebook’s engineers went to try to improve how frequently the social network restored posts deleted due to copyright and nudity concerns.

According to a partial copy of the company’s enforcement manual obtained by the Times, the algorithm Facebook uses to determine which posts to reinstate is built on Facebook’s own data: it learns from users’ other activity on the platform when the company cannot keep up with processing the content it has flagged for deletion.

“Repurposing a content removal option that people use and often trust” is the approach Facebook took to improve its effort to reinstate questionable content, according to the manual. “But with moderation as expensive as it is, we know we’re not going to get to everyone,” it added.

This fall, Facebook changed how quickly it took down posts after critics warned that the social network was using automated systems to delete accounts and stories without any human input.

The Times, citing the internal manual, reported that the company’s page views grew most from pages whose nudity or copyright-removal features the social network had disabled, while pages posting content such as job listings and announcements were taken down the most.

The existence of the document, which the Times said runs to more than 800 pages and amounts to a sweeping blueprint for policing the company’s data, casts a light on what the company is doing to address the criticism. The company has been grappling with whether Facebook’s massive reach has enabled meddling by foreign governments, such as Russia and China, that has affected elections around the world, including this year’s midterm elections.

Last month, Facebook removed or temporarily disabled more than 250 pages and accounts tied to Iran that allegedly conducted political influence operations on the network.

During the 2018 midterm elections, Facebook reduced the number of posts it removed as fake or likely to be used to manipulate elections from 2.4 million to 1.1 million, a significant drop in removals for misuse of the platform.

“We’re continuing to work to address the challenges we’ve seen,” Facebook Chief Executive Mark Zuckerberg said during the company’s most recent earnings call on Wednesday. “Our team is working extremely hard and has made tremendous progress in the past 12 months or so. We’ve seen over 150 million pieces of content be fixed and over 29 million people given the chance to help prevent problems by flagging hate speech and so on.”
