Facebook, Aiming for Transparency, Details Removal of Posts and Fake Accounts

"We have a lot of work still to do to prevent abuse", Facebook Product Management vice president Guy Rosen said. "Accountable to the community".

The company has a policy of removing content that glorifies the suffering of others.

Facebook said it removed 2.5 million pieces of content deemed unacceptable hate speech during the first three months of this year, up from 1.6 million during the previous quarter.

Facebook has published, for the first time, a report with internal statistics about its enforcement efforts to combat violations of its content standards.

Facebook says it will continue to provide updated numbers every six months.

The report did not directly cover the spread of false news, which Facebook has previously said it was trying to stamp out by increasing transparency around who buys political ads, strengthening enforcement and making it harder for so-called "clickbait" to show up in users' feeds.

The platform also revealed how much content its automated systems were picking up and how much was reported by users. For some violations, Facebook said, the rate of automated detection is high, meaning the company finds and flags most of that content before users report it.

Using new artificial-intelligence-based technology, Facebook can find and moderate content more quickly and effectively than human reviewers alone, at least when it comes to detecting fake accounts or spam. Almost all such content was dealt with before any user raised an alert, the company said. For the most part, Facebook has not provided further details on its hiring plan for reviewers, including how many will be full-time employees and how many will be contractors. Its continued reliance on human review is a problem perhaps most salient in non-English-speaking countries.

Graphic violence: During Q1, Facebook took action against 3.4 million pieces of content for graphic violence, up 183% from 1.2 million during Q4. During Q1, Facebook found and flagged 85.6% of such content it took action on before users reported it, up from 71.6% in Q4.

"We took down or applied warning labels to about three and a half million pieces of violent content in Q1 2018, 86 per cent of which was identified by our technology before it was reported to Facebook".

Improved detection technology also helped Facebook take action against 1.9 million pieces of content containing terrorist propaganda, a 73 percent increase over the previous quarter. Most recently, the scandal involving digital consultancy Cambridge Analytica, which allegedly improperly accessed the data of up to 87 million Facebook users, put the company's content moderation into the spotlight.

Facebook said that for every 10,000 content views, an average of 22 to 27 contained graphic violence, up from 16 to 19 in the previous quarter, a rise the company attributed to the growing volume of graphic content being shared on Facebook. "Hate speech content often requires detailed scrutiny by our trained reviewers to understand context", the report explains, "and decide whether the material violates standards, so we tend to find and flag less of it". The renewed attempt at transparency is a promising start for a company that has come under fire for allowing its social network to host all kinds of offensive content. Facebook says it found and flagged almost 100% of spam content in both Q1 and Q4.

The firm also disabled about 583 million fake accounts, most of them within minutes of registration.