
Scale of Facebook’s abuse unveiled

Facebook removed 837 million pieces of spam in the first quarter of 2018, according to a new report released by the social media platform as it continues to respond to demands for increased transparency.

Published this week, the Community Standards Enforcement Report covers Facebook’s enforcement efforts from October 2017 to March 2018 in six key areas: spam, fake accounts, hate speech, adult nudity, graphic violence and terrorist propaganda.

“We believe that increased transparency tends to lead to increased accountability and responsibility over time,” said Guy Rosen, VP of product management, Facebook.

“This is the same data we use to measure our progress internally – and you can now see it to judge our progress for yourselves.”

Facebook’s largest enforcement concern is spam and the fake accounts that circulate it, with an estimated 3-4% of active accounts reportedly fake. However, the company states that in Q1 alone it disabled 583 million fake accounts, most within minutes of registration.
Some 21 million pieces of content depicting adult nudity and sexual activity were also removed during the first three months of the year, along with 3.5 million pieces of violent content.

However, the report also acknowledges the current limitations of AI in the enforcement process. Although the technology can accurately detect and remove spam and adult nudity nearly 100% of the time, it does not yet work well for tackling hate speech: just 38% of the 2.5 million pieces of hate speech removed in Q1 were detected by AI.

The data reflects comments made by CEO Mark Zuckerberg during his appearance before the US Congress last month, where he stated that it could take up to 10 years to develop technology sophisticated enough to properly combat Facebook’s content problems.

According to Rosen, Facebook’s AI is up against “sophisticated adversaries” – but the business is “investing heavily in more people and better technology to make Facebook safer for everyone”.

Some 7,500 people currently moderate Facebook content globally.

The release of the report is the latest in a string of announcements Facebook has made in response to the bad press it has suffered since the Cambridge Analytica scandal earlier this year.

This month alone, Facebook has announced plans to develop a new Clear History tool, as well as an update on its investigation into apps potentially misusing user data.
