Is Facebook Doing Enough to Stop Bad Content? You Be the Judge

On Tuesday, the Menlo Park-based company published its first Community Standards Enforcement Report, sharing numbers that illustrate how well Facebook has been policing content on its platform.

Along with spam and fake accounts, Facebook said in its transparency report that in the first quarter of 2018 it removed 21 million pieces of content featuring adult nudity or sexual activity, 2.5 million pieces of hate speech and nearly 2 million items of terrorist propaganda from al-Qaida and ISIS.

By far the most prevalent offending categories were spam and fake accounts: in the first quarter of this year alone, Facebook removed 837 million pieces of spam and disabled 583 million fake accounts.

"Today, as we sit here, 99 percent of the ISIS and al-Qaida content that we take down on Facebook, our AI systems flag before any human sees it", Zuckerberg said at a congressional hearing in April. Facebook said more than 98 percent of the fake accounts were caught before users reported them. That is on top of the millions of fake account attempts the company says it blocks every day before they can even register.

Despite this, the company said fake profiles still make up 3 to 4 percent of all active accounts.

The posts that keep Facebook's reviewers busiest are those showing adult nudity or sexual activity - quite apart from child pornography, which is not covered by the report.

Facebook took down 3.4 million pieces of graphic violence during the first three months of this year, almost triple the 1.2 million during the previous three months. "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important", said Guy Rosen, Facebook's vice president of product management. While the company appears to be very proficient at removing nudity and terrorist propaganda, it lags behind when it comes to hate speech.

"Artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue", said Rosen. The company says it has 10,000 human moderators helping to remove objectionable content and plans to double that number by the end of the year. But the report also indicates Facebook has trouble detecting hate speech, becoming aware of the majority of it only when users report the problem.

"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too", wrote Rosen.

Facebook, the world's largest social media firm, has never previously released detailed data about the kinds of posts it takes down for violating its rules.