Notwithstanding major privacy and security issues during the 2016 presidential election and the problem of fake news on the platform, the report shows that Facebook seems to be doing a fairly good job of using its automated systems and human reviewers to keep the vast majority (often well over 90%) of hate speech, pornography, terrorist propaganda, fake accounts, spam, and graphic violence off its site.
Facebook pulled or slapped warnings on almost 30 million posts containing sexual or violent images, terrorist propaganda or hate speech in the first three months of 2018, the social media giant said Tuesday.
Facebook is struggling to catch much of the hateful content posted on its platform because the computer algorithms it uses to track such content down still require human assistance to judge context, the company said Tuesday. Its systems spotted almost 100 percent of spam and terrorist propaganda, almost 99 percent of fake accounts and around 96 percent of posts with adult nudity and sexual activity.
The company previously enforced its community standards by having users report violations, with trained staff then dealing with them.
While artificial intelligence is able to sort through nearly all spam and content glorifying al-Qaeda and ISIS, as well as most violent and sexually explicit content, it is not yet able to do the same for attacks on people based on personal attributes like race, ethnicity, religion, or sexual and gender identity, the company said in its first-ever Community Standards Enforcement Report.
The company took action on 21 million pieces of nude and sexual content during the period, unchanged from the final quarter of 2017. Hate speech, by contrast, proved far harder to catch: Facebook found and flagged just 38% of it before users reported it, by far the lowest detection rate among the six content types.
Nude and sexual content represented between 0.22 and 0.27 percent of the total content viewed by Facebook's more than two billion users from January through March.
Facebook took action on 2.5 million pieces of content for hate speech, but it doesn't have view figures because it is still "developing measurement methods for this violation type".
Now, however, artificial intelligence technology does much of that work. Facebook attributed the increase in enforcement to "improvements in our ability to find violating content using photo-detection technology, which detects both old content and newly posted content". The inaugural report was meant to "help our teams understand what is happening" on the site, the company said.
Facebook has faced a storm of criticism over what critics have called its failure to stop the spread of misleading or inflammatory information on its platform ahead of the 2016 U.S. presidential election and Britain's vote that year to leave the European Union. It estimates that, overall, 3-4% of its monthly active users are fake.
The company took action on 837 million pieces of spam, though again without view figures. Its automated tools worked particularly well for content such as fake accounts and spam: the company said it used them to find 98.5% of the fake accounts it shut down and "nearly 100%" of the spam.