Facebook has had community standards and a dedicated safety team for years now, but has previously been reluctant to share exactly how guidelines are enforced and how much content is actually removed.
Improved detection technology also helped Facebook take action against 1.9 million posts containing terrorist propaganda, a 73 percent increase.
Rosen went on to reveal that Facebook removed 583 million fake accounts between January and March of this year.
Zuckerberg noted that there is still room for improvement in Facebook's AI tools, particularly in flagging hate speech. Hate speech is hard to flag automatically because it "often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards", according to the report.
The content audited included graphic violence, hate speech, adult nudity and sexual activity, spam, terrorist propaganda (ISIS, al-Qaeda and affiliates), and fake accounts.
The problem is that, as Facebook's VP of product management Guy Rosen wrote in the blog post announcing today's report, AI systems are still years away from becoming effective enough to be relied upon to catch most bad content.
After admitting that it let Cambridge Analytica use its network to grab unwitting users' data, Facebook has been on thin ice with both consumers and government officials.
Vice-President of Data Analytics Alex Schultz told the Guardian that Facebook just needs to figure out how to categorize those things for the report.
Similarly, the company has removed 837 million pieces of spam.
Getting rid of racist, sexist and other hateful remarks on Facebook is more challenging than weeding out other types of unacceptable posts because computer programs still stumble over the nuances of human language, the company revealed Tuesday.
The company said most of the increase was the result of improvements in detection technology. For instance, Alex Schultz, the company's vice-president of data analytics, said the amount of content moderated for graphic violence nearly tripled quarter-on-quarter.
Facebook said it was able to remove so many accounts and posts that violated its community standards thanks to its AI tools.
"We took down or applied warning labels to about three and a half million pieces of violent content in Q1 2018, 86 per cent of which was identified by our technology before it was reported to Facebook". Facebook also stops millions of fake accounts from even signing up for its service each day. However, when it came to hate speech, its technology flagged only around 38 percent of the posts it took action on, and Facebook notes it has more work to do there. Cleaning up the social network is an important factor in the calculus of Facebook's ad-heavy business moving forward.
The company did not provide a figure for how many views the violating content received before it was taken down, saying only that it was "extremely low".