Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The chart presents the share of reported content removed by Facebook, based on data collected by the International Network Against Cyber Hate. The report found that Facebook's monthly removal rate varied widely in 2017, peaking in August (80 percent) and reaching its lowest point in May (around 40 percent). Overall, Facebook's removal rate trended slightly upward over the year.
In the second quarter of 2025, Facebook took action on 687 million fake accounts, down from one billion in the previous quarter. A record figure of approximately 2.2 billion fake profiles was removed by the social media platform in the first quarter of 2019. Meta considers fake accounts to be those created with malicious intent or created to represent a business, organization, or non-human entity.

Facebook and inauthentic activity
As Facebook is the most used social media platform worldwide, it is not surprising that the service is a target for inauthentic activity and potentially harmful content. Facebook's parent company, Meta, maintains regulations known as the Facebook Community Standards that outline what is and is not permitted on the network. Spam is an ongoing issue for the platform, with 1.4 million pieces of spam content removed in the third quarter of 2022.

Facebook's ongoing popularity
The vast majority of internet users are aware of Facebook as a brand. Almost all social media users in the United States are aware of Facebook, and three-quarters of U.S. social media users have a Facebook account. Furthermore, despite the perception that Facebook is most popular among older generations, its largest U.S. demographic is users aged 25 to 34 years.
The Indian government made about 92 thousand requests for content removal from Facebook between July and December 2023, the highest number recorded since 2013. Most of these were legal process requests, while a small share were emergency disclosure requests. The platform complied, at least in part, with over 72 percent of the requests made during that period.
During the second quarter of 2025, Facebook removed three million pieces of hate speech content, down from 3.4 million in the previous quarter. Between April and June 2021, the social network removed a record number of over 31 million pieces of hate speech. Bullying and harassment content is also present on Facebook.
Abstract: In March 2020, shortly after the World Health Organisation declared COVID-19 a global pandemic, Facebook (the company has since rebranded as Meta) announced steps to stop the spread of COVID-19 and vaccine-related misinformation. This entailed identifying and removing false and misleading content that could contribute to "imminent physical harm". For other types of misinformation, the company's fact-checking network was mobilised and automated moderation systems were ramped up to "reduce its distribution". In this paper we ask how effective this approach has been in stopping the spread of COVID-19 vaccine misinformation in the Australian social media landscape. To address this question, we analyse the performance of 18 Australian right-wing and anti-vaccination Facebook pages, posts, and commenting sections collected over two years until July 2021. We use CrowdTangle's engagement metrics and time series analysis to map key policy announcements (between January 2020 and July 2021) against page performance. This is combined with a content analysis of comments parsed from two pages, and a selection of posts that continued to overperform during this timeframe. The results show that the suppression strategy was partially effective, in that many previously high-performing pages declined steadily between 2019 and 2021. Nonetheless, some pages not only slipped through the net but overperformed, showing the strategy to be light-touch, selective, and inconsistent. The content analysis shows that labelling, fact-checking, and shadowbanning responses were resisted by the user community, who employed a range of avoidance tactics to stay engaged on the platform while also migrating some conversations to less moderated platforms.