5 datasets found
  1. Trend in Removal Rates on Facebook Based on the Moving Averages of...

    • evidencehub.net
    json
    Updated Feb 11, 2022
    Cite
    Berecz, Tamás and Charlotte Devinat. The State of Cyber Hate (Brussels: International Network Against Cyber Hate, 2018) (2022). Trend in Removal Rates on Facebook Based on the Moving Averages of Percentage of Removed Cases (2017) [Dataset]. https://evidencehub.net/chart/trend-in-removal-rates-on-facebook-based-on-the-moving-averages-of-percentage-of-removed-cases-2017-6.0
    Explore at:
    Available download formats: json
    Dataset updated
    Feb 11, 2022
    Dataset provided by
    The Lisbon Council
    Authors
    Berecz, Tamás and Charlotte Devinat. The State of Cyber Hate (Brussels: International Network Against Cyber Hate, 2018)
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Measurement technique
    Data collection
    Description

    The chart presents the share of reported content that Facebook removed, based on data collected by the International Network Against Cyber Hate. The report found that Facebook's monthly removal rate varied widely in 2017, peaking in August (80 percent) and bottoming out in May (around 40 percent). Overall, the removal rate trended slightly upward over the year.
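    As a point of reference, the sketch below shows how such a moving-average trend can be computed from monthly removal percentages. The monthly values and the three-month window are illustrative assumptions, not figures taken from the dataset.

        import pandas as pd

        # Illustrative monthly removal rates for 2017 (percent of reported cases
        # removed); placeholder values, not the figures behind the chart.
        removal_rate = pd.Series(
            [55, 50, 48, 45, 40, 52, 60, 80, 70, 65, 68, 72],
            index=pd.period_range("2017-01", periods=12, freq="M"),
        )

        # Three-month simple moving average, analogous to the "moving averages of
        # percentage of removed cases" referenced in the chart title.
        trend = removal_rate.rolling(window=3, min_periods=1).mean()
        print(trend.round(1))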

  2. Facebook: fake account removal as of Q2 2025

    • statista.com
    Updated Nov 25, 2025
    Cite
    Statista (2025). Facebook: fake account removal as of Q2 2025 [Dataset]. https://www.statista.com/statistics/1013474/facebook-fake-account-removal-quarter/
    Explore at:
    Dataset updated
    Nov 25, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    Worldwide
    Description

    In the second quarter of 2025, Facebook took action on 687 million fake accounts, down from one billion in the previous quarter. A record of approximately 2.2 billion fake profiles was removed by the platform in the first quarter of 2019. Meta defines fake accounts as those created with malicious intent or created to represent a business, organization, or non-human entity.

    Facebook and inauthentic activity: As Facebook is the most used social media platform worldwide, it is not surprising that the service is a target for inauthentic activity and potentially harmful content. Facebook's parent company, Meta, maintains regulations known as the Facebook Community Standards that outline what is and is not permitted on the network. Spam remains an ongoing issue for the platform, with 1.4 million pieces of spam content removed in the third quarter of 2022.

    Facebook's ongoing popularity: The vast majority of internet users are aware of Facebook as a brand. Almost all social media users in the United States know of Facebook, and three-quarters of U.S. social media users have a Facebook account. Furthermore, despite the perception that Facebook is most popular among older generations, its largest U.S. demographic is users aged 25 to 34.

  3. Government requests for content removal from Facebook India H1 2016-H2 2023

    • statista.com
    Cite
    Statista, Government requests for content removal from Facebook India H1 2016-H2 2023 [Dataset]. https://www.statista.com/statistics/1219116/india-requests-for-content-removal-from-facebook/
    Explore at:
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    India
    Description

    The Indian government made about 92 thousand requests for content removal from Facebook between July and December 2023, the highest number of requests recorded since 2013. Most of these were legal process requests, while a small share were emergency disclosure requests. The platform complied, at least to some extent, with over 72 percent of the requests made that year.

  4. Facebook: hate speech content removal as of 2017-2025

    • statista.com
    Cite
    Statista, Facebook: hate speech content removal as of 2017-2025 [Dataset]. https://www.statista.com/statistics/1013804/facebook-hate-speech-content-deletion-quarter/
    Explore at:
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    Worldwide
    Description

    During the second quarter of 2025, Facebook removed three million pieces of hate speech content, down from 3.4 million in the previous quarter. Between April and June 2021, the social network removed a record number of over 31 million pieces of hate speech. Bullying and harassment content is also present on Facebook.

  5. Replication Data for: You’ve been shadowbanned: Has Facebook’s strategy to...

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Dec 16, 2023
    Cite
    Johns, Amelia; Bailo, Francesco; Booth, Emily; Rizoiu, Marian-Andrei (2023). Replication Data for: You’ve been shadowbanned: Has Facebook’s strategy to suppress rather than remove COVID-19 vaccine misinformation actually slowed the spread? [Dataset]. http://doi.org/10.7910/DVN/A9RNBS
    Explore at:
    Dataset updated
    Dec 16, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Johns, Amelia; Bailo, Francesco; Booth, Emily; Rizoiu, Marian-Andrei
    Description

    Abstract: In March 2020, shortly after the World Health Organisation declared COVID-19 a global pandemic, Facebook (the company has since been rebranded as Meta) announced steps to stop the spread of COVID-19 and vaccine-related misinformation. This entailed identifying and removing false and misleading content that could contribute to “imminent physical harm”. For other types of misinformation, the company’s fact-checking network was mobilised and automated moderation systems were ramped up to “reduce its distribution”. In this paper we ask how effective this approach has been in stopping the spread of COVID-19 vaccine misinformation in the Australian social media landscape. To address this question we analyse the performance of 18 Australian right-wing and anti-vaccination Facebook pages, posts and commenting sections collected over two years until July 2021. We use CrowdTangle’s engagement metrics and time series analysis to map key policy announcements (between January 2020 and July 2021) against page performance. This is combined with a content analysis of comments parsed from two pages, and of a selection of posts that continued to overperform during this timeframe. The results showed that the suppression strategy was partially effective, in that many previously high-performing pages declined steadily between 2019 and 2021. Nonetheless, some pages not only slipped through the net but overperformed, showing this strategy to be light-touch, selective and inconsistent. The content analysis shows that the labelling and fact-checking of content and the shadowbanning responses were resisted by the user community, who employed a range of avoidance tactics to stay engaged on the platform while also migrating some conversations to less moderated platforms.
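    For orientation only, here is a rough sketch of the kind of time-series mapping the abstract describes: aggregating post-level engagement into a weekly series per page and marking policy announcement dates against it. The file name, column names and announcement dates are assumptions for illustration, not part of the replication data.

        import pandas as pd
        import matplotlib.pyplot as plt

        # Hypothetical CrowdTangle-style post export; the file and the "page",
        # "date" and "interactions" columns are assumed for this sketch.
        posts = pd.read_csv("page_posts.csv", parse_dates=["date"])

        # Aggregate post engagement into a weekly series per page.
        weekly = (
            posts.groupby(["page", pd.Grouper(key="date", freq="W")])["interactions"]
            .sum()
            .unstack("page")
        )

        # Placeholder policy announcement dates (not the ones used in the paper).
        announcements = pd.to_datetime(["2020-03-25", "2021-02-08"])

        fig, ax = plt.subplots(figsize=(10, 4))
        for page in weekly.columns:
            ax.plot(weekly.index, weekly[page], label=page)
        for d in announcements:
            ax.axvline(d, linestyle="--", color="grey")  # mark announcements
        ax.set_title("Weekly engagement per page")
        ax.legend()
        plt.tight_layout()
        plt.show()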
