2 datasets found
  1. The Number of Videos Removed by YouTube, by Source of First Detection

    • evidencehub.net
    json
    Updated Jun 14, 2022
    Cite
    Google. Transparency Report: YouTube Community Guidelines Enforcement (www.google.com, 2022) (2022). The Number of Videos Removed by YouTube, by Source of First Detection [Dataset]. https://evidencehub.net/chart/the-number-of-videos-removed-by-youtube-by-source-of-first-detection-279.0
    Explore at:
    jsonAvailable download formats
    Dataset updated
    Jun 14, 2022
    Dataset provided by
    The Lisbon Council
    Authors
    Google. Transparency Report: YouTube Community Guidelines Enforcement (www.google.com, 2022)
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    YouTube
    Measurement technique
    Self-reporting
    Description

    The chart shows the number of videos removed by YouTube for the period October 2017-March 2022, by first source of detection (automated flagging or human detection). Flags from human detection can come from a user or from a member of YouTube’s Trusted Flagger program, which includes individuals, NGOs, and government agencies. The chart shows that the number of videos first flagged automatically is significantly higher than the number first detected by humans. Within human detection, the largest number of removed videos were first noticed by users, followed by individual trusted flaggers, NGOs, and government agencies.

  2. Removal and Enforcement Actions by Social Media Companies: Year and Month...

    • dataful.in
    Updated Feb 13, 2026
    Cite
    Dataful (Factly) (2026). Removal and Enforcement Actions by Social Media Companies: Year and Month wise Number of Content Removed and Accounts Banned/Suspended by SSMIs and Violation Category [Dataset]. https://dataful.in/datasets/18652
    Explore at:
    csv, application/x-parquet, xlsxAvailable download formats
    Dataset updated
    Feb 13, 2026
    Dataset authored and provided by
    Dataful (Factly)
    License

    https://dataful.in/terms-and-conditions

    Area covered
    India
    Variables measured
    Social Media Intermediaries Ban actions
    Description

    High Frequency Indicator: This dataset presents year- and month-wise enforcement actions taken by Significant Social Media Intermediaries (SSMIs) from 2021 to the present, compiled from the mandatory monthly transparency reports published under Rule 4(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. It includes counts of content removed, accounts suspended or banned, and chatrooms, comments, edit profiles, and livestreams restricted, along with the policy or violation category (e.g., child sexual exploitation, terrorism, hate speech, bullying, violence, regulated goods, misinformation).
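As a sketch of how such a table might be aggregated once downloaded in CSV form — the column names (`year`, `platform`, `violation_category`, `count`) and the sample rows below are assumptions for illustration, not the dataset's actual schema:

```python
import csv
import io
from collections import defaultdict

# Synthetic rows mimicking an ASSUMED schema; the real column names may differ.
raw = """year,platform,violation_category,count
2021,Facebook,Hate Speech,1200
2021,X/Twitter,Terrorism,300
2022,Facebook,Hate Speech,1500
"""

# Sum enforcement counts per platform across all years and categories.
totals = defaultdict(int)
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["platform"]] += int(row["count"])

print(dict(totals))  # {'Facebook': 2700, 'X/Twitter': 300}
```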

    To enable comparability across platforms with different reporting terms, the dataset uses a standardised enforcement classification:

    1. enforcement_type: The type of action taken:
       a. Content Actioned (any enforcement, such as warning, downranking, or age-gating)
       b. Content Removed (content deleted or made inaccessible)
       c. Account Banned (account suspension or disabling)
       d. Quality Metric (AI moderation accuracy indicators reported by some platforms)

    2. proactive_flag: Whether the platform identified and enforced before user reports:
       a. Proactive = found via automated detection or internal review systems
       b. Unknown = platform did not specify proactive vs reactive
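The two classification fields above can be sketched as enumerations. The member names and example record are illustrative; the exact string encodings used in the files are an assumption based on the labels in the description:

```python
from enum import Enum

# Standardised enforcement_type values (labels taken from the dataset
# description; the exact encodings in the files are an assumption).
class EnforcementType(Enum):
    CONTENT_ACTIONED = "Content Actioned"  # any enforcement: warning, downranking, age-gating
    CONTENT_REMOVED = "Content Removed"    # content deleted or made inaccessible
    ACCOUNT_BANNED = "Account Banned"      # account suspension or disabling
    QUALITY_METRIC = "Quality Metric"      # AI moderation accuracy indicators

class ProactiveFlag(Enum):
    PROACTIVE = "Proactive"  # found via automated detection or internal review
    UNKNOWN = "Unknown"      # platform did not specify proactive vs reactive

# Example: a single normalised record (field values are hypothetical).
record = {
    "enforcement_type": EnforcementType.CONTENT_REMOVED,
    "proactive_flag": ProactiveFlag.PROACTIVE,
}
print(record["enforcement_type"].value)  # Content Removed
```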

    Notes:

    SSMI denotes Significant Social Media Intermediaries: platforms with over 50,00,000 (5 million) registered users in India that primarily or solely enable online interaction between two or more users and allow them to create, upload, share, disseminate, modify, or access information using their services.

    1. Facebook, Instagram, and Threads (Meta) a. Content Actioned counts any enforcement, not only removals (e.g., removals, warning screens/covering, age gates, downranking). b. Proactive Rate = (items found & actioned proactively) ÷ (total content actioned).

    2. X/Twitter a. Child Sexual Exploitation and terrorism suspensions are largely proactive, flagged using proprietary tools and industry hash-sharing systems. b. Data reflects global enforcement, not only India.

    3. Google / YouTube a. Number of removal actions as a result of automated detection captures actions triggered by automated systems (ML + human-trained models).

    4. ShareChat a. Content Removed / Taken Down / UGC discard / Comments/Chatrooms deleted are standardised as Content Removed. b. Also includes rights-holder reporting workflow for copyright/IP and automated proactive monitoring for harmful content.

    5. WhatsApp a. Reports Proactively Banned Accounts, meaning accounts banned before any user reports.

    6. Koo a. Distinguishes between Content Removed, Content Actioned (flagged/downranked), and Account Banned. b. Automation Correct/Wrong reflect AI moderation accuracy, not enforcement outcomes.
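The Proactive Rate defined in the Meta note above is a simple ratio. A minimal sketch, where the function name and the zero-denominator guard are illustrative choices:

```python
def proactive_rate(proactively_actioned: int, total_actioned: int) -> float:
    """Proactive Rate = (items found & actioned proactively) / (total content actioned)."""
    if total_actioned == 0:
        # Illustrative choice: define the rate as 0 when nothing was actioned.
        return 0.0
    return proactively_actioned / total_actioned

# e.g. 9,500 of 10,000 actioned items were found proactively.
print(proactive_rate(9_500, 10_000))  # 0.95
```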

