77 datasets found
  1. U.S. adults worry about AI-generated deep fakes 2025, by gender

    • statista.com
    Updated Jul 7, 2025
    Cite
    Statista (2025). U.S. adults worry about AI-generated deep fakes 2025, by gender [Dataset]. https://www.statista.com/statistics/1470862/us-adults-fear-of-ai-deepfakes-by-gender/
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Mar 5, 2025 - Mar 7, 2025
    Area covered
    United States
    Description

    According to a survey conducted in March 2025, ** percent of adult female respondents in the United States expressed concerns about the spread of artificial intelligence (AI) video and audio deepfakes. Similarly, nearly ** percent of men shared this concern. In contrast, only *** percent of adult women and *** percent of adult men in the U.S. reported that they were not concerned at all.

  2. Deepfake AI Report

    • marketresearchforecast.com
    doc, pdf, ppt
    Updated Jan 31, 2025
    Cite
    Market Research Forecast (2025). Deepfake AI Report [Dataset]. https://www.marketresearchforecast.com/reports/deepfake-ai-15461
    Available download formats: ppt, doc, pdf
    Dataset authored and provided by
    Market Research Forecast
    License

    https://www.marketresearchforecast.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    Deepfake AI Market Analysis

    The global deepfake AI market is poised for significant growth, with a market size valued at USD XXX million in 2025 and projected to reach USD XXX million by 2033, exhibiting a CAGR of XX% over the 2025-2033 forecast period. Key drivers of this expansion include rising concerns over privacy and misinformation, the proliferation of social media, and the increasing availability of user data used for deepfake creation.

    Market Segments, Trends, and Restraints

    The deepfake AI market is segmented by type (software and service), application (finance and insurance, telecommunications, government and defense, health care, and others), and region. Software solutions currently dominate the market, driven by growing demand for advanced deepfake detection and protection technologies. Key trends include the emergence of deepfake-as-a-service (DaaS) models, the integration of AI and machine learning for enhanced deepfake detection, and increased regulatory scrutiny aimed at mitigating the risks associated with deepfake technology. However, concerns about ethical implications, legal liability, and the technical challenge of detecting highly sophisticated deepfakes pose potential restraints to market growth.

  3. Global consumers on spotting deepfake videos 2022

    • statista.com
    Updated Apr 10, 2024
    Cite
    Statista (2024). Global consumers on spotting deepfake videos 2022 [Dataset]. https://www.statista.com/statistics/1367702/global-consumers-detecting-deepfakes/
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    2022
    Area covered
    Worldwide
    Description

    Artificial intelligence-generated deepfakes are videos or photos that can be used to depict someone speaking or doing something that they did not actually say or do. Deepfakes are being used more frequently in cybercrime. A 2022 survey found that 57 percent of global consumers claimed they could detect a deepfake video, whilst 43 percent said they would not be able to tell the difference between a deepfake video and a real video.

  4. MIT focus group data on AI and deepfake technology

    • data.niaid.nih.gov
    • zenodo.org
    Updated Dec 10, 2024
    Cite
    Elena Denia (2024). MIT focus group data on AI and deepfake technology [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_11061770
    Dataset provided by
    Elena Denia
    John Durant
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Interview protocols, recordings, and transcripts of three focus groups to investigate the social perception of AI and deepfake technology at the Massachusetts Institute of Technology. The focus groups are described below:

    Focus Group #1 (engaged public): 12 participants in a 3-session Make A Fake class; the students were offered a full course refund in return for their participation in the study, which took place immediately following the final session of the class on Monday 27 February, 2023.

    Focus Group #2 (attentive public): 14 visitors to the MIT Museum who volunteered to participate in the discussion after being recruited in the museum itself. The activity was scheduled for the week following recruitment, Monday 24 April, 2023; as compensation for their involvement, participants were offered a refund of their museum admission fee and two additional tickets for another day.

    Focus Group #3 (nonattentive public): 13 pedestrians who were recruited with the help of 4 MIT volunteers working in the immediate environs of the Boston Public Library and the adjacent Prudential Center Shopping Mall. Participants were offered a $70 Amazon Gift Card in consideration for one hour of conversation on the same day of their recruitment, Saturday 27 May, 2023.

    NOTE: Recordings from different devices are attached to better capture the voices of each conversation (devices: MacBook Air and iPad Pro).

  5. DeepFake_PreProcessed_Image

    • kaggle.com
    Updated Dec 19, 2021
    Cite
    Vidit Agarwal (2021). DeepFake_PreProcessed_Image [Dataset]. https://www.kaggle.com/datasets/viditagarwal112/deepfake-preprocessed-image
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Vidit Agarwal
    Description

    This dataset consists of images produced by extracting the faces from the deepfake videos available at https://www.kaggle.com/competitions/deepfake-detection/data, generated by running the data pre-processing cell of the notebook at https://www.kaggle.com/viditagarwal112/deepfake-detection-inceptionv3.

  6. Replication Data for: 'Deepfakes: evolution and trends'

    • dataverse.csuc.cat
    • portalrecerca.udl.cat
    • +1 more
    csv
    Updated Jun 15, 2023
    Cite
    Roberto García; Rosa Gil; Jordi Virgili-Gomà; Juan-Miguel López-Gil (2023). Replication Data for: 'Deepfakes: evolution and trends' [Dataset]. http://doi.org/10.34810/data750
    Available download formats: csv (3834101)
    Dataset provided by
    CORA.Repositori de Dades de Recerca
    Authors
    Roberto García; Rosa Gil; Jordi Virgili-Gomà; Juan-Miguel López-Gil
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This study conducts research on deepfake technology evolution and trends based on a bibliometric analysis of the articles published on the topic, organized around six research questions: What are the main research areas of the articles on deepfakes? What are the main current topics in deepfakes research and how are they related? Which are the trends in deepfakes research? How do topics in deepfakes research change over time? Who is researching deepfakes? Who is funding deepfakes research?

    We found a total of 331 research articles about deepfakes in an analysis carried out on the Web of Science and Scopus databases. This data provides a complete overview of deepfakes research. Main insights include: the different areas in which deepfakes research is being performed; which areas are emerging, which are considered basic, and which currently have the most potential for development; the most studied topics in deepfakes research, including the different artificial intelligence methods applied; emerging and niche topics; relationships among the most prominent researchers; the countries where deepfakes research is performed; and the main funding institutions. This paper identifies current trends and opportunities in deepfakes research for practitioners and researchers who want to get into this topic.

  7. Data from: WaveFake: A data set to facilitate audio DeepFake detection

    • data.niaid.nih.gov
    Updated Jul 18, 2024
    Cite
    Schönherr, Lea (2024). WaveFake: A data set to facilitate audio DeepFake detection [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4904578
    Dataset provided by
    Schönherr, Lea
    Frank, Joel
    License

    Attribution-ShareAlike 4.0 International (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    The main purpose of this data set is to facilitate research into audio DeepFakes. Generated media files of this kind are increasingly used in impersonation attempts and online harassment, and we hope this work helps in finding new detection methods to prevent such abuse.

    The data set consists of 104,885 generated audio clips (16-bit PCM wav). We examine multiple networks trained on two reference data sets. First, the LJSpeech data set consisting of 13,100 short audio clips (on average 6 seconds each; roughly 24 hours total) read by a female speaker. It features passages from 7 non-fiction books and the audio was recorded on a MacBook Pro microphone. Second, we include samples based on the JSUT data set, specifically, basic5000 corpus. This corpus consists of 5,000 sentences covering all basic kanji of the Japanese language (4.8 seconds on average; roughly 6.7 hours total). The recordings were performed by a female native Japanese speaker in an anechoic room. Finally, we include samples from a full text-to-speech pipeline (16,283 phrases; 3.8s on average; roughly 17.5 hours total). Thus, our data set consists of approximately 175 hours of generated audio files in total. Note that we do not redistribute the reference data.
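As a quick arithmetic check on the figures above, the per-corpus durations can be recomputed from clip counts and average clip lengths (a sketch; because the quoted averages are rounded, the results only approximate the stated hours):

```python
# Rough sanity check of the corpus sizes quoted above:
# hours = clip count x average clip length (seconds) / 3600.
# The averages in the description are rounded, so these recomputed
# totals only approximate the stated hour figures.
corpora = {
    "LJSpeech": (13_100, 6.0),       # description says roughly 24 hours
    "JSUT basic5000": (5_000, 4.8),  # roughly 6.7 hours
    "TTS pipeline": (16_283, 3.8),   # roughly 17.5 hours
}
hours = {name: count * avg_sec / 3600
         for name, (count, avg_sec) in corpora.items()}
for name, h in hours.items():
    print(f"{name}: {h:.1f} h")
```

The JSUT and TTS totals land close to the stated figures; the LJSpeech recomputation comes in lower than "roughly 24 hours" because the true average clip length is somewhat above the rounded 6 seconds.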

    We included a range of architectures in our data set:

    MelGAN

    Parallel WaveGAN

    Multi-Band MelGAN

    Full-Band MelGAN

    WaveGlow

    Additionally, we examined a bigger version of MelGAN and include samples from a full TTS-pipeline consisting of a conformer and parallel WaveGAN model.

    Collection Process

    For WaveGlow, we utilize the official implementation (commit 8afb643) in conjunction with the official pre-trained network on PyTorch Hub. For the remaining networks, we use a popular implementation available on GitHub (commit 12c677e), whose repository also offers pre-trained models. We used the pre-trained networks to generate samples that are similar to their respective training distributions, LJ Speech and JSUT. When sampling the data set, we first extract Mel spectrograms from the original audio files using the pre-processing scripts of the corresponding repositories, then feed these Mel spectrograms to the respective models to obtain the data set. For sampling the full TTS results, we use the ESPnet project. To make sure the generated phrases do not overlap with the training set, we downloaded the Common Voice data set and extracted 16,285 phrases from it.

    This data set is licensed with a CC-BY-SA 4.0 license.

    This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC-2092 CaSa -- 390781972.

  8. Deepfake Detection Market Size | CAGR of 47.6%

    • market.us
    csv, pdf
    Updated Apr 15, 2025
    Cite
    Market.us (2025). Deepfake Detection Market Size | CAGR of 47.6% [Dataset]. https://market.us/report/deepfake-detection-market/
    Available download formats: csv, pdf
    Dataset provided by
    Market.us
    License

    https://market.us/privacy-policy/

    Time period covered
    2022 - 2032
    Area covered
    Global
    Description

    The Deepfake Detection Market is estimated to reach USD 5,609.3 million by 2034, riding on a strong 47.6% CAGR throughout the forecast period.

  9. Data from: DeepFakeNews

    • zenodo.org
    csv, zip
    Updated Jun 20, 2024
    Cite
    Enrico Nello (2024). DeepFakeNews [Dataset]. http://doi.org/10.5281/zenodo.11186584
    Available download formats: csv, zip
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Enrico Nello
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description
    The DeepFakeNews dataset is a novel and comprehensive dataset designed for the detection of both deepfakes and fake news. It is an extension and enhancement of the existing Fakeddit fake news dataset (the authors' related paper is strongly recommended reading for understanding that dataset), with significant modifications to cater specifically to the complexities of modern misinformation.

    • Enhancements: Derived from the Fakeddit fake news dataset, the DeepFakeNews dataset comprises a total of 509,916 images and has been enriched with 254,958 deepfake images generated using three different generative models: Stable Diffusion 2, Dreamlike, and GLIDE.
    • Balance and Composition: The dataset is perfectly balanced, containing an equal number of pristine (authentic) and generated (deepfake) images.
    • Removal of Hand-Modified Content: The original "manipulated content" category from Fakeddit, which consisted of images altered or modified by hand, has been removed. These have been replaced with deepfakes to provide a more relevant and challenging set of synthetic images.
    • Cleaning and Quality Control: The Fakeddit dataset was thoroughly cleaned, removing any images that were not found, contained only logos, or were otherwise unsuitable for deepfake detection. This cleaning process ensures a higher quality and more reliable dataset for training and evaluation.
    • Application: The DeepFakeNews dataset is suitable for both deepfake detection and fake news detection. Its diverse and balanced nature makes it an excellent benchmark for evaluating multimodal detection systems that analyze both visual and textual content.

    The dataset comes with three CSV files for training, testing, and validation sets, along with corresponding zip files containing the split images for each set. The deepfake images are named in both the CSV files and the image filenames following a specific format based on the generative model used: "SD_fake_imageid" for Stable Diffusion, "GL_fake_imageid" for GLIDE, and "DL_fake_imageid" for Dreamlike.
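The naming convention above can be decoded mechanically. A minimal sketch (the helper function and the prefix mapping are illustrative, not part of the dataset's own tooling):

```python
# Decode DeepFakeNews filenames of the form "<MODEL>_fake_<imageid>".
# The prefix-to-model mapping follows the convention described above;
# the parser itself is a hypothetical helper for illustration.
GENERATOR_PREFIXES = {
    "SD": "Stable Diffusion 2",
    "GL": "GLIDE",
    "DL": "Dreamlike",
}

def parse_fake_filename(name: str) -> tuple[str, str]:
    """Return (generative model, image id) for a deepfake filename."""
    prefix, marker, image_id = name.split("_", 2)
    if marker != "fake" or prefix not in GENERATOR_PREFIXES:
        raise ValueError(f"not a deepfake filename: {name!r}")
    return GENERATOR_PREFIXES[prefix], image_id

print(parse_fake_filename("SD_fake_1a2b3c"))  # ('Stable Diffusion 2', '1a2b3c')
```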

    The deepfake generation pipeline involves a two-step approach:

    1. First, generate a caption for a pristine image using a captioning model.
    2. Then, feed this caption into a generative model to create a new synthetic image.
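The two-step pipeline described above can be sketched as follows. The two model functions here are stand-in stubs: the dataset authors used a real captioning model and the Stable Diffusion 2 / GLIDE / Dreamlike generators, whose actual APIs differ.

```python
# Sketch of the caption-then-generate pipeline described above.
# `caption_model` and `image_generator` are stand-in stubs, not the
# real models used to build the dataset.
def caption_model(pristine_image: bytes) -> str:
    # Stub: a real captioner would describe the image content.
    return "a crowd gathers outside a city hall"

def image_generator(prompt: str) -> bytes:
    # Stub: a real text-to-image model would synthesize pixels.
    return f"<synthetic image for: {prompt}>".encode()

def make_deepfake(pristine_image: bytes) -> bytes:
    caption = caption_model(pristine_image)  # step 1: caption the pristine image
    return image_generator(caption)          # step 2: generate a synthetic image

fake = make_deepfake(b"<pristine image bytes>")
```

The key design point is that the synthetic image inherits its semantics from the pristine image via the caption, so pristine/fake pairs stay topically matched.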

    By incorporating images from multiple generative technologies, the dataset is designed to prevent any bias towards a single generation method in the training process of detection models. This choice aims to enhance the generalization capabilities of models trained on this dataset, enabling them to effectively recognize and flag deepfake content produced by a variety of different methods, not just the ones they have been exposed to during training. The other half consists of pristine, unaltered images to ensure a balanced dataset, crucial for unbiased training and evaluation of detection models.

    The dataset has been structured to maintain backward compatibility with the original Fakeddit dataset. All samples retain their original Fakeddit class labels (6_way_label), allowing for fine-grained fake news detection across the five original categories: True, Satire/Parody, False Connection, Imposter Content, and Misleading Content. This ensures that the DeepFakeNews dataset can be used not only for multimodal and unimodal deepfake detection but also for traditional fake news detection tasks, offering a versatile resource for a wide range of research scenarios in digital misinformation detection.

    For full information about the dataset creation, cleaning pipeline, composition, and generation process, please refer to my Master's thesis.

  10. Reported cases of illegally created deepfake material South Korea 2021-2023

    • statista.com
    Updated Jun 23, 2025
    Cite
    Statista (2025). Reported cases of illegally created deepfake material South Korea 2021-2023 [Dataset]. https://www.statista.com/statistics/1498714/south-korea-illegally-created-deepfake-pornography-report/
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    South Korea
    Description

    In 2023, the police in South Korea recorded *** individual cases of illegally created deepfake sexual content on the internet. The number of such reports has increased slightly over the last three years. The issue has recently received more attention in South Korea after a network of Telegram rooms distributing AI deepfake pornography was discovered.

  11. verichain-deepfake-data

    • huggingface.co
    Cite
    Muhammad Rafly Ash Shiddiqi, verichain-deepfake-data [Dataset]. https://huggingface.co/datasets/einrafh/verichain-deepfake-data
    Authors
    Muhammad Rafly Ash Shiddiqi
    License

    https://choosealicense.com/licenses/other/

    Description

    VeriChain Deepfake Detection Dataset

    Dataset Description

    This repository hosts the dataset for the VeriChain project, specifically curated for classifying images into three distinct categories: Real, AI-Generated, and Deepfake. The data is intended for training and evaluating robust models capable of identifying manipulated or synthetic media. This dataset was sourced and processed from the original AI-vs-Deepfake-vs-Real dataset.

    Dataset Structure

    The data… See the full description on the dataset page: https://huggingface.co/datasets/einrafh/verichain-deepfake-data.

  12. Deepfake Detection - Faces - Part 13_0

    • kaggle.com
    Updated Feb 14, 2020
    Cite
    Hieu Phung (2020). Deepfake Detection - Faces - Part 13_0 [Dataset]. https://www.kaggle.com/datasets/phunghieu/deepfake-detection-faces-part-13-0/data
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Hieu Phung
    Description

    Context

    This dataset includes all detectable faces of the corresponding part of the full dataset. Kaggle and the host expected and encouraged us to train our models outside of Kaggle’s notebooks environment; however, for someone who prefers to stick to Kaggle's kernels, this dataset would help a lot 😄.

    Usage

    Can be used for a variety of purposes, e.g. classification.

    Want something to get started with? Check out this demo 😉.

  13. Data from: Diffusion Deepfake Dataset

    • paperswithcode.com
    Updated Apr 1, 2024
    Cite
    Chaitali Bhattacharyya; Hanxiao Wang; Feng Zhang; Sungho Kim; Xiatian Zhu (2024). Diffusion Deepfake Dataset [Dataset]. https://paperswithcode.com/dataset/diffusion-deepfake
    Authors
    Chaitali Bhattacharyya; Hanxiao Wang; Feng Zhang; Sungho Kim; Xiatian Zhu
    Description

    Human face Deepfake dataset sampled from large datasets

    Highlights: high quality, diverse, challenging, large scale; includes text prompts.

  14. SDFVD2.0: Extension of Small Scale Deep Fake Video Dataset

    • data.mendeley.com
    Updated Jan 27, 2025
    Cite
    Shilpa Kaman (2025). SDFVD2.0: Extension of Small Scale Deep Fake Video Dataset [Dataset]. http://doi.org/10.17632/zzb7jyy8w8.1
    Authors
    Shilpa Kaman
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    SDFVD 2.0 is an augmented extension of the original SDFVD dataset, which contained 53 real and 53 fake videos. This new version was created to enhance the diversity and robustness of the dataset by applying augmentation techniques such as horizontal flip, rotation, shear, brightness and contrast adjustment, additive Gaussian noise, and downscaling and upscaling to the original videos. These augmentations help simulate a wider range of conditions and variations, making the dataset more suitable for training and evaluating deep learning models for deepfake detection. This process has significantly expanded the dataset, resulting in 461 real and 461 forged videos and providing a richer and more varied collection of video data for deepfake detection research and development.

    Dataset Structure

    The dataset is organized into two main directories: real and fake, each containing the original and augmented videos. Each augmented video file is named following the pattern: ‘
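To make the augmentation list concrete, here is a toy, pure-Python version of three of the transforms named above (horizontal flip, brightness adjustment, additive Gaussian noise), applied to a single frame represented as rows of pixel intensities. A real pipeline would operate on video frames with an imaging library; this is only a sketch of the operations themselves.

```python
import random

# Toy frame-level augmentations matching three items from the list above.
# A frame is a list of rows of 0-255 pixel intensities.
def horizontal_flip(frame):
    # Mirror each row left-to-right.
    return [row[::-1] for row in frame]

def adjust_brightness(frame, delta):
    # Shift every pixel by `delta`, clipping to the valid 0-255 range.
    return [[min(255, max(0, p + delta)) for p in row] for row in frame]

def add_gaussian_noise(frame, sigma=5.0, seed=0):
    # Add zero-mean Gaussian noise (seeded for reproducibility), then clip.
    rng = random.Random(seed)
    return [[min(255, max(0, round(p + rng.gauss(0, sigma)))) for p in row]
            for row in frame]

frame = [[10, 20, 30], [40, 50, 60]]
flipped = horizontal_flip(frame)         # [[30, 20, 10], [60, 50, 40]]
brighter = adjust_brightness(frame, 40)  # [[50, 60, 70], [80, 90, 100]]
noisy = add_gaussian_noise(frame)
```

Rotation, shear, and rescaling work the same way in spirit but require resampling, which is why real pipelines use an imaging library rather than hand-rolled loops.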

  15. Data material for dramatic deepfake

    • figshare.com
    xlsx
    Updated May 11, 2025
    Cite
    Wenhui Guo (2025). Data material for dramatic deepfake [Dataset]. http://doi.org/10.6084/m9.figshare.29019476.v1
    Available download formats: xlsx
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Wenhui Guo
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset for the article: 'Dramatic deepfake tales of the world: Analogical reasoning, AI-generated political (mis-)infotainment, and the distortion of global affairs'.

  16. deepfake-ecg-small

    • huggingface.co
    Updated Apr 24, 2023
    Cite
    DeepSynthBody (2023). deepfake-ecg-small [Dataset]. https://huggingface.co/datasets/deepsynthbody/deepfake-ecg-small
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset authored and provided by
    DeepSynthBody
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ECG Dataset

    This repository contains a small version of the ECG dataset at https://huggingface.co/datasets/deepsynthbody/deepfake_ecg, split into training, validation, and test sets. The dataset is provided as CSV files and corresponding ECG data files in .asc format. The ECG data files are organized into separate folders for the train, validation, and test sets.

    Folder Structure

    .
    ├── train.csv
    ├── validate.csv
    ├── test.csv
    ├── train
    │   ├── file_1.asc
    │   ├── file_2.asc
    …

    See the full description on the dataset page: https://huggingface.co/datasets/deepsynthbody/deepfake-ecg-small.
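The layout above (one CSV per split plus a folder of .asc files) can be walked with a few lines of standard-library Python. The CSV column name used here ("filename") is an assumption for illustration; check the actual headers on the dataset page.

```python
import csv
import tempfile
from pathlib import Path

# Sketch of reading the split layout shown above: each split has a CSV
# plus a folder of .asc ECG files. The "filename" column is assumed.
def list_split(root: Path, split: str):
    """Yield (csv_row, path_to_asc_file) pairs for one split."""
    with open(root / f"{split}.csv", newline="") as f:
        for row in csv.DictReader(f):
            yield row, root / split / row["filename"]

# Demo on a throwaway copy of the layout.
root = Path(tempfile.mkdtemp())
(root / "train").mkdir()
(root / "train" / "file_1.asc").write_text("0 1 2")
(root / "train.csv").write_text("filename\nfile_1.asc\n")

pairs = list(list_split(root, "train"))
print(pairs[0][1].name)  # file_1.asc
```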

  17. Data from: A Comprehensive Analysis of Public Discourse and Content Trends...

    • dataverse.harvard.edu
    Updated Mar 26, 2025
    Cite
    Ahmet Yiğitalp Tulga (2025). A Comprehensive Analysis of Public Discourse and Content Trends in Turkish Reddit Posts Related to Deepfake [Dataset]. http://doi.org/10.7910/DVN/ZNCOXI
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset provided by
    Harvard Dataverse
    Authors
    Ahmet Yiğitalp Tulga
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This study investigates the content of and changes in deepfake-related discussions across 5,220 Turkish Reddit posts from October 2019 to August 2023. Although the academic community has shown increasing interest in deepfakes since 2017, focusing on detection methods and the technology itself, scant attention has been paid to public perceptions and online debate. The analysis reveals that 69.4% of the examined posts feature deepfake content with sexual themes, with celebrity women being the primary targets in 60.2% of cases. In contrast, 22% of the content is about politics and political figures, while 8.6% provides technical guidance on creating deepfakes. The study also observes content changes over time, noting a rise in sexually explicit deepfake posts, particularly involving celebrities. However, in May 2023, coinciding with the presidential and general elections in Türkiye, discussions about politics and political figures increased significantly. This study sheds light on the changing landscape of these discussions, emphasizing the predominant presence of sexual content and the increasing prevalence of political content, particularly during election seasons.

  18. Deepfake Detection Accelerator Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Jul 3, 2025
    Cite
    Growth Market Reports (2025). Deepfake Detection Accelerator Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/deepfake-detection-accelerator-market
    Available download formats: pptx, pdf, csv
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Deepfake Detection Accelerator Market Outlook

    According to our latest research, the global Deepfake Detection Accelerator market size in 2024 is valued at USD 1.23 billion, reflecting a robust response to the growing threat of synthetic media and manipulated content. The market is expected to expand at a remarkable CAGR of 28.7% from 2025 to 2033, reaching a forecasted value of USD 10.18 billion by 2033. This substantial growth is driven by increasing awareness of the risks associated with deepfakes, rapid advancements in artificial intelligence, and a surge in demand for real-time content authentication across diverse sectors. The proliferation of deepfake technologies and the resulting security and reputational risks are compelling organizations and governments to invest significantly in detection accelerators, thereby propelling market expansion.




    One of the primary growth factors for the Deepfake Detection Accelerator market is the exponential increase in the creation and dissemination of deepfake content across digital platforms. As deepfakes become more sophisticated and accessible, businesses, media outlets, and public institutions are recognizing the urgent need for robust detection solutions. The proliferation of social media, coupled with the ease of sharing multimedia content, has heightened the risk of misinformation, identity theft, and reputational damage. This has led to a surge in investments in advanced deepfake detection technologies, particularly accelerators that can process and analyze vast volumes of data in real time. The growing public awareness about the potential societal and economic impacts of deepfakes is further fueling the adoption of these solutions.




    Another significant driver is the rapid evolution of artificial intelligence and machine learning algorithms, which are the backbone of deepfake detection accelerators. The ability to leverage AI-powered hardware and software for identifying manipulated content has substantially improved detection accuracy and speed. Enterprises and governments are increasingly relying on these accelerators to safeguard sensitive information, ensure content authenticity, and maintain compliance with emerging regulations. The integration of deepfake detection accelerators into existing cybersecurity frameworks is becoming a standard practice, especially in sectors such as finance, healthcare, and government, where data integrity is paramount. This technological synergy is expected to sustain the market’s momentum throughout the forecast period.




    The regulatory landscape is also playing a critical role in shaping the growth trajectory of the Deepfake Detection Accelerator market. Governments across major economies are enacting stringent policies and guidelines to combat the spread of malicious synthetic content. These regulations mandate organizations to implement advanced detection mechanisms, thereby driving the demand for high-performance accelerators. Furthermore, industry collaborations and public-private partnerships are fostering innovation in the development of scalable and interoperable deepfake detection solutions. The increasing frequency of high-profile deepfake incidents is prompting regulatory bodies to accelerate the adoption of these technologies, ensuring market growth remains on an upward trajectory.




    From a regional perspective, North America currently leads the global deepfake detection accelerator market, accounting for the largest share in 2024. This dominance can be attributed to the presence of key technology providers, a mature cybersecurity ecosystem, and proactive regulatory initiatives. Europe follows closely, driven by strict data protection laws and increased investments in AI research. The Asia Pacific region is emerging as a high-growth market, fueled by the rapid digital transformation of its economies and rising concerns about deepfake-related cyber threats. Latin America and the Middle East & Africa are also witnessing increased adoption, albeit at a slower pace, as awareness and infrastructure development continue to progress. Overall, the global market is poised for sustained growth, with regional dynamics playing a pivotal role in shaping future trends.



  19.

    Data & Code: Using Deepfakes for Experiments in the Social Sciences – A Pilot Study

    • datacatalogue.cessda.eu
    • search.gesis.org
    • +1 more
    Updated Mar 11, 2023
    Cite
    Eberl, Andreas; Kühn, Juliane; Wolbring, Tobias (2023). Data & Code: Using Deepfakes for Experiments in the Social Sciences – A Pilot Study [Dataset]. http://doi.org/10.7802/2467
    Explore at:
    Dataset updated
    Mar 11, 2023
    Dataset provided by
    Universität Erlangen-Nürnberg
    Authors
    Eberl, Andreas; Kühn, Juliane; Wolbring, Tobias
    Measurement technique
    Web-based experiment
    Description

    The advent of deepfakes – the manipulation of audio recordings, images, and videos based on deep learning techniques – has important implications for science and society. Current studies focus primarily on the detection and dangers of deepfakes. In contrast, less attention is paid to the potential of this technology for substantive research – particularly as an approach for controlled experimental manipulations in the social sciences. In this paper, we aim to fill this research gap and argue that deepfakes can be a valuable tool for conducting social science experiments. To demonstrate some of the potentials and pitfalls of deepfakes, we conducted a pilot study on the effects of physical attractiveness on student evaluations of teachers. To this end, we created a deepfake video varying the physical attractiveness of the instructor as compared to the original video and asked students to rate the presentation and instructor. First, our results show that social scientists without special knowledge in computational science can successfully create a credible deepfake within a reasonable time. Student ratings of the quality of the two videos were comparable, and students did not detect the deepfake. Second, we used deepfakes to examine a substantive research question: whether there are differences in the ratings of a physically more and a physically less attractive instructor. Our suggestive evidence points towards a beauty penalty. Thus, our study supports the idea that deepfakes can be used to introduce systematic variations into experiments while offering a high degree of experimental control. Finally, we discuss the feasibility of deepfakes as an experimental manipulation and the ethical challenges of using deepfakes in experiments. This record provides the accompanying data and code.

    Keywords: deepfakes, face swap, deep learning, experiment, physical attractiveness, student evaluations of teachers

  20.

    MIT focus group data on AI and deepfake technology

    • explore.openaire.eu
    Updated Jan 1, 2024
    Cite
    The citation is currently not available for this dataset.
    Explore at:
    Dataset updated
    Jan 1, 2024
    Authors
    Elena Denia; John Durant
    Description

    Interview protocols, recordings, and transcripts of three focus groups investigating the social perception of AI and deepfake technology at the Massachusetts Institute of Technology. The focus groups are described below:

    • Focus Group #1 (engaged public): 12 participants in a 3-session Make A Fake class; the students were offered a full course refund in return for their participation in the study, which took place immediately following the final session of the class on Monday, 27 February 2023.
    • Focus Group #2 (attentive public): 14 visitors to the MIT Museum who volunteered to participate in the discussion after being recruited in the museum itself. The activity was scheduled for the week following recruitment, Monday, 24 April 2023; as compensation for their involvement, participants were offered a refund of their museum admission fee and two more tickets for another day.
    • Focus Group #3 (nonattentive public): 13 pedestrians recruited with the help of 4 MIT volunteers working in the immediate environs of the Boston Public Library and the adjacent Prudential Center Shopping Mall. Participants were offered a $70 Amazon Gift Card in consideration for one hour of conversation on the same day of their recruitment, Saturday, 27 May 2023.
