100+ datasets found
  1. CIFAKE: Real and AI-Generated Synthetic Images

    • kaggle.com
    Updated Mar 28, 2023
    Cite
    Jordan J. Bird (2023). CIFAKE: Real and AI-Generated Synthetic Images [Dataset]. https://www.kaggle.com/datasets/birdy654/cifake-real-and-ai-generated-synthetic-images
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Mar 28, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Jordan J. Bird
    Description

    CIFAKE: Real and AI-Generated Synthetic Images

    The quality of AI-generated images has rapidly increased, leading to concerns of authenticity and trustworthiness.

    CIFAKE is a dataset that contains 60,000 synthetically-generated images and 60,000 real images (collected from CIFAR-10). Can computer vision techniques be used to detect whether an image is real or has been generated by AI?

    Further information on this dataset can be found here: Bird, J.J. and Lotfi, A., 2024. CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. IEEE Access.

    Dataset details

    The dataset contains two classes - REAL and FAKE.

    For REAL, we collected the images from Krizhevsky & Hinton's CIFAR-10 dataset

    For the FAKE images, we generated the equivalent of CIFAR-10 with Stable Diffusion version 1.4

    There are 100,000 images for training (50k per class) and 20,000 for testing (10k per class)
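    The split described above can be sanity-checked after downloading. A minimal sketch using only the standard library; the train/ and test/ folder names with REAL/ and FAKE/ subfolders are an assumption based on this description, not a verified layout:

```python
# Sketch: count images per split/class for a local CIFAKE-style download.
# Directory layout (train/, test/ with REAL/, FAKE/ subfolders) is assumed.
from pathlib import Path

def count_images(root: str) -> dict:
    """Return {split: {class_name: n_images}} for a CIFAKE-style tree."""
    counts = {}
    for split_dir in sorted(Path(root).iterdir()):
        if not split_dir.is_dir():
            continue
        counts[split_dir.name] = {
            cls.name: sum(1 for _ in cls.glob("*.jpg"))
            for cls in sorted(split_dir.iterdir()) if cls.is_dir()
        }
    return counts
```

    On a full download, each train/ class should report 50,000 files and each test/ class 10,000.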

    Papers with Code

    The dataset and all studies using it are linked on Papers with Code: https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images

    References

    If you use this dataset, you must cite the following sources

    Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.

    Bird, J.J. and Lotfi, A., 2024. CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. IEEE Access.

    Real images are from Krizhevsky & Hinton (2009); fake images are from Bird & Lotfi (2024).

    Notes

    The updates to the dataset on the 28th of March 2023 did not change the image content; files were renamed from ".jpeg" to ".jpg" and the root folder was re-uploaded to meet Kaggle's usability requirements.

    License

    This dataset is published under the same MIT license as CIFAR-10:

    Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

  2. TiCaM: Synthetic Images Dataset

    • datasetninja.com
    Updated May 23, 2021
    + more versions
    Cite
    Jigyasa Katrolia; Jason Raphael Rambach; Bruno Mirbach (2021). TiCaM: Synthetic Images Dataset [Dataset]. https://datasetninja.com/ticam-synthetic-images
    Explore at:
    Dataset updated
    May 23, 2021
    Dataset provided by
    Dataset Ninja
    Authors
    Jigyasa Katrolia; Jason Raphael Rambach; Bruno Mirbach
    License

    https://spdx.org/licenses/

    Description

    TiCaM Synthetic Images: A Time-of-Flight In-Car Cabin Monitoring Dataset is a time-of-flight dataset of car in-cabin images that provides a means to test comprehensive car-cabin monitoring systems based on deep learning methods. The authors provide a synthetic image dataset of car cabin images, similar to the real dataset, leveraging advanced simulation software's capability to generate abundant data with little effort. This can be used to test domain adaptation between synthetic and real data for selected classes. For both datasets, the authors provide ground-truth annotations for 2D and 3D object detection, as well as for instance segmentation.

  3. Synthetic datasets

    • generated.photos
    Updated Jun 25, 2024
    Cite
    Generated Media, Inc. (2020). Synthetic datasets [Dataset]. https://generated.photos/datasets
    Explore at:
    Dataset updated
    Jun 25, 2024
    Dataset authored and provided by
    Generated Media, Inc.
    Description

    100% synthetic. Based on model-released photos. Can be used for any purpose except those violating the law. Worldwide. Different backgrounds: colored, transparent, photographic. Diversity: ethnicity, demographics, facial expressions, and poses.

  4. FU-LoRA: Synthetic Fetal Ultrasound Images for Standard Anatomical Planes...

    • zenodo.org
    zip
    Updated Aug 7, 2024
    Cite
    Fangyijie Wang (2024). FU-LoRA: Synthetic Fetal Ultrasound Images for Standard Anatomical Planes Classification [Dataset]. http://doi.org/10.48550/arxiv.2407.20072
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 7, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Fangyijie Wang
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Feb 21, 2024
    Description

    Abstract

    Developing robust deep learning models for fetal ultrasound image analysis requires comprehensive, high-quality datasets to effectively learn informative data representations within the domain. However, the scarcity of labelled ultrasound images poses substantial challenges, especially in low-resource settings. To tackle this challenge, we leverage synthetic data to enhance the generalizability of deep learning models. This study proposes a diffusion-based method, Fetal Ultrasound LoRA (FU-LoRA), which involves fine-tuning latent diffusion models using the LoRA technique to generate synthetic fetal ultrasound images. These synthetic images are integrated into a hybrid dataset that combines real-world and synthetic images to improve the performance of zero-shot classifiers in low-resource settings. Our experimental results on fetal ultrasound images from African cohorts demonstrate that FU-LoRA outperforms the baseline method by a 13.73% increase in zero-shot classification accuracy. Furthermore, FU-LoRA achieves the highest accuracy of 82.40%, the highest F-score of 86.54%, and the highest AUC of 89.78%. It demonstrates that the FU-LoRA method is effective in the zero-shot classification of fetal ultrasound images in low-resource settings. Our code and data are publicly accessible on GitHub.

    Method

    Our FU-LoRA method: Fine-tuning the pre-trained latent diffusion model (LDM) [2] using the LoRA method on a small fetal ultrasound dataset from high-resource settings (HRS). This approach integrates synthetic images to enhance generalization and performance of deep learning models. We conduct three fine-tuning sessions for the diffusion model to generate three LoRA models with different hyper-parameters: alpha in [8, 32, 128], and r in [8, 32, 128]. The merging rate alpha/r is fixed to 1. The purpose of this operation is to delve deeper into LoRA to uncover optimizations that can improve the model's performance and evaluate the effectiveness of parameter r in generating synthetic images.
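    The merging rate mentioned above can be made concrete. In LoRA, the low-rank update B·A is scaled by alpha/r before being added to the frozen weight, so alpha = r gives an unscaled update. A toy sketch with pure-Python matrices, illustrative only; the actual method applies this scaling to the diffusion model's weight matrices:

```python
# Sketch of the LoRA merge described above: the low-rank update B @ A is
# scaled by alpha / r before being added to the frozen weight W.
# With alpha == r (merging rate 1), the update is applied unscaled.
# Toy matrices for illustration; the r parameter here names the LoRA
# hyper-parameter, not the (tiny) rank of these example matrices.

def matmul(B, A):
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def merge_lora(W, B, A, alpha, r):
    scale = alpha / r          # the merging rate alpha / r
    delta = matmul(B, A)       # low-rank update
    return [[W[i][j] + scale * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]
```

    For example, with alpha = r = 8 the scale is 1 and the update is added as-is; halving alpha halves the contribution of the synthetic-image fine-tuning relative to the frozen base weights.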

    Datasets

    The Spanish dataset (URL) in HRS includes 1,792 patient records from Spain [1]. All images were acquired during screening in the second and third trimesters of pregnancy, using six different machines operated by operators with similar expertise. We randomly selected 20 Spanish ultrasound images from each of the five maternal–fetal planes (Abdomen, Brain, Femur, Thorax, and Other) to fine-tune the LDM using the LoRA technique, and 1,150 Spanish images (230 x 5 planes) to create the hybrid dataset. In summary, fine-tuning the LDM uses 100 images from 85 patients. Training the downstream classifiers uses 6,148 images from 612 patients; within these, a subset of 200 images is randomly selected for validation. The hybrid dataset employed in this study has a total of 1,150 Spanish images, representing 486 patients.

    We create the synthetic dataset comprising 5000 fetal ultrasound images (500 x 2 samplers x 5 planes) accessible to the open-source community. The generation process utilizes our LoRA model Rank r = 128 with Euler and UniPC samplers known for their efficiency. Subsequently, we integrate this synthetic dataset with a small amount of Spanish data to create a hybrid dataset.

    Implementation Details

    The hyper-parameters of LoRA models are defined as follows: batch size to 2; LoRA learning rate to 1e-4; total training steps to 10000; LoRA dimension to 128; mixed precision selection to fp16; learning scheduler to constant; and input size (resolution) to 512. The model is trained on a single NVIDIA RTX A5000, 24 GB with 8-bit Adam optimizer on PyTorch.

  5. Synthetic Image Data Platform Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). Synthetic Image Data Platform Market Research Report 2033 [Dataset]. https://dataintelo.com/report/synthetic-image-data-platform-market
    Explore at:
    Available download formats: pdf, csv, pptx
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Synthetic Image Data Platform Market Outlook



    According to our latest research, the global synthetic image data platform market size reached USD 1.27 billion in 2024, demonstrating robust momentum driven by surging demand for high-quality, scalable training data across industries. The market is projected to expand at an impressive CAGR of 32.8% from 2025 to 2033, reaching an estimated USD 15.42 billion by 2033. This remarkable growth is primarily fueled by the rapid advancements in artificial intelligence and machine learning technologies, which require vast and diverse datasets for model training and validation.
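    The projection arithmetic can be checked directly: growing USD 1.27 billion (2024) to USD 15.42 billion (2033) spans nine compounding years, and the implied rate comes out near 32%, consistent with the quoted 32.8% CAGR once rounding in the endpoint figures is allowed for. A quick sketch:

```python
# Compound annual growth rate implied by the report's endpoint figures.
# Endpoints in USD billions; 2033 - 2024 = 9 compounding years.
def implied_cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

cagr = implied_cagr(1.27, 15.42, 2033 - 2024)  # roughly 0.32
```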



    One of the most significant growth factors for the synthetic image data platform market is the exponential increase in the adoption of computer vision and AI-driven applications across diverse sectors. As organizations strive to enhance the accuracy and reliability of AI models, the need for vast, annotated, and bias-free image datasets has become paramount. Traditional data collection methods often fall short in providing the scale and diversity required, leading to the rise of synthetic image data platforms that generate realistic, customizable, and scenario-specific imagery. This approach not only accelerates the development cycle but also ensures privacy compliance and cost efficiency, making it a preferred choice for enterprises seeking to gain a competitive edge.



    Another critical driver is the growing emphasis on data privacy and regulatory compliance, particularly in sensitive sectors such as healthcare, automotive, and finance. Synthetic image data platforms enable organizations to create data that is free from personally identifiable information, mitigating the risks associated with data breaches and regulatory violations. Additionally, these platforms empower companies to simulate rare or dangerous scenarios that are difficult or unethical to capture in the real world, such as medical anomalies or edge cases in autonomous vehicle development. This capability is proving indispensable for improving model robustness and safety, further propelling market growth.



    Technological advancements in generative AI, such as GANs (Generative Adversarial Networks) and diffusion models, have significantly enhanced the realism and utility of synthetic images. These innovations are making synthetic data nearly indistinguishable from real-world data, thereby increasing its adoption across sectors including robotics, retail, security, and surveillance. The integration of synthetic image data platforms with cloud-based environments and MLOps pipelines is also streamlining data generation and model training processes, reducing time-to-market for AI solutions. As a result, organizations of all sizes are increasingly leveraging these platforms to overcome data bottlenecks and accelerate innovation.



    Regionally, North America continues to dominate the synthetic image data platform market, accounting for the largest share in 2024, followed closely by Europe and Asia Pacific. The United States, in particular, benefits from a strong ecosystem of AI startups, established technology giants, and significant investments in research and development. Europe is witnessing substantial growth driven by stringent data protection regulations and a focus on ethical AI, while Asia Pacific is emerging as a high-growth region due to rapid digitalization and government-led AI initiatives. Latin America and the Middle East & Africa, though still nascent markets, are expected to register notable growth rates as awareness and adoption of synthetic data solutions expand.



    Component Analysis



    The synthetic image data platform market by component is segmented into software and services, each playing a pivotal role in the ecosystem’s development and adoption. The software segment, which includes proprietary synthetic data generation tools, simulation engines, and integration APIs, held the majority share in 2024. This dominance is attributed to the increasing sophistication of synthetic image generation algorithms, which enable users to create highly realistic and customizable datasets tailored to specific use cases. The software platforms are continuously evolving, incorporating advanced features such as automated data annotation, scenario simulation, and seamless integration with existing machine learning workflows, thus enhancing operational efficiency and scalability for end-users.



    The services segment, encompassing consulting, implementation, t

  6. pGAN Synthetic Dataset: A Deep Learning Approach to Private Data Sharing of...

    • zenodo.org
    bin
    Updated Jun 26, 2021
    Cite
    Jason Plawinski; Hanxi Sun; Sajanth Subramaniam; Amir Jamaludin; Timor Kadir; Aimee Readie; Gregory Ligozio; David Ohlssen; Thibaud Coroller; Mark Baillie (2021). pGAN Synthetic Dataset: A Deep Learning Approach to Private Data Sharing of Medical Images Using Conditional GANs [Dataset]. http://doi.org/10.5281/zenodo.5031881
    Explore at:
    Available download formats: bin
    Dataset updated
    Jun 26, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jason Plawinski; Hanxi Sun; Sajanth Subramaniam; Amir Jamaludin; Timor Kadir; Aimee Readie; Gregory Ligozio; David Ohlssen; Thibaud Coroller; Mark Baillie
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Synthetic dataset for A Deep Learning Approach to Private Data Sharing of Medical Images Using Conditional GANs

    Dataset specification:

    • MRI images of Vertebral Units labelled based on region
    • Dataset is comprised of 10000 pairs of images and labels
    • Image and label pair number k can be selected by: synthetic_dataset['images'][k] and synthetic_dataset['regions'][k]
    • Images are 3D of size (9, 64, 64)
    • Regions are stored as an integer. Mapping is 0: cervical, 1: thoracic, 2: lumbar
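    The access pattern in the specification above can be mocked as follows; the ten placeholder samples and zero-valued images are stand-ins for the 10,000 real pairs, kept tiny for illustration:

```python
# Mock of the dataset's access pattern: synthetic_dataset['images'][k]
# and synthetic_dataset['regions'][k]. Image shape (9, 64, 64) and the
# region mapping follow the specification; the sample count and contents
# are placeholders.
REGION_NAMES = {0: "cervical", 1: "thoracic", 2: "lumbar"}

def make_mock_dataset(n=10):
    # The same zero image object is reused; fine for a read-only mock.
    zero_image = [[[0.0] * 64 for _ in range(64)] for _ in range(9)]
    return {
        "images": [zero_image for _ in range(n)],
        "regions": [k % 3 for k in range(n)],
    }

synthetic_dataset = make_mock_dataset()
k = 4
image = synthetic_dataset["images"][k]    # 3D image of size (9, 64, 64)
region = REGION_NAMES[synthetic_dataset["regions"][k]]
```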

    Arxiv paper: https://arxiv.org/abs/2106.13199
    Github code: https://github.com/tcoroller/pGAN/

    Abstract:

    Sharing data from clinical studies can facilitate innovative data-driven research and ultimately lead to better public health. However, sharing biomedical data can put sensitive personal information at risk. This is usually solved by anonymization, which is a slow and expensive process. An alternative to anonymization is sharing a synthetic dataset that behaves similarly to the real data but preserves privacy. As part of the collaboration between Novartis and the Oxford Big Data Institute, we generate a synthetic dataset based on the COSENTYX Ankylosing Spondylitis (AS) clinical study. We apply an Auxiliary Classifier GAN (ac-GAN) to generate synthetic magnetic resonance images (MRIs) of vertebral units (VUs), conditioned on the VU location (cervical, thoracic and lumbar). In this paper, we present a method for generating a synthetic dataset and conduct an in-depth analysis of its properties along three key metrics: image fidelity, sample diversity and dataset privacy.

  7. synthetic-multiturn-multimodal

    • huggingface.co
    Updated Jan 28, 2024
    Cite
    Mesolitica (2024). synthetic-multiturn-multimodal [Dataset]. https://huggingface.co/datasets/mesolitica/synthetic-multiturn-multimodal
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Jan 28, 2024
    Dataset authored and provided by
    Mesolitica
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Multiturn Multimodal

    We want to generate synthetic data that is able to understand the position of and relationships between multiple images and multiple audio clips; an example is given below. All notebooks are at https://github.com/mesolitica/malaysian-dataset/tree/master/chatbot/multiturn-multimodal

      multi-images
    

    synthetic-multi-images-relationship.jsonl, 100000 rows, 109MB. Images at https://huggingface.co/datasets/mesolitica/translated-LLaVA-Pretrain/tree/main

      Example data
    

    {'filename':… See the full description on the dataset page: https://huggingface.co/datasets/mesolitica/synthetic-multiturn-multimodal.
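    A file of 100,000 rows is most comfortably streamed line by line rather than loaded whole. A minimal JSON Lines reader sketch; the filename comes from the description above, and any fields beyond 'filename' are not specified there, so the reader stays generic:

```python
# Sketch: stream a JSON Lines file (one JSON object per line), e.g.
# synthetic-multi-images-relationship.jsonl, without loading it whole.
import json

def iter_jsonl(path):
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:                 # skip blank lines
                yield json.loads(line)
```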

  8. Synthetic river flow videos dataset

    • data.niaid.nih.gov
    Updated Aug 3, 2022
    Cite
    Guillaume Bodart; Jérôme Le Coz; Magali Jodeau; Alexandre Hauet (2022). Synthetic river flow videos dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6257391
    Explore at:
    Dataset updated
    Aug 3, 2022
    Dataset provided by
    INRAE, UR RiverLy, 5 rue de la Doua 69100 Villeurbanne, France, jerome.lecoz@inrae.fr
    EDF R&D, LHSV, 6 quai Watier 78400 Chatou, France, guillaume.bodart@inrae.fr
    EDF-DTG, 134-200 chemin de l'étang, 38950 St Martin-le-Vinoux, France, alexandre.hauet@edf.fr
    EDF R&D, LHSV, 6 quai Watier 78400 Chatou, France, magali.jodeau@edf.fr
    Authors
    Guillaume Bodart; Jérôme Le Coz; Magali Jodeau; Alexandre Hauet
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Synthetic river flow videos for evaluating image-based velocimetry methods

    Year : 2022

    Authors : G.Bodart (guillaume.bodart@inrae.fr), J.Le Coz (jerome.lecoz@inrae.fr), M.Jodeau (magali.jodeau@edf.fr), A.Hauet (alexandre.hauet@edf.fr)

    ### This file describes the data attached to the article

    -> 00_article_cases

    This folder contains the data used in the case studies: synthetic videos + reference files.

    - 00_reference_velocities
      -> Reference velocities interpolated on a regular grid. Data are given in conventional units, i.e. m/s and m.
    
    
    - 01_XX
      -> Data of the first case study
    
    
    - 02_XX
      -> Data of the second case study
    

    -> 01_dev

    This folder contains the Python libraries and Mantaflow modified source code used in the paper. The libraries are provided as is. Feel free to contact us for support or guidelines.

    - lspiv
      -> Python library used to extract, process and display results of LSPIV analysis carried out with Fudaa-LSPIV
    
    
    - mantaflow-modified
      -> Modified version of Mantaflow described in the article. Installation instructions can be found at http://mantaflow.com
    
    
    - syri
      -> Python library used to extract, process and display fluid simulations carried out on Mantaflow and Blender. (Require the lspiv library)
    

    -> 02_dataset

    This folder contains synthetic videos generated with the method described in the article. The fluid simulation parameters, and thus the reference velocities, are the same as those presented in the article.

    • The videos can be used freely. Please consider citing the corresponding paper.
  9. Synthetic Data Generation Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 29, 2025
    Cite
    Growth Market Reports (2025). Synthetic Data Generation Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/synthetic-data-generation-market
    Explore at:
    Available download formats: pdf, pptx, csv
    Dataset updated
    Aug 29, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Synthetic Data Generation Market Outlook




    According to our latest research, the global synthetic data generation market size reached USD 1.6 billion in 2024, demonstrating robust expansion driven by increasing demand for high-quality, privacy-preserving datasets. The market is projected to grow at a CAGR of 38.2% over the forecast period, reaching USD 19.2 billion by 2033. This remarkable growth trajectory is fueled by the growing adoption of artificial intelligence (AI) and machine learning (ML) technologies across industries, coupled with stringent data privacy regulations that necessitate innovative data solutions. As per our latest research, organizations worldwide are increasingly leveraging synthetic data to address data scarcity, enhance AI model training, and ensure compliance with evolving privacy standards.




    One of the primary growth factors for the synthetic data generation market is the rising emphasis on data privacy and regulatory compliance. With the implementation of stringent data protection laws such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States, enterprises are under immense pressure to safeguard sensitive information. Synthetic data offers a compelling solution by enabling organizations to generate artificial datasets that mirror the statistical properties of real data without exposing personally identifiable information. This not only facilitates regulatory compliance but also empowers organizations to innovate without the risk of data breaches or privacy violations. As businesses increasingly recognize the value of privacy-preserving data, the demand for advanced synthetic data generation solutions is set to surge.




    Another significant driver is the exponential growth in AI and ML adoption across various sectors, including healthcare, finance, automotive, and retail. High-quality, diverse, and unbiased data is the cornerstone of effective AI model development. However, acquiring such data is often challenging due to privacy concerns, limited availability, or high acquisition costs. Synthetic data generation bridges this gap by providing scalable, customizable datasets tailored to specific use cases, thereby accelerating AI training and reducing dependency on real-world data. Organizations are leveraging synthetic data to enhance algorithm performance, mitigate data bias, and simulate rare events, which are otherwise difficult to capture in real datasets. This capability is particularly valuable in sectors like autonomous vehicles, where training models on rare but critical scenarios is essential for safety and reliability.




    Furthermore, the growing complexity of data types—ranging from tabular and image data to text, audio, and video—has amplified the need for versatile synthetic data generation tools. Enterprises are increasingly seeking solutions that can generate multi-modal synthetic datasets to support diverse applications such as fraud detection, product testing, and quality assurance. The flexibility offered by synthetic data generation platforms enables organizations to simulate a wide array of scenarios, test software systems, and validate AI models in controlled environments. This not only enhances operational efficiency but also drives innovation by enabling rapid prototyping and experimentation. As the digital ecosystem continues to evolve, the ability to generate synthetic data across various formats will be a critical differentiator for businesses striving to maintain a competitive edge.




    Regionally, North America leads the synthetic data generation market, accounting for the largest revenue share in 2024, followed closely by Europe and Asia Pacific. The dominance of North America can be attributed to the strong presence of technology giants, advanced research institutions, and a favorable regulatory environment that encourages AI innovation. Europe is witnessing rapid growth due to proactive data privacy regulations and increasing investments in digital transformation initiatives. Meanwhile, Asia Pacific is emerging as a high-growth region, driven by the proliferation of digital technologies and rising adoption of AI-powered solutions across industries. Latin America and the Middle East & Africa are also expected to experience steady growth, supported by government-led digitalization programs and expanding IT infrastructure.




  10. Unimelb Corridor Synthetic dataset

    • figshare.unimelb.edu.au
    png
    Updated May 30, 2023
    Cite
    Debaditya Acharya; Kourosh Khoshelham; Stephan Winter (2023). Unimelb Corridor Synthetic dataset [Dataset]. http://doi.org/10.26188/5dd8b8085b191
    Explore at:
    Available download formats: png
    Dataset updated
    May 30, 2023
    Dataset provided by
    The University of Melbourne
    Authors
    Debaditya Acharya; Kourosh Khoshelham; Stephan Winter
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data-set is supplementary material related to the generation of synthetic images of a corridor in the University of Melbourne, Australia, from a building information model (BIM). It was generated to check the ability of deep learning algorithms to learn the task of indoor localisation from synthetic images when tested on real images.

    The following naming convention is used for the data-sets (brackets show the number of images):

    REAL DATA

    - Real -> Real images (949 images)
    - Gradmag-Real -> Gradmag of real data (949 images)

    SYNTHETIC DATA

    - Syn-Car -> Cartoonish images (2500 images)
    - Syn-pho-real -> Synthetic photo-realistic images (2500 images)
    - Syn-pho-real-tex -> Synthetic photo-realistic textured images (2500 images)
    - Syn-Edge -> Edge render images (2500 images)
    - Gradmag-Syn-Car -> Gradmag of cartoonish images (2500 images)

    Each folder contains the images and their respective ground-truth poses in the following format: [ImageName X Y Z w p q r].

    To generate the synthetic data-set, we define a trajectory in the 3D indoor model. The points in the trajectory serve as the ground-truth poses of the synthetic images. The height of the trajectory was kept in the range of 1.5-1.8 m from the floor, which is the usual height of holding a camera in hand. Artificial point light sources were placed to illuminate the corridor (except for the edge render images). The length of the trajectory was approximately 30 m. A virtual camera was moved along the trajectory to render four different sets of synthetic images in Blender*. The intrinsic parameters of the virtual camera were kept identical to the real camera (VGA resolution, focal length of 3.5 mm, no distortion modeled). We rendered images along the trajectory at 0.05 m intervals and ±10° tilt.

    The main difference between the cartoonish (Syn-Car) and photo-realistic images (Syn-pho-real) is the rendering model. Photo-realistic rendering is a physics-based model that traces the path of light rays in the scene, similar to the real world, whereas cartoonish rendering only roughly traces the path of light rays. The photo-realistic textured images (Syn-pho-real-tex) were rendered by adding repeating synthetic textures to the 3D indoor model, such as brick, carpet and wooden-ceiling textures. The realism of photo-realistic rendering comes at the cost of rendering time; however, the rendering times of the photo-realistic data-sets were considerably reduced with the help of a GPU. Note that the naming convention used for the data-sets (e.g. Cartoonish) follows Blender terminology.

    An additional data-set (Gradmag-Syn-Car) was derived from the cartoonish images by taking the edge gradient magnitude of the images and suppressing weak edges below a threshold. The edge render images (Syn-Edge) were generated by rendering only the edges of the 3D indoor model, without taking the lighting conditions into account. This data-set is similar to the Gradmag-Syn-Car data-set but does not contain the effects of scene illumination, such as reflections and shadows.

    *Blender is an open-source 3D computer graphics software and finds its applications in video games, animated films, simulation and visual art. For more information please visit: http://www.blender.org

    Please cite the papers if you use the data-set:

    1) Acharya, D., Khoshelham, K., and Winter, S., 2019. BIM-PoseNet: Indoor camera localisation using a 3D indoor model and deep learning from synthetic images. ISPRS Journal of Photogrammetry and Remote Sensing, 150: 245-258.

    2) Acharya, D., Singha Roy, S., Khoshelham, K. and Winter, S., 2019. Modelling uncertainty of single image indoor localisation using a 3D model and deep learning. ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences, IV-2/W5, pages 247-254.
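    The ground-truth pose format [ImageName X Y Z w p q r] can be parsed with a few lines; whitespace-separated fields are an assumption, as the exact file layout is not spelled out here:

```python
# Minimal parser for the ground-truth pose format described above:
# "ImageName X Y Z w p q r" (position plus orientation quaternion).
# Whitespace separation is an assumption about the file layout.
def parse_pose_line(line):
    name, *vals = line.split()
    if len(vals) != 7:
        raise ValueError(f"expected 7 numbers after the image name, got {len(vals)}")
    x, y, z, w, p, q, r = map(float, vals)
    return {"image": name, "position": (x, y, z), "quaternion": (w, p, q, r)}
```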

  11. Privacy-Preserving Synthetic Images Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 4, 2025
    Cite
    Growth Market Reports (2025). Privacy-Preserving Synthetic Images Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/privacy-preserving-synthetic-images-market
    Explore at:
    Available download formats: pptx, csv, pdf
    Dataset updated
    Oct 4, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Privacy-Preserving Synthetic Images Market Outlook



    According to our latest research, the global privacy-preserving synthetic images market size reached USD 1.42 billion in 2024, reflecting robust adoption across data-sensitive industries. This market is projected to grow at a CAGR of 32.7% from 2025 to 2033, reaching a forecasted value of USD 19.13 billion by 2033. The remarkable growth trajectory is fueled by the increasing demand for secure data sharing, stringent data privacy regulations, and the proliferation of artificial intelligence (AI) and machine learning (ML) applications that require high-quality, privacy-compliant datasets.




    One of the primary growth drivers of the privacy-preserving synthetic images market is the intensifying focus on data privacy and security. As organizations across sectors grapple with stricter regulations such as GDPR in Europe, CCPA in California, and similar frameworks worldwide, the need to anonymize and protect sensitive information has become paramount. Synthetic images, generated through advanced AI algorithms, offer a compelling solution by enabling organizations to create realistic but entirely artificial datasets that do not compromise individual privacy. This allows businesses to innovate and extract insights from data without risking regulatory penalties or reputational damage, thereby accelerating the adoption of privacy-preserving synthetic image technologies.




    Another significant factor propelling market growth is the rapid expansion of AI and ML-driven applications that require vast amounts of annotated image data. Traditional data collection methods are often hampered by privacy concerns, limited accessibility, and high costs. By leveraging synthetic images, enterprises can overcome these barriers, generating diverse, scalable, and bias-mitigated datasets for training and validating AI models. This is particularly critical in sectors such as healthcare, finance, and autonomous vehicles, where real-world data is both sensitive and scarce. The ability to generate synthetic images that closely mimic real-world scenarios, while ensuring privacy, is unlocking new opportunities for innovation and operational efficiency across industries.




    Furthermore, the increasing sophistication of generative models, such as Generative Adversarial Networks (GANs) and diffusion models, has significantly enhanced the realism and utility of synthetic images. These technological advancements are enabling more nuanced privacy preservation techniques, such as differential privacy and federated learning, which further bolster the appeal of synthetic data solutions. As a result, the market is witnessing heightened investment from both established technology vendors and emerging startups, leading to rapid product development, ecosystem expansion, and competitive differentiation. The convergence of regulatory pressures, technological innovation, and growing enterprise awareness is expected to sustain the momentum of the privacy-preserving synthetic images market throughout the forecast period.




    From a regional perspective, North America currently dominates the global market, accounting for approximately 41% of the total revenue in 2024, driven by early technology adoption, a mature regulatory landscape, and significant R&D investments. Europe follows closely, with a market share of 28%, reflecting the region’s proactive stance on data privacy and robust public sector engagement. Asia Pacific is emerging as the fastest-growing region, propelled by digital transformation initiatives, rising AI adoption, and increasing awareness of data privacy issues. Meanwhile, Latin America and the Middle East & Africa are witnessing steady growth, albeit from a smaller base, as organizations in these regions gradually embrace privacy-preserving synthetic image technologies to address local regulatory and market needs.





    Component Analysis



    The privacy-preserving synthetic images market is segmented by component into software, hardware,

  12. Supplemental Synthetic Images (outdated)

    • figshare.com
    • resodate.org
    zip
    Updated May 7, 2021
    Cite
    Duke Bass Connections Deep Learning for Rare Energy Infrastructure 2020-2021 (2021). Supplemental Synthetic Images (outdated) [Dataset]. http://doi.org/10.6084/m9.figshare.13546643.v2
    Explore at:
    Available download formats: zip
    Dataset updated
    May 7, 2021
    Dataset provided by
    figshare
    Authors
    Duke Bass Connections Deep Learning for Rare Energy Infrastructure 2020-2021
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview

    This is a set of synthetic overhead imagery of wind turbines created with CityEngine. Corresponding labels provide the class, x and y coordinates, and height and width (YOLOv3 format) of the ground-truth bounding boxes for each wind turbine in the images. Labels are named after their images (e.g. image.png has the label image.txt).

    Use

    This dataset is meant to supplement the training set of an object detection model for overhead images of wind turbines. It can be added to such a training set to potentially improve performance when the model is used on real overhead images of wind turbines.

    Why

    This dataset was created to examine the utility of adding synthetic imagery to the training set of an object detection model to improve performance on rare objects. Since wind turbines are both very rare in number and sparse, acquiring data is very costly. This synthetic imagery is meant to solve that issue by automating the generation of new training data. The use of synthetic imagery can also be applied to the issue of cross-domain testing, where the model lacks training data for a particular region and consequently struggles when used on that region.

    Method

    The process for creating the dataset involved selecting background images from NAIP imagery available on Earth OnDemand. These images were randomly selected from the following geographies: forest, farmland, grasslands, water, urban/suburban, mountains, and deserts. No consideration was given to whether the background images would seem realistic, because we wanted to see if this would help the model detect wind turbines regardless of their context (which would help when using the model on novel geographies). A script then selected backgrounds at random, uniformly generated 3D models of large wind turbines over each image, and positioned the virtual camera to save four 608x608-pixel images. This process was repeated with the same random seed, but with no background image and the wind turbines coloured black. Finally, these black-and-white images were converted into ground-truth labels by grouping the black pixels in the images.
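    The last step described above, turning the black-rendered turbine pixels into bounding-box labels by grouping black pixels, can be sketched as a connected-component pass over the mask; this is an illustrative reconstruction, not the authors' actual script.

```python
from collections import deque
import numpy as np

def mask_to_yolo_labels(mask, class_id=0):
    """Group turbine pixels (True) into 4-connected components and emit
    YOLO labels (class, x_center, y_center, width, height), normalized
    to the image dimensions."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    labels = []
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            # Breadth-first search over one connected component
            queue = deque([(sy, sx)])
            seen[sy, sx] = True
            ys, xs = [sy], [sx]
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        ys.append(ny)
                        xs.append(nx)
                        queue.append((ny, nx))
            y0, y1, x0, x1 = min(ys), max(ys), min(xs), max(xs)
            labels.append((class_id,
                           (x0 + x1 + 1) / (2 * w),   # x_center
                           (y0 + y1 + 1) / (2 * h),   # y_center
                           (x1 - x0 + 1) / w,         # box width
                           (y1 - y0 + 1) / h))        # box height
    return labels
```

    Each returned tuple corresponds to one line of a YOLOv3-format label file for the matching image.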

  13. ArtiFact: Real and Fake Image Dataset

    • gts.ai
    zip
    Updated Jul 26, 2024
    Cite
    GTS (2024). ArtiFact: Real and Fake Image Dataset [Dataset]. https://gts.ai/dataset-download/artifact-real-and-fake-image-dataset/
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 26, 2024
    Dataset provided by
    GLOBOSE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
    Authors
    GTS
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The ArtiFact dataset is a comprehensive collection of 2.5 million real and synthetic images across diverse categories. It includes real images from 8 sources and synthetic images generated using 25 methods, such as GANs, diffusion models, and other generators. The dataset provides a robust benchmark for evaluating synthetic image detectors under real-world conditions.

  14. Raw Synthetic Particle Image Dataset (RSPID)

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Nov 2, 2023
    Cite
    Michel Machado; Michel Machado; Douglas Rocha; Douglas Rocha (2023). Raw Synthetic Particle Image Dataset (RSPID) [Dataset]. http://doi.org/10.5281/zenodo.7832205
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 2, 2023
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Michel Machado; Michel Machado; Douglas Rocha; Douglas Rocha
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Synthetic Particle Image Velocimetry (PIV) data generated by the PIV Image Generator software, a tool that generates synthetic PIV images for the purpose of validating and benchmarking PIV and optical flow methods in tracer-based imaging for fluid mechanics (Mendes et al., 2020).

    This data was generated with the following parameters:

    • image width: 665 pixels;
    • image height: 630 pixels;
    • bit depth: 8 bits;
    • particle radius: 1, 2, 3, 4 pixels;
    • particle density: 15, 17, 20, 23, 25, 32 particles;
    • delta x factor: 0.05, 0.1, 0.15, 0.2, 0.25 %;
    • noise level: 1, 5, 10, 15;
    • out-of-plane standard deviation: 0.01, 0.025, 0.05;
    • flows: rankine uniform, rankine vortex, parabolic, stagnation, shear, decaying vortex.
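    Enumerating the Cartesian product of the parameter values listed above shows the size of the sweep; this sketch only counts the configurations and is not the generator's actual driver script.

```python
from itertools import product

# Parameter values as listed for RSPID
particle_radius  = [1, 2, 3, 4]                      # pixels
particle_density = [15, 17, 20, 23, 25, 32]          # particles
delta_x_factor   = [0.05, 0.10, 0.15, 0.20, 0.25]    # %
noise_level      = [1, 5, 10, 15]
out_of_plane_std = [0.01, 0.025, 0.05]
flows = ["rankine uniform", "rankine vortex", "parabolic",
         "stagnation", "shear", "decaying vortex"]

# Every combination of the six parameters
configs = list(product(particle_radius, particle_density, delta_x_factor,
                       noise_level, out_of_plane_std, flows))
print(len(configs))  # 4 * 6 * 5 * 4 * 3 * 6 = 8640 combinations
```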

  15. google-image-malaysian-vehicle-synthetic-qa

    • huggingface.co
    Updated May 28, 2024
    Cite
    Mesolitica (2024). google-image-malaysian-vehicle-synthetic-qa [Dataset]. https://huggingface.co/datasets/mesolitica/google-image-malaysian-vehicle-synthetic-qa
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    May 28, 2024
    Dataset authored and provided by
    Mesolitica
    Description
  16. Text-to-Image Generator Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated Mar 8, 2025
    Cite
    Archive Market Research (2025). Text-to-Image Generator Report [Dataset]. https://www.archivemarketresearch.com/reports/text-to-image-generator-53813
    Explore at:
    Available download formats: pdf, ppt, doc
    Dataset updated
    Mar 8, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The text-to-image generator market is experiencing explosive growth, driven by advances in artificial intelligence, particularly in deep learning and diffusion models. The market, estimated at $2 billion in 2025, is projected to witness a robust compound annual growth rate (CAGR) of 35% from 2025 to 2033. This significant expansion is fueled by increasing adoption across diverse sectors, including advertising, art creation, and various other applications. The accessibility of powerful generative models through cloud-based platforms and APIs is lowering the barrier to entry for both individual artists and large corporations, fostering innovation and wider market penetration. Key players like OpenAI, Google, and Stability AI are at the forefront of this revolution, constantly releasing improved models and expanding their service offerings.

    The market is further segmented by application type (advertising, art, and others) and terminal type (mobile and PC). While initial adoption is heavily skewed toward North America and Europe, rapid growth is anticipated in regions like Asia Pacific and the Middle East & Africa as awareness and internet penetration increase. The main restraints on market growth are ethical concerns, such as potential misuse for creating deepfakes and copyright infringement. However, ongoing developments in watermarking technologies and responsible AI practices are actively addressing these challenges.

    The future of the market hinges on further technological advancements, including improving the realism and controllability of generated images, expanding the range of supported styles and applications, and successfully navigating the legal and ethical complexities inherent in this rapidly evolving technology. This rapid expansion suggests significant investment opportunities, particularly in research and development, platform development, and the provision of related services and tools. The market is expected to mature over the next decade, but maintaining its impressive growth trajectory will require continuous innovation and responsible development.

  17. Data from: TrueFace: a Dataset for the Detection of Synthetic Face Images from Social Networks

    • zenodo.org
    • data.niaid.nih.gov
    xz
    Updated Oct 13, 2022
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Giulia Boato; Cecilia Pasquini; Antonio Luigi Stefani; Sebastiano Verde; Daniele Miorandi; Giulia Boato; Cecilia Pasquini; Antonio Luigi Stefani; Sebastiano Verde; Daniele Miorandi (2022). TrueFace: a Dataset for the Detection of Synthetic Face Images from Social Networks [Dataset]. http://doi.org/10.5281/zenodo.7065064
    Explore at:
    Available download formats: xz
    Dataset updated
    Oct 13, 2022
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Giulia Boato; Cecilia Pasquini; Antonio Luigi Stefani; Sebastiano Verde; Daniele Miorandi; Giulia Boato; Cecilia Pasquini; Antonio Luigi Stefani; Sebastiano Verde; Daniele Miorandi
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    TrueFace is a first dataset of real and synthetic faces as processed by social networks: the synthetic faces were obtained with the successful StyleGAN generative models, and all images were shared on Facebook, Twitter and Telegram.

    Images have historically been a universal and cross-cultural communication medium, capable of reaching people of any social background, status or education. Unsurprisingly though, their social impact has often been exploited for malicious purposes, like spreading misinformation and manipulating public opinion. With today's technologies, the possibility to generate highly realistic fakes is within everyone's reach. A major threat derives in particular from the use of synthetically generated faces, which are able to deceive even the most experienced observer. To contrast this fake news phenomenon, researchers have employed artificial intelligence to detect synthetic images by analysing patterns and artifacts introduced by the generative models. However, most online images are subject to repeated sharing operations by social media platforms. Said platforms process uploaded images by applying operations (like compression) that progressively degrade those useful forensic traces, compromising the effectiveness of the developed detectors. To solve the synthetic-vs-real problem "in the wild", more realistic image databases, like TrueFace, are needed to train specialised detectors.

  18. Synthetic Training Data Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 29, 2025
    Cite
    Growth Market Reports (2025). Synthetic Training Data Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/synthetic-training-data-market
    Explore at:
    Available download formats: csv, pdf, pptx
    Dataset updated
    Aug 29, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Synthetic Training Data Market Outlook



    According to our latest research, the global synthetic training data market size in 2024 is valued at USD 1.45 billion, demonstrating robust momentum as organizations increasingly adopt artificial intelligence and machine learning solutions. The market is projected to grow at a remarkable CAGR of 38.7% from 2025 to 2033, reaching an estimated USD 22.46 billion by 2033. This exponential growth is primarily driven by the rising demand for high-quality, diverse, and privacy-compliant datasets that fuel advanced AI models, as well as the escalating need for scalable data solutions across various industries.




    One of the primary growth factors propelling the synthetic training data market is the escalating complexity and diversity of AI and machine learning applications. As organizations strive to develop more accurate and robust AI models, the need for vast amounts of annotated and high-quality training data has surged. Traditional data collection methods are often hampered by privacy concerns, high costs, and time-consuming processes. Synthetic training data, generated through advanced algorithms and simulation tools, offers a compelling alternative by providing scalable, customizable, and bias-mitigated datasets. This enables organizations to accelerate model development, improve performance, and comply with evolving data privacy regulations such as GDPR and CCPA, thus driving widespread adoption across sectors like healthcare, finance, autonomous vehicles, and robotics.




    Another significant driver is the increasing adoption of synthetic data for data augmentation and rare event simulation. In sectors such as autonomous vehicles, manufacturing, and robotics, real-world data for edge-case scenarios or rare events is often scarce or difficult to capture. Synthetic training data allows for the generation of these critical scenarios at scale, enabling AI systems to learn and adapt to complex, unpredictable environments. This not only enhances model robustness but also reduces the risk associated with deploying AI in safety-critical applications. The flexibility to generate diverse data types, including images, text, audio, video, and tabular data, further expands the applicability of synthetic data solutions, making them indispensable tools for innovation and competitive advantage.




    The synthetic training data market is also experiencing rapid growth due to the heightened focus on data privacy and regulatory compliance. As data protection regulations become more stringent worldwide, organizations face increasing challenges in accessing and utilizing real-world data for AI training without violating user privacy. Synthetic data addresses this challenge by creating realistic yet entirely artificial datasets that preserve the statistical properties of original data without exposing sensitive information. This capability is particularly valuable for industries such as BFSI, healthcare, and government, where data sensitivity and compliance requirements are paramount. As a result, the adoption of synthetic training data is expected to accelerate further as organizations seek to balance innovation with ethical and legal responsibilities.




    From a regional perspective, North America currently leads the synthetic training data market, driven by the presence of major technology companies, robust R&D investments, and early adoption of AI technologies. However, the Asia Pacific region is anticipated to witness the highest growth rate during the forecast period, fueled by expanding AI initiatives, government support, and the rapid digital transformation of industries. Europe is also emerging as a key market, particularly in sectors where data privacy and regulatory compliance are critical. Latin America and the Middle East & Africa are gradually increasing their market share as awareness and adoption of synthetic data solutions grow. Overall, the global landscape is characterized by dynamic regional trends, with each region contributing uniquely to the market's expansion.



    The introduction of a Synthetic Data Generation Engine has revolutionized the way organizations approach data creation and management. This engine leverages cutting-edge algorithms to produce high-quality synthetic datasets that mirror real-world data without compromising privacy.

  19. Data from: Domain-adaptive Data Synthesis for Large-scale Supermarket Product Recognition

    • data.niaid.nih.gov
    • zenodo.org
    Updated Apr 5, 2024
    Cite
    Strohmayer, Julian; Kampel, Martin (2024). Domain-adaptive Data Synthesis for Large-scale Supermarket Product Recognition [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7750241
    Explore at:
    Dataset updated
    Apr 5, 2024
    Dataset provided by
    TU Wien, Computer Vision Lab
    Authors
    Strohmayer, Julian; Kampel, Martin
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Domain-Adaptive Data Synthesis for Large-Scale Supermarket Product Recognition

    This repository contains the data synthesis pipeline and synthetic product recognition datasets proposed in [1].

    Data Synthesis Pipeline:

    We provide the Blender 3.1 project files and Python source code of our data synthesis pipeline (pipeline.zip), accompanied by the FastCUT models used for synthetic-to-real domain translation (models.zip). For the synthesis of new shelf images, a product assortment list and product images must be provided in the corresponding directories products/assortment/ and products/img/. The pipeline expects product images to follow the naming convention c.png, with c corresponding to a GTIN or generic class label (e.g., 9120050882171.png). The assortment list, assortment.csv, is expected to use the sample format [c, w, d, h], with c being the class label and w, d, and h being the packaging dimensions of the given product in mm (e.g., [4004218143128, 140, 70, 160]). The assortment list to use and the number of images to generate can be specified in generateImages.py (see comments). The rendering process is initiated either by executing load.py from within Blender or as a background process from a command-line terminal.
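    A minimal sketch of reading assortment.csv in the [c, w, d, h] format described above and checking each product image against the c.png naming convention; the helper name and returned dictionary layout are illustrative, not part of the pipeline's actual API.

```python
import csv
import os

def load_assortment(csv_path, img_dir):
    """Parse rows [c, w, d, h] (class label plus packaging width, depth
    and height in mm) and check that the product image c.png exists."""
    products = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            c = row[0].strip()
            w_mm, d_mm, h_mm = (int(v) for v in row[1:4])
            products.append({
                "class": c,
                "w_mm": w_mm, "d_mm": d_mm, "h_mm": h_mm,
                "has_image": os.path.isfile(os.path.join(img_dir, f"{c}.png")),
            })
    return products
```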

    Datasets:

    SG3k - Synthetic GroZi-3.2k (SG3k) dataset, consisting of 10,000 synthetic shelf images with 851,801 instances of 3,234 GroZi-3.2k products. Instance-level bounding boxes and generic class labels are provided for all product instances.

    SG3kt - Domain-translated version of SG3k, utilizing GroZi-3.2k as the target domain. Instance-level bounding boxes and generic class labels are provided for all product instances.

    SGI3k - Synthetic GroZi-3.2k (SGI3k) dataset, consisting of 10,000 synthetic shelf images with 838,696 instances of 1,063 GroZi-3.2k products. Instance-level bounding boxes and generic class labels are provided for all product instances.

    SGI3kt - Domain-translated version of SGI3k, utilizing GroZi-3.2k as the target domain. Instance-level bounding boxes and generic class labels are provided for all product instances.

    SPS8k - Synthetic Product Shelves 8k (SPS8k) dataset, comprising 16,224 synthetic shelf images with 1,981,967 instances of 8,112 supermarket products. Instance-level bounding boxes and GTIN class labels are provided for all product instances.

    SPS8kt - Domain-translated version of SPS8k, utilizing SKU110k as the target domain. Instance-level bounding boxes and GTIN class labels are provided for all product instances.

    Table 1: Dataset characteristics.

    Dataset  images   products  instances  labels                         translation
    SG3k     10,000   3,234       851,801  bounding box & generic class¹  none
    SG3kt    10,000   3,234       851,801  bounding box & generic class¹  GroZi-3.2k
    SGI3k    10,000   1,063       838,696  bounding box & generic class²  none
    SGI3kt   10,000   1,063       838,696  bounding box & generic class²  GroZi-3.2k
    SPS8k    16,224   8,112     1,981,967  bounding box & GTIN            none
    SPS8kt   16,224   8,112     1,981,967  bounding box & GTIN            SKU110k

    Sample Format

    A sample consists of an RGB image (i.png) and an accompanying label file (i.txt), which contains the labels for all product instances present in the image. Labels use the YOLO format [c, x, y, w, h].

    ¹SG3k and SG3kt use generic pseudo-GTIN class labels, created by combining the GroZi-3.2k food product category number i (1-27) with the product image index j (j.jpg), following the convention i0000j (e.g., 13000097).

    ²SGI3k and SGI3kt use the generic GroZi-3.2k class labels from https://arxiv.org/abs/2003.06800.
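    The label conventions above can be made concrete with two small helpers: one building a pseudo-GTIN following the i0000j convention, and one parsing a YOLO label line [c, x, y, w, h]. Both are illustrative sketches, not utilities shipped with the datasets.

```python
def pseudo_gtin(category, image_index):
    """SG3k/SG3kt pseudo-GTIN: category number i (1-27) joined to the
    product image index j by four zeros, i.e. the i0000j convention."""
    return f"{category}0000{image_index}"

def parse_yolo_label(line):
    """Parse one YOLO label line 'c x y w h' into (class, four floats)."""
    c, x, y, w, h = line.split()
    return c, float(x), float(y), float(w), float(h)
```

    For example, category 13 and image index 97 yield the pseudo-GTIN 13000097, matching the example given above.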

    Download and Use

    This data may be used for non-commercial research purposes only. If you publish material based on this data, we request that you include a reference to our paper [1].

    [1] Strohmayer, Julian, and Martin Kampel. "Domain-Adaptive Data Synthesis for Large-Scale Supermarket Product Recognition." International Conference on Computer Analysis of Images and Patterns. Cham: Springer Nature Switzerland, 2023.

    BibTeX citation:

    @inproceedings{strohmayer2023domain,
      title={Domain-Adaptive Data Synthesis for Large-Scale Supermarket Product Recognition},
      author={Strohmayer, Julian and Kampel, Martin},
      booktitle={International Conference on Computer Analysis of Images and Patterns},
      pages={239--250},
      year={2023},
      organization={Springer}
    }

  20. Synthetic Evaluation Data Generation Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 3, 2025
    Cite
    Growth Market Reports (2025). Synthetic Evaluation Data Generation Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/synthetic-evaluation-data-generation-market
    Explore at:
    Available download formats: csv, pdf, pptx
    Dataset updated
    Oct 3, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Synthetic Evaluation Data Generation Market Outlook



    According to our latest research, the synthetic evaluation data generation market size reached USD 1.4 billion globally in 2024, reflecting robust growth driven by the increasing need for high-quality, privacy-compliant data in AI and machine learning applications. The market is projected to grow at a remarkable CAGR of 32.8% from 2025 to 2033, reaching a forecasted value of USD 17.7 billion by the end of 2033. This surge is primarily attributed to the escalating adoption of AI-driven solutions across industries, stringent data privacy regulations, and the critical demand for diverse, scalable, and bias-free datasets for model training and validation.




    One of the primary growth factors propelling the synthetic evaluation data generation market is the rapid acceleration of artificial intelligence and machine learning deployments across various sectors such as healthcare, finance, automotive, and retail. As organizations strive to enhance the accuracy and reliability of their AI models, the need for diverse and unbiased datasets has become paramount. However, accessing large volumes of real-world data is often hindered by privacy concerns, data scarcity, and regulatory constraints. Synthetic data generation bridges this gap by enabling the creation of realistic, scalable, and customizable datasets that mimic real-world scenarios without exposing sensitive information. This capability not only accelerates the development and validation of AI systems but also ensures compliance with data protection regulations such as GDPR and HIPAA, making it an indispensable tool for modern enterprises.




    Another significant driver for the synthetic evaluation data generation market is the growing emphasis on data privacy and security. With increasing incidents of data breaches and the rising cost of non-compliance, organizations are actively seeking solutions that allow them to leverage data for training and testing AI models without compromising confidentiality. Synthetic data generation provides a viable alternative by producing datasets that retain the statistical properties and utility of original data while eliminating direct identifiers and sensitive attributes. This allows companies to innovate rapidly, collaborate more openly, and share data across borders without legal impediments. Furthermore, the use of synthetic data supports advanced use cases such as adversarial testing, rare event simulation, and stress testing, further expanding its applicability across verticals.




    The synthetic evaluation data generation market is also experiencing growth due to advancements in generative AI technologies, including Generative Adversarial Networks (GANs) and large language models. These technologies have significantly improved the fidelity, diversity, and utility of synthetic datasets, making them nearly indistinguishable from real data in many applications. The ability to generate synthetic text, images, audio, video, and tabular data has opened new avenues for innovation in model training, testing, and validation. Additionally, the integration of synthetic data generation tools into cloud-based platforms and machine learning pipelines has simplified adoption for organizations of all sizes, further accelerating market growth.




    From a regional perspective, North America continues to dominate the synthetic evaluation data generation market, accounting for the largest share in 2024. This is largely due to the presence of leading technology vendors, early adoption of AI technologies, and a strong focus on data privacy and regulatory compliance. Europe follows closely, driven by stringent data protection laws and increased investment in AI research and development. The Asia Pacific region is expected to witness the fastest growth during the forecast period, fueled by rapid digital transformation, expanding AI ecosystems, and increasing government initiatives to promote data-driven innovation. Latin America and the Middle East & Africa are also emerging as promising markets, albeit at a slower pace, as organizations in these regions begin to recognize the value of synthetic data for AI and analytics applications.




CIFAKE: Real and AI-Generated Synthetic Images

Can Computer Vision detect when images have been generated by AI?



References

If you use this dataset, you must cite the following sources

Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.

Bird, J.J. and Lotfi, A., 2024. CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. IEEE Access.

The real images are from Krizhevsky & Hinton (2009); the fake images were generated for Bird & Lotfi (2024), published in IEEE Access.
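As a concrete illustration of the question the dataset poses, here is a minimal, hypothetical baseline sketch: a logistic-regression REAL-vs-FAKE detector trained on flattened 32x32 RGB arrays. Random arrays stand in for the actual CIFAKE images, which in practice would be loaded from the Kaggle train and test folders; the sizes, labels, and training loop below are assumptions for illustration, not the method of Bird & Lotfi (2024).

```python
import numpy as np

# Surrogate data: random arrays in place of 32x32 RGB CIFAKE images.
rng = np.random.default_rng(0)
n, d = 200, 32 * 32 * 3                    # 200 surrogate images, flattened
X = rng.normal(size=(n, d))
y = (rng.random(n) > 0.5).astype(float)    # 0.0 = REAL, 1.0 = FAKE

# Plain gradient descent on the logistic log-loss.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(FAKE)
    w -= lr * (X.T @ (p - y) / n)
    b -= lr * float((p - y).mean())

# Threshold the fitted probabilities to get hard REAL/FAKE predictions.
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
train_acc = float((preds == y).mean())
print(f"training accuracy on surrogate data: {train_acc:.2f}")
```

On the real dataset, such a linear baseline would be the natural starting point before the convolutional models studied in the paper; the surrogate run here only demonstrates the evaluation framing.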

Notes

The updates to the dataset on 28 March 2023 did not alter the image content; the ".jpeg" file extensions were renamed to ".jpg" and the root folder was restructured to meet Kaggle's usability requirements.

License

This dataset is published under the same MIT license as CIFAR-10:

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
