36 datasets found
  1. Packages Object Detection Dataset - augmented-v1

    • public.roboflow.com
    zip
    Updated Jan 14, 2021
    + more versions
    Cite
    Roboflow Community (2021). Packages Object Detection Dataset - augmented-v1 [Dataset]. https://public.roboflow.com/object-detection/packages-dataset/5
    Available download formats: zip
    Dataset updated
    Jan 14, 2021
    Dataset provided by
    Roboflow (https://roboflow.com/)
    Authors
    Roboflow Community
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Bounding Boxes of packages
    Description

    About This Dataset

    The Roboflow Packages dataset is a collection of packages located at the doors of various apartments and homes. Packages are flat envelopes, small boxes, and large boxes. Some images contain multiple annotated packages.

    Usage

    This dataset may be used as a good starter dataset to track and identify when a package has been delivered to a home. Perhaps you want to know when a package arrives to claim it quickly or prevent package theft.

    If you plan to use this dataset and adapt it to your own front door, it is recommended that you capture and add images from the context of your specific camera position. You can easily add images to this dataset via the web UI or via the Roboflow Upload API.
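    A call to the Upload API can be sketched as follows. This is a minimal illustration in Python, assuming the conventional api.roboflow.com upload endpoint; the project slug, API key, and split values are placeholders rather than details taken from this listing, so check Roboflow's API documentation before relying on it.

```python
# Hedged sketch: pushing one local image into a Roboflow project so it can
# be labeled alongside the existing Packages data. The endpoint shape, slug,
# and key below are illustrative assumptions, not values from this listing.

def build_upload_request(project_slug: str, api_key: str, split: str = "train"):
    """Return the URL and query parameters for a single image upload."""
    url = f"https://api.roboflow.com/dataset/{project_slug}/upload"
    return url, {"api_key": api_key, "split": split}

def upload_image(image_path: str, project_slug: str, api_key: str,
                 split: str = "train") -> dict:
    """POST the image file and return the API's JSON response."""
    import requests  # third-party; imported lazily, only needed for real uploads
    url, params = build_upload_request(project_slug, api_key, split)
    with open(image_path, "rb") as f:
        resp = requests.post(url, params=params, files={"file": f})
    resp.raise_for_status()
    return resp.json()
```

    For example, upload_image("porch.jpg", "packages-dataset", "YOUR_KEY") would add one front-door photo to the training split, after which it can be annotated in the web UI.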

    About Roboflow

    Roboflow enables teams to build better computer vision models faster. We provide tools for image collection, organization, labeling, preprocessing, augmentation, training, and deployment. Developers reduce boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.


  2. Vehicle Detection Image Dataset

    • kaggle.com
    zip
    Updated Apr 9, 2024
    Cite
    Parisa Karimi Darabi (2024). Vehicle Detection Image Dataset [Dataset]. https://www.kaggle.com/datasets/pkdarabi/vehicle-detection-image-dataset
    Available download formats: zip (274761684 bytes)
    Dataset updated
    Apr 9, 2024
    Authors
    Parisa Karimi Darabi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Vehicle Detection Image Dataset

    Introduction

    Welcome to the Vehicle Detection Image Dataset! This dataset is meticulously curated for object detection and tracking tasks, with a specific focus on vehicle detection. It serves as a valuable resource for researchers, developers, and enthusiasts seeking to advance the capabilities of computer vision systems.

    Objective

    The primary aim of this dataset is to facilitate precise object detection tasks, particularly in identifying and tracking vehicles within images. Whether you are engaged in academic research, developing commercial applications, or exploring the frontiers of computer vision, this dataset provides a solid foundation for your projects.

    Preprocessing and Augmentation

    Both versions of the dataset undergo essential preprocessing steps, including resizing and orientation adjustments. Additionally, the Apply_Grayscale version undergoes augmentation to introduce grayscale variations, thereby enriching the dataset and improving model robustness.
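    The two preprocessing steps named above can be illustrated in miniature. The sketch below uses plain Python lists, a nearest-neighbour resize, and the standard BT.601 luma weights; it is not the dataset's actual (unspecified) pipeline, which would normally rely on a library such as Pillow or OpenCV.

```python
# Illustrative sketch of the two preprocessing steps described above:
# resizing (nearest-neighbour) and grayscale conversion, on a tiny RGB
# image stored as nested lists of (r, g, b) tuples.

def to_grayscale(img):
    """ITU-R BT.601 luma: 0.299 R + 0.587 G + 0.114 B, per pixel."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in img]

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D grid to (out_h, out_w)."""
    in_h, in_w = len(img), len(img[0])
    return [[img[i * in_h // out_h][j * in_w // out_w]
             for j in range(out_w)] for i in range(out_h)]
```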

    1. Apply_Grayscale

    • This version comprises grayscale images and is further augmented to enhance the diversity of training data.


    2. No_Apply_Grayscale

    • This version includes images without applying grayscale augmentation.


    Data Formats

    To ensure compatibility with a wide range of object detection frameworks and tools, each version of the dataset is available in multiple formats:

    1. COCO
    2. YOLOv8
    3. YOLOv9
    4. TensorFlow

    These formats facilitate seamless integration into various machine learning frameworks and libraries, empowering users to leverage their preferred development environments.
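    As a concrete illustration of how these formats differ at the file level: YOLO-style exports store one text line per box (class id plus normalised center/size coordinates), while COCO uses a JSON file with absolute [x, y, width, height] boxes. The helper below is a generic sketch, not code from this dataset, converting one YOLO label line to pixel-space corner coordinates.

```python
# Generic sketch: parse one YOLO-format label line ("class cx cy w h",
# coordinates normalised to [0, 1]) into pixel-space (x1, y1, x2, y2).

def yolo_line_to_xyxy(line: str, img_w: int, img_h: int):
    cls, cx, cy, w, h = line.split()
    cx, w = float(cx) * img_w, float(w) * img_w
    cy, h = float(cy) * img_h, float(h) * img_h
    return int(cls), (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```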

    Real-Time Object Detection

    In addition to image datasets, we also provide a video for real-time object detection evaluation. This video allows users to test the performance of their models in real-world scenarios, providing invaluable insights into the effectiveness of their detection algorithms.

    Getting Started

    To begin exploring the Vehicle Detection Image Dataset, simply download the version and format that best suits your project requirements. Whether you are an experienced practitioner or just embarking on your journey in computer vision, this dataset offers a valuable resource for advancing your understanding and capabilities in object detection and tracking tasks.

    Citation

    If you utilize this dataset in your work, we kindly request that you cite the following:

    Parisa Karimi Darabi. (2024). Vehicle Detection Image Dataset: Suitable for Object Detection and tracking Tasks. Retrieved from https://www.kaggle.com/datasets/pkdarabi/vehicle-detection-image-dataset/

    Feedback and Contributions

    I welcome feedback and contributions from the Kaggle community to continually enhance the quality and usability of this dataset. Please feel free to reach out if you have suggestions, questions, or additional data and annotations to contribute. Together, we can drive innovation and progress in computer vision.

  3. Summary of adversarial losses.

    • plos.figshare.com
    xls
    Updated Jun 24, 2025
    Cite
    Duway Nicolas Lesmes-Leon; Andreas Dengel; Sheraz Ahmed (2025). Summary of adversarial losses. [Dataset]. http://doi.org/10.1371/journal.pone.0291217.t002
    Available download formats: xls
    Dataset updated
    Jun 24, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Duway Nicolas Lesmes-Leon; Andreas Dengel; Sheraz Ahmed
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Cell microscopy is the main tool that allows researchers to study microorganisms and plays a key role in observing and understanding the morphology, interactions, and development of microorganisms. However, there exist limitations in both the techniques and the samples that impair the amount of available data to study. Generative adversarial networks (GANs) are a deep learning alternative to alleviate the data availability limitation by generating nonexistent samples that resemble the probability distribution of the real data. The aim of this systematic review is to find trends, common practices, and popular datasets and analyze the impact of GANs in image augmentation of cell microscopy images. We used ScienceDirect, IEEE Xplore, PubMed, bioRxiv, and arXiv to select English research articles that employed GANs to generate any kind of cell microscopy images, independently of the main objective of the study. We conducted the data collection using 15 selected features from each study, which allowed us to analyze the results from different perspectives using tables and histograms. 46 studies met the eligibility criteria, of which 23 had image augmentation as the main task. Moreover, we retrieved 29 publicly available datasets. The results showed a lack of consensus on performance metrics, baselines, and datasets. Additionally, we evidenced the relevance of popular architectures such as StyleGAN and losses, including Vanilla and Wasserstein adversarial losses. This systematic review presents the most popular configurations to perform image augmentation. It also highlights the importance of good design practices and gold standards to guarantee comparability and reproducibility. This review implemented the ROBIS tool to assess the risk of bias, and it was not registered in PROSPERO.

  4. COVID-19 Chest CT image Augmentation GAN Dataset

    • kaggle.com
    zip
    Updated Jan 31, 2021
    Cite
    Mohamed Loey (2021). COVID-19 Chest CT image Augmentation GAN Dataset [Dataset]. https://www.kaggle.com/mloey1/covid19-chest-ct-image-augmentation-gan-dataset
    Available download formats: zip (1914822990 bytes)
    Dataset updated
    Jan 31, 2021
    Authors
    Mohamed Loey
    License

    Database Contents License (DbCL) v1.0: http://opendatacommons.org/licenses/dbcl/1.0/

    Description

    Note: please do not claim diagnostic performance of a model without a clinical study! This is not a Kaggle competition dataset. Please read our paper: Loey, M., Manogaran, G. & Khalifa, N.E.M. A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images. Neural Comput & Applic (2020). https://doi.org/10.1007/s00521-020-05437-x

    Khalifa, N.E.M., Smarandache, F., Manogaran, G. et al. A Study of the Neutrosophic Set Significance on Deep Transfer Learning Models: an Experimental Case on a Limited COVID-19 Chest X-ray Dataset. Cogn Comput (2021). https://doi.org/10.1007/s12559-020-09802-9

    Abstract

    Coronavirus disease 2019 (COVID-19) is a rapidly transmissible disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Detecting COVID-19 with artificial intelligence techniques, and especially deep learning, will help identify the virus in its early stages, increasing the chances of fast recovery for patients worldwide and relieving pressure on healthcare systems. In this research, classical data augmentation techniques along with Conditional Generative Adversarial Nets (CGAN), based on a deep transfer learning model for COVID-19 detection in chest CT scan images, are presented. The limited benchmark datasets for COVID-19, especially chest CT images, are the main motivation of this research. The main idea is to collect all the COVID-19 images available at the time of writing and use classical data augmentation along with CGAN to generate more images to help in the detection of COVID-19. In this study, five deep convolutional neural network-based models (AlexNet, VGGNet16, VGGNet19, GoogleNet, and ResNet50) were selected to detect coronavirus-infected patients from chest CT radiography digital images. Classical data augmentation along with CGAN improved classification performance in all selected deep transfer models. The outcomes show that ResNet50 is the most appropriate deep learning model to detect COVID-19 from the limited chest CT dataset using classical data augmentation, with a testing accuracy of 82.91%, sensitivity of 77.66%, and specificity of 87.62%.

    Context

    In this dataset, we introduce DTL models to classify limited COVID-19 chest CT scan digital images. To adapt chest CT images as input to the DCNN, we enriched the medical chest CT images using classical data augmentation and CGAN to generate more CT images. After that, a classifier is used to ensemble the (COVID/NonCOVID) outputs of the classification models. The proposed DTL models were evaluated on the COVID-19 CT scan images dataset. The novelty of this research is as follows: (1) The introduced DTL models have an end-to-end structure without classical feature extraction and selection methods. (2) We show that data augmentation and a conditional generative adversarial network (CGAN) are effective techniques to generate CT images. (3) Chest CT images are one of the best tools for the classification of COVID-19. (4) The DTL models have been shown to yield very high accuracy on the limited COVID-19 dataset.

    Content

    There are 742 CT images and 2 categories (COVID/NonCOVID).

    Dataset               | Train          | Validation     | Test
                          | COVID NonCOVID | COVID NonCOVID | COVID NonCOVID
    COVID-19              |   191      234 |    60       58 |    94      105
    COVID-19 + Aug        |  2292     2808 |   720      696 |    94      105
    COVID-19 + CGAN       |  2191     2234 |   210      208 |    94      105
    COVID-19 + Aug + CGAN |  4292     4808 |   870      846 |    94      105

    Acknowledgements

    Cite our papers:

    Loey, M., Manogaran, G. & Khalifa, N.E.M. A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images. Neural Comput & Applic (2020). https://doi.org/10.1007/s00521-020-05437-x

    Loey, Mohamed; Smarandache, Florentin; M. Khalifa, Nour E. 2020. "Within the Lack of Chest COVID-19 X-ray Dataset: A Novel Detection Model Based on GAN and Deep Transfer Learning" Symmetry 12, no. 4: 651. https://doi.org/10.3390/sym12040651

    Khalifa, N.E.M., Smarandache, F., Manogaran, G. et al. A Study of the Neutrosophic Set Significance on Deep Transfer Learning Models: an Experimental Case on a Limited COVID-19 Chest X-ray Dataset. Cogn Comput (2021). https://doi.org/10.1007/s12559-020-09802-9

    Inspiration

    Original Dataset: https://github.com/UCSD-AI4H/COVID-CT

    Creating the proposed database present...

  5. Summary of the studies that met the eligibility criteria. Publicly available...

    • figshare.com
    xls
    Updated Jun 24, 2025
    Cite
    Duway Nicolas Lesmes-Leon; Andreas Dengel; Sheraz Ahmed (2025). Summary of the studies that met the eligibility criteria. Publicly available datasets are highlighted with bold text. [Dataset]. http://doi.org/10.1371/journal.pone.0291217.t003
    Available download formats: xls
    Dataset updated
    Jun 24, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Duway Nicolas Lesmes-Leon; Andreas Dengel; Sheraz Ahmed
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Summary of the studies that met the eligibility criteria. Publicly available datasets are highlighted with bold text.

  6. Cell microscopy datasets used in the studies meeting the eligibility...

    • plos.figshare.com
    xls
    Updated Jun 24, 2025
    Cite
    Duway Nicolas Lesmes-Leon; Andreas Dengel; Sheraz Ahmed (2025). Cell microscopy datasets used in the studies meeting the eligibility criteria. [Dataset]. http://doi.org/10.1371/journal.pone.0291217.t004
    Available download formats: xls
    Dataset updated
    Jun 24, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Duway Nicolas Lesmes-Leon; Andreas Dengel; Sheraz Ahmed
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Cell microscopy datasets used in the studies meeting the eligibility criteria.

  7. AI-powered Image Enhancer and Upscaler Tool Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated Mar 11, 2025
    + more versions
    Cite
    Archive Market Research (2025). AI-powered Image Enhancer and Upscaler Tool Report [Dataset]. https://www.archivemarketresearch.com/reports/ai-powered-image-enhancer-and-upscaler-tool-55817
    Available download formats: pdf, ppt, doc
    Dataset updated
    Mar 11, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The AI-powered image enhancer and upscaler tool market is experiencing robust growth, driven by increasing demand for high-quality images across various sectors. The market, estimated at $2 billion in 2025, is projected to grow at a Compound Annual Growth Rate (CAGR) of 25% from 2025 to 2033. This significant expansion is fueled by several key factors. The rising adoption of AI in image editing, coupled with advancements in deep learning algorithms, allows for superior image enhancement capabilities previously unavailable. This translates to better quality visuals for social media, e-commerce platforms, marketing materials, and professional photography, significantly impacting market demand. Furthermore, the increasing accessibility of cloud-based solutions and the proliferation of affordable AI-powered tools are democratizing access to these technologies, broadening the user base and driving market expansion. The segment breakdown reveals that cloud-based solutions are gaining traction over on-premises options due to cost-effectiveness and scalability. Within applications, the enterprise segment contributes more significantly to the overall revenue compared to the personal segment due to higher budgets and larger-scale image processing needs. The competitive landscape is highly fragmented, with numerous players offering a variety of features and pricing models. While established players like Adobe and Canva are leveraging their existing user base and integrating AI enhancement capabilities into their platforms, numerous specialized AI-powered image enhancement startups are rapidly emerging and innovating. This competitive environment fosters innovation and contributes to the overall market growth. Geographical distribution reveals that North America and Europe currently dominate the market, owing to early adoption of AI technologies and robust digital infrastructure. 
However, rapid growth is anticipated in Asia Pacific, driven by rising internet penetration and increasing adoption of smartphone technology. The market faces certain challenges, primarily data privacy concerns surrounding AI-powered image processing and the need for robust algorithms that handle diverse image types effectively.

    This report provides a comprehensive analysis of the burgeoning AI-powered image enhancer and upscaler tool market, projecting a valuation exceeding several hundred million dollars within the next few years. It delves into the technology's concentration, innovation characteristics, market segmentation, regional trends, and the competitive landscape, identifying key growth catalysts and challenges.

  8. AI-Based Image Analysis Market Analysis, Size, and Forecast 2025-2029: North...

    • technavio.com
    pdf
    Updated Aug 21, 2025
    Cite
    Technavio (2025). AI-Based Image Analysis Market Analysis, Size, and Forecast 2025-2029: North America (US and Canada), Europe (France, Germany, Italy, and UK), APAC (China, India, and Japan), South America (Brazil), and Rest of World (ROW) [Dataset]. https://www.technavio.com/report/ai-based-image-analysis-market-industry
    Available download formats: pdf
    Dataset updated
    Aug 21, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    License

    https://www.technavio.com/content/privacy-notice

    Time period covered
    2025 - 2029
    Area covered
    United States
    Description


    AI-Based Image Analysis Market Size 2025-2029

    The AI-based image analysis market is expected to grow by USD 12.52 billion, at a CAGR of 19.7%, from 2024 to 2029. The proliferation of advanced deep learning architectures and multimodal AI will drive the market.

    Major Market Trends & Insights

    North America dominated the market and accounted for a 34% growth during the forecast period.
    By Component - Hardware segment was valued at USD 2.4 billion in 2023
    By Technology - Facial recognition segment accounted for the largest market revenue share in 2023
    

    Market Size & Forecast

    Market Opportunities: USD 310.06 million
    Market Future Opportunities: USD 12518.80 million
    CAGR from 2024 to 2029 : 19.7%
    

    Market Summary

    The market is experiencing significant growth, with recent estimates suggesting it will surpass USD 15.5 billion by 2025. This expansion is driven by the proliferation of advanced deep learning architectures and multimodal AI, which are revolutionizing diagnostics and patient care through advanced medical imaging. These technologies enable more accurate and efficient analysis of medical images, reducing the need for human intervention and improving overall patient outcomes. However, the market faces challenges, including stringent data privacy regulations and growing security concerns. Ensuring patient data remains secure and confidential is a top priority, necessitating robust data protection measures. Despite these challenges, the future of AI-based image analysis is bright, with applications extending beyond healthcare to industries such as retail, manufacturing, and agriculture. As AI continues to evolve, it will enable more precise and automated image analysis, leading to improved decision-making and increased operational efficiency.

    What will be the Size of the AI-Based Image Analysis Market during the forecast period?


    How is the AI-Based Image Analysis Market Segmented?

    The AI-based image analysis industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in USD million for the period 2025-2029, as well as historical data from 2019-2023, for the following segments:

    Component: Hardware, Software, Services
    Technology: Facial recognition, Object recognition, Code recognition, Optical character recognition, Pattern recognition
    Application: Scanning and imaging, Security and surveillance, Image search, Augmented reality, Marketing and advertising
    End-user: BFSI, Media and entertainment, Retail and e-commerce, Healthcare, Others
    Geography: North America (US, Canada), Europe (France, Germany, Italy, UK), APAC (China, India, Japan), South America (Brazil), Rest of World (ROW)

    By Component Insights

    The hardware segment is estimated to witness significant growth during the forecast period.

    The market is witnessing significant growth, driven by the increasing demand for automated image processing and analysis in various industries. This market encompasses a range of advanced techniques, including image segmentation, feature extraction, and classification methods, which are integral to applications such as defect detection systems, medical image analysis, and satellite imagery processing. Deep learning models, particularly convolutional neural networks, are at the forefront of this innovation, enabling real-time processing, high accuracy, and scalable architectures.

    GPU computing plays a crucial role in the market, with NVIDIA Corporation leading the charge. GPUs, known for their parallel processing capabilities, are ideal for training large, complex neural networks on extensive datasets. For instance, GPUs can process thousands of images simultaneously, leading to substantial time savings and improved efficiency. Furthermore, the integration of cloud computing platforms and API integrations facilitates easy access to AI-based image analysis services, while data annotation tools and data augmentation strategies enhance model training pipelines.

    Precision and recall, F1-score evaluation, and other accuracy metrics are essential for assessing model performance. Object detection algorithms, instance segmentation, and semantic segmentation are key techniques used in image analysis, while transfer learning approaches and pattern recognition systems facilitate the adoption of AI in new applications. Additionally, image enhancement algorithms, noise reduction techniques, and edge computing deployment are crucial for optimizing performance and reducing latency.

    According to recent market research, the market is projected to grow at a compound annual growth rate of 25.2% between 2021 and 2028, reaching a value of USD 33.5 billion by 2028. This growth is fueled by ongoing advancements in GPU computing, deep learning models, and computer vision systems, as well as the increasing adoption of AI in various industries.
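    The precision, recall, and F1-score metrics mentioned above reduce to simple arithmetic on detection counts; a generic sketch, not tied to any vendor's tooling:

```python
# Generic detection metrics from true-positive, false-positive, and
# false-negative counts. Guards avoid division by zero for empty classes.

def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```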


  9. AI Photo Enhancement Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Sep 1, 2025
    Cite
    Growth Market Reports (2025). AI Photo Enhancement Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/ai-photo-enhancement-market
    Available download formats: pptx, csv, pdf
    Dataset updated
    Sep 1, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    AI Photo Enhancement Market Outlook



    According to our latest research, the global AI photo enhancement market size reached USD 1.35 billion in 2024, demonstrating robust momentum across diverse industries. The market is poised for significant growth, expected to achieve a value of USD 6.41 billion by 2033, expanding at a remarkable CAGR of 18.9% during the forecast period from 2025 to 2033. This growth trajectory is fueled by the increasing adoption of AI-driven image processing solutions, rising demand for high-quality digital content, and the proliferation of advanced imaging technologies across commercial and industrial applications. The market's expansion is underpinned by continuous innovation in AI algorithms, greater accessibility of cloud-based services, and the integration of photo enhancement capabilities with emerging technologies such as augmented reality and machine learning.
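    These headline figures can be sanity-checked with the standard CAGR formula, end = start * (1 + r)^years; the quoted 18.9% does indeed take USD 1.35 billion in 2024 to roughly USD 6.41 billion in 2033 (nine compounding years):

```python
# Standard compound annual growth rate; used here only to sanity-check the
# report's own numbers (USD 1.35B in 2024 -> USD 6.41B in 2033).

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Annualised rate r such that start * (1 + r)**years == end."""
    return (end_value / start_value) ** (1 / years) - 1

rate = cagr(1.35, 6.41, 9)  # about 0.189, i.e. ~18.9% per year
```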




    One of the primary growth factors propelling the AI photo enhancement market is the exponential rise in digital content consumption across various platforms. As consumers and businesses increasingly rely on visually compelling content for communication, marketing, and entertainment, the need for superior image quality has become paramount. This trend is particularly evident in sectors such as social media, e-commerce, and advertising, where enhanced photos can significantly influence engagement and conversion rates. AI-powered photo enhancement tools enable users to automatically correct lighting, color balance, sharpness, and remove imperfections with unprecedented accuracy and speed, reducing the reliance on manual editing and democratizing access to professional-grade image enhancement.




    Another significant driver is the integration of AI photo enhancement solutions with smartphones, digital cameras, and cloud-based image editing platforms. Leading device manufacturers are embedding AI-powered features directly into hardware, allowing users to capture and process high-quality images in real time. Furthermore, the proliferation of affordable and user-friendly AI software has made advanced photo editing accessible to a broader audience, including amateur photographers and small businesses. The ongoing advancements in deep learning, neural networks, and computer vision are further enhancing the capabilities of these solutions, enabling more sophisticated image restoration, upscaling, and creative editing functionalities.




    The commercial and industrial sectors are also contributing to the market's expansion by leveraging AI photo enhancement for applications such as real estate listings, product photography, medical imaging, and surveillance. In e-commerce, for instance, enhanced product images can drive higher sales by providing customers with clearer and more appealing visuals. In real estate, AI-enhanced photos help agents showcase properties in the best light, increasing the likelihood of successful transactions. Additionally, the integration of AI with cloud computing has enabled scalable, collaborative, and cost-effective photo enhancement workflows, making it easier for organizations to manage large volumes of images efficiently.



    The advent of Thumbnail Selection AI is revolutionizing how users interact with digital content, particularly in the realm of AI photo enhancement. This technology leverages sophisticated algorithms to automatically select the most visually appealing and contextually relevant thumbnails from a set of images. By doing so, it enhances user engagement and ensures that the first impression of digital content is impactful. As the demand for high-quality visuals continues to grow, Thumbnail Selection AI is becoming an indispensable tool for content creators, marketers, and businesses aiming to optimize their visual strategies. Its integration with existing AI photo enhancement solutions not only streamlines the content curation process but also elevates the overall aesthetic quality of digital media.




    Regionally, North America dominates the AI photo enhancement market, accounting for the largest share in 2024, followed by Europe and Asia Pacific. The high concentration of technology companies, rapid adoption of AI-driven solutions, and strong presence of digital media and e-commerce platforms are key factors driving the market in North America. Europe is witnessing steady growth, fueled by increasing invest

  10. The model alterations description per epoch during training.

    • plos.figshare.com
    xls
    Updated Jun 21, 2023
    Cite
    Amin Tajerian; Mohsen Kazemian; Mohammad Tajerian; Ava Akhavan Malayeri (2023). The model alterations description per epoch during training. [Dataset]. http://doi.org/10.1371/journal.pone.0284437.t002
    Available download formats: xls
    Dataset updated
    Jun 21, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Amin Tajerian; Mohsen Kazemian; Mohammad Tajerian; Ava Akhavan Malayeri
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The model alterations description per epoch during training.

  11. Punjabi Shahmukhi Alphabet database (Nastaleeq)

    • kaggle.com
    zip
    Updated Mar 17, 2022
    Cite
    rafique22 (2022). Punjabi Shahmukhi Alphabet database (Nastaleeq) [Dataset]. https://www.kaggle.com/datasets/rafique22/smdb-smharoof/code
    Available download formats: zip (5898821 bytes)
    Dataset updated
    Mar 17, 2022
    Authors
    rafique22
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    The greatest challenge of machine learning problems is selecting suitable techniques and resources, such as tools and datasets. Despite millions of speakers around the globe and a rich literary history of more than a thousand years, computational linguistic work on the Punjabi Shahmukhi script, a member of the Perso-Arabic context-specific-script, low-resource language family, is hard to find. The selection of the best algorithm for a machine learning problem heavily depends on the availability of a dataset for that specific task. We present a novel, custom-built, first-of-its-kind dataset for Punjabi in Shahmukhi script, along with its design, development, and validation process using artificial neural networks. The dataset covers up to 40 classes in multiple fonts, including Nasta'leeq, Naskh, and Arabic Type, at many font sizes, and is presented in several sub-sizes. The dataset has been designed with a special construction process through which researchers can adapt the dataset to their requirements. The dataset construction program can also perform data augmentation to generate millions of images for a machine learning algorithm, with different parameters including font type, size, orientation, and translation. Using this process, a dataset of any language can be constructed. CNNs in different architectures have been implemented, and validation accuracy of up to 99% has been achieved.
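    One of the augmentation parameters listed (translation) is simple to sketch for a binary glyph bitmap. The code below is an illustration of the idea, not the authors' construction program; rotation and font/size variation would be layered on analogously.

```python
# Illustrative sketch: translate a binary glyph bitmap within its canvas,
# one of the augmentations a character-dataset generator can apply.

def translate(bitmap, dy: int, dx: int, fill: int = 0):
    """Shift a 2-D grid by (dy, dx), padding vacated cells with `fill`."""
    h, w = len(bitmap), len(bitmap[0])
    out = [[fill] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            ni, nj = i + dy, j + dx
            if 0 <= ni < h and 0 <= nj < w:
                out[ni][nj] = bitmap[i][j]
    return out
```

    Sweeping dy and dx over a small range multiplies each rendered glyph into many positional variants, which is how a generator of this kind can expand a base font rendering into a much larger training set.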

  12. AI Image Enhancer Tool Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jul 23, 2025
    Cite
    Data Insights Market (2025). AI Image Enhancer Tool Report [Dataset]. https://www.datainsightsmarket.com/reports/ai-image-enhancer-tool-495205
    Explore at:
    pdf, ppt, docAvailable download formats
    Dataset updated
    Jul 23, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The AI Image Enhancer Tool market is booming, projected to reach $681 million by 2025 with a 16.5% CAGR. Discover key drivers, trends, and leading companies shaping this rapidly evolving sector. Explore market analysis, future predictions, and the top AI image enhancement software.

  13. 1QIsaa data collection (binarized images, feature files, and plotting...

    • data-staging.niaid.nih.gov
    • zenodo.org
    • +1more
    Updated Jan 27, 2021
    Cite
    Popović, Mladen; Dhali, Maruf A.; Schomaker, Lambert (2021). 1QIsaa data collection (binarized images, feature files, and plotting scripts) for writer identification test using artificial intelligence and image-based pattern recognition techniques [Dataset]. https://data-staging.niaid.nih.gov/resources?id=zenodo_4469995
    Explore at:
    Dataset updated
    Jan 27, 2021
    Dataset provided by
    University of Groningen
    Authors
    Popović, Mladen; Dhali, Maruf A.; Schomaker, Lambert
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Great Isaiah Scroll (1QIsaa) data set for writer identification

    This data set was collected for the ERC project "The Hands that Wrote the Bible: Digital Palaeography and Scribal Culture of the Dead Sea Scrolls".
    PI: Mladen Popović
    Grant agreement ID: 640497

    Project website: https://cordis.europa.eu/project/id/640497

    Copyright (c) University of Groningen, 2021. All rights reserved. Disclaimer and copyright notice for all data contained in this .tar.gz file:

    1) permission is hereby granted to use the data for research purposes. It is not allowed to distribute this data for commercial purposes.

    2) provider gives no express or implied warranty of any kind, and any implied warranties of merchantability and fitness for purpose are disclaimed.

    3) provider shall not be liable for any direct, indirect, special, incidental, or consequential damages arising out of any use of this data.

    4) the user should refer to the first public article on this data set:

    Popović, M., Dhali, M. A., & Schomaker, L. (2020). Artificial intelligence-based writer identification generates new evidence for the unknown scribes of the Dead Sea Scrolls exemplified by the Great Isaiah Scroll (1QIsaa). arXiv preprint arXiv:2010.14476.

    BibTeX:

    @article{popovic2020artificial,
      title={Artificial intelligence based writer identification generates new evidence for the unknown scribes of the Dead Sea Scrolls exemplified by the Great Isaiah Scroll (1QIsaa)},
      author={Popovi{\'c}, Mladen and Dhali, Maruf A and Schomaker, Lambert},
      journal={arXiv preprint arXiv:2010.14476},
      year={2020}
    }

    5) the recipient should refrain from proliferating the data set to third parties external to his/her local research group. Please refer interested researchers to this site for obtaining their own copy.

    Organisation of the data:

    The .tar.gz file contains three directories: images, features, and plots. The included 'README' file contains all the instructions.

    The 'images' directory contains NetPBM images of the columns of 1QIsaa. The NetPBM format was chosen for its simplicity and because it rules out any lossy compression in the processing chain. There are two images for each of the Great Isaiah Scroll columns: one is the direct binarized output of the BiNet (arxiv.org/abs/1911.07930) system, and the other is a manually cleaned version of that output. The file names for the direct binarized output follow the format '1QIsaa_col.pbm', for example, '1QIsaa_col15.pbm'; for the cleaned version, the format is '1QIsaa_col_cleaned.pbm', for example, '1QIsaa_col15_cleaned.pbm'. Note: the image files are not in a separate subdirectory and will be extracted in the same place, but thanks to the unique naming there is no problem extracting them into one single directory.

    The 'features' directory contains feature files computed for each of the column images. There are two types of feature files: Hinge and Adjoined. They are distinguishable by their extension, for example, '1QIsaa_col15_cleaned.hinge' and '1QIsaa_col15_cleaned.adjoined'. They are also arranged in separate directories for ease of use.

    The 'plots' directory contains a simple Python script that performs PCA on the feature files and visualizes them in a 3D plot. The script takes the location of the feature files as input. The 'README_plot' file contains examples of how to run it in the terminal.
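    The PCA-plus-3D-plot step can be sketched roughly as follows; this is not the shipped script (which lives in the 'plots' directory), and the toy feature matrix stands in for the real Hinge/Adjoined feature files.

    ```python
    import numpy as np

    def pca_3d(features):
        """Project n feature vectors onto their first three principal components."""
        X = features - features.mean(axis=0)           # centre each dimension
        # SVD of the centred data matrix; rows of Vt are principal directions,
        # ordered by decreasing singular value (i.e. explained variance).
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        return X @ Vt[:3].T                            # shape (n, 3)

    # Toy stand-in for the per-column Hinge/Adjoined feature vectors:
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(10, 64))                  # 10 columns, 64-dim features
    coords = pca_3d(feats)
    ```

    In the shipped script, each row would instead be parsed from a '.hinge' or '.adjoined' file, and `coords` would be handed to a 3D scatter plot (e.g. matplotlib's `Axes3D`) to compare the scroll columns visually.
    
    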

    Brief description: According to ImageMagick's 'identify' tool, the original images from the Brill collection are grayscale (.jpg), in '8-bit Gray 256c'. These images pass through multiple preprocessing steps to become suitable for pattern-recognition-based techniques. The first preprocessing step is image binarization. To prevent any classification of the text-column images based on irrelevant background patterns, a specific binarization technique (BiNet) was applied, keeping the original ink traces intact. After binarization, the images were cleaned further by removing the adjacent columns that partially appear on the target columns' images. Finally, a few minor affine transformations and stretching corrections were performed in a restrictive manner. These corrections also serve to align the text where lines are twisted due to degradation of the leather writing surface. Hence, the cleaned images are included in the directory along with the direct binarized images. No effort has been made to obtain a balanced set in any way.
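    BiNet itself is a deep encoder-decoder network available only on request, so it cannot be reproduced here. As a deliberately simple classical stand-in for the binarization step (not the method used for this data set), Otsu's global threshold on an 8-bit grayscale image looks like this; it also illustrates why a learned method was needed, since a global threshold picks up background parchment texture that BiNet learns to ignore.

    ```python
    import numpy as np

    def otsu_binarize(gray):
        """Classical Otsu global threshold on an 8-bit grayscale image.

        Illustrative stand-in only; the data set was binarized with BiNet."""
        hist = np.bincount(gray.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        omega = np.cumsum(p)                        # class-0 probability up to t
        mu = np.cumsum(p * np.arange(256))          # class-0 cumulative mean
        mu_t = mu[-1]                               # global mean intensity
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
        t = int(np.nanargmax(sigma_b2))             # maximise between-class variance
        return (gray > t).astype(np.uint8)          # 1 = brighter class (parchment)

    # Dark ink (~20) on light parchment (~200):
    img = np.full((4, 4), 200, dtype=np.uint8)
    img[1:3, 1:3] = 20
    binary = otsu_binarize(img)
    ```
    
    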

    Tools: Binarization: The BiNet tool is available for scientific use upon request (m.a.dhal(at)rug.nl)

    Image Morphing: In the original article, data augmentation was performed using image morphing. The tool is available on GitHub: https://github.com/GrHound/imagemorph.c

    Features for writer identification: Lambert Schomaker
    http://www.ai.rug.nl/~lambert/allographic-fraglet-codebooks/allographic-fraglet-codebooks.html
    http://www.ai.rug.nl/~lambert/hinge/hinge-transform.html
    1. L. Schomaker & M. Bulacu (2004). Automatic writer identification using connected-component contours and edge-based features of upper-case Western script. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(6), June 2004, pp. 787-798.
    2. M. Bulacu & L. Schomaker (2007). Text-independent writer identification and verification using textural and allographic features. IEEE Transactions on Pattern Analysis and Machine Intelligence, Special Issue - Biometrics: Progress and Directions, 29(4), April 2007, pp. 701-717.

    The features (hinge, fraglets) have been combined in a single MS Windows application, GIWIS, which is available for scientific use upon request (l.r.b.schomaker(at)rug.nl)

    If you have any questions, please contact us: Maruf A. Dhali, Lambert Schomaker, Mladen Popović.

    Please cite our papers if you use this data set:
    1. Popović, M., Dhali, M. A., & Schomaker, L. (2020). Artificial intelligence based writer identification generates new evidence for the unknown scribes of the Dead Sea Scrolls exemplified by the Great Isaiah Scroll (1QIsaa). arXiv preprint arXiv:2010.14476.
    2. Dhali, M. A., de Wit, J. W., & Schomaker, L. (2019). BiNet: Degraded-manuscript binarization in diverse document textures and layouts using deep encoder-decoder networks. arXiv preprint arXiv:1911.07930.

  14. Number of images and training parameters used to train the model.

    • plos.figshare.com
    xls
    Updated Jun 2, 2023
    Cite
    Yuki Kurita; Shiori Meguro; Naoko Tsuyama; Isao Kosugi; Yasunori Enomoto; Hideya Kawasaki; Takashi Uemura; Michio Kimura; Toshihide Iwashita (2023). Number of images and training parameters used to train the model. [Dataset]. http://doi.org/10.1371/journal.pone.0285996.t001
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOShttp://plos.org/
    Authors
    Yuki Kurita; Shiori Meguro; Naoko Tsuyama; Isao Kosugi; Yasunori Enomoto; Hideya Kawasaki; Takashi Uemura; Michio Kimura; Toshihide Iwashita
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Number of images and training parameters used to train the model.

  15. Comparing the mean age between skin cancer lesions.

    • plos.figshare.com
    xls
    Updated Jun 21, 2023
    Cite
    Amin Tajerian; Mohsen Kazemian; Mohammad Tajerian; Ava Akhavan Malayeri (2023). Comparing the mean age between skin cancer lesions. [Dataset]. http://doi.org/10.1371/journal.pone.0284437.t001
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 21, 2023
    Dataset provided by
    PLOShttp://plos.org/
    Authors
    Amin Tajerian; Mohsen Kazemian; Mohammad Tajerian; Ava Akhavan Malayeri
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Comparing the mean age between skin cancer lesions.

  16. Boat Detection Model Dataset

    • universe.roboflow.com
    zip
    Updated Sep 27, 2024
    Cite
    CNNAerialView (2024). Boat Detection Model Dataset [Dataset]. https://universe.roboflow.com/cnnaerialview/boat-detection-model-zm5ac/model/1
    Explore at:
    zipAvailable download formats
    Dataset updated
    Sep 27, 2024
    Dataset authored and provided by
    CNNAerialView
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Boat Bounding Boxes
    Description

    This dataset was used to train a convolutional neural network for the detection of objects such as bathers, boats, and buoys.

    The images in this dataset were obtained from the study entitled "POSEIDON: A Data Augmentation Tool for Small Object Detection Datasets in Maritime Environments"; the images and their respective annotations can be downloaded via the link to that study.

    This dataset contains aerial-view images, so it is best suited to use cases such as running inference on drone footage.

  17. Medical Imaging Fetal Colorized New Dataset UMRICT

    • kaggle.com
    zip
    Updated Mar 20, 2025
    Cite
    Shuvo Kumar Basak-4004.o (2025). Medical Imaging Fetal Colorized New Dataset UMRICT [Dataset]. https://www.kaggle.com/datasets/shuvokumarbasak2030/medical-imaging-fetal-colorized-new-dataset-umrict/code
    Explore at:
    zip(4661872635 bytes)Available download formats
    Dataset updated
    Mar 20, 2025
    Authors
    Shuvo Kumar Basak-4004.o
    License

    MIT Licensehttps://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    **This dataset was used in this paper:** https://doi.org/10.1016/j.mex.2025.103563

    The Medical Imaging Fetal Colorized New Dataset UMRICT is a collection of enhanced fetal medical images, including ultrasound and MRI scans, that have been processed using advanced colorization and image enhancement techniques. This dataset aims to support research and development in the field of medical imaging, particularly in improving the visualization and analysis of fetal head structures for diagnostic and educational purposes.

    The dataset includes a diverse set of images from various imaging modalities, enhanced with state-of-the-art methods like color mapping, histogram equalization, edge detection, and more, making it an invaluable resource for medical professionals, researchers, and machine learning practitioners.

    Key Features:
    - Modalities: The dataset contains both ultrasound and MRI scans of fetal heads, offering multi-modal imaging data for comprehensive analysis.
    - Colorization: Grayscale images have been colorized using advanced color maps and techniques to enhance features like boundaries, regions of interest, and anatomical structures.
    - Image Enhancement: Several enhancement methods have been applied to improve image clarity, contrast, and overall quality: Adaptive Histogram Equalization, Contrast Stretching, Gamma Correction, Gaussian Blur, Edge Detection, Random Color Palettes, Look-Up Table (LUT) Color Mapping, Alpha Blending, and Heatmap Visualization.
    - Segmentation: Some images are processed using segmentation methods to highlight key anatomical structures, which can assist in visualizing and analyzing fetal development.
    - Interactive Tools: The dataset also supports tools for interactive segmentation and visualization, making it suitable for hands-on medical imaging applications.

    Purpose: The UMRICT dataset is designed to facilitate the study and development of machine learning and image processing models for:

    - Fetal Image Analysis: Enhancing the interpretation of fetal scans for better diagnosis and treatment planning.
    - Medical Education: Helping medical professionals and students to learn about fetal anatomy with clearer and more detailed visualizations.
    - AI/ML Model Training: Providing annotated and processed images to train algorithms for segmentation, classification, and feature extraction in medical imaging.

    Applications:
    - Clinical Diagnostics: Assist doctors in detecting anomalies, such as fetal head abnormalities, by enhancing key features of ultrasound and MRI scans.
    - Image Segmentation & Classification: Use the dataset for training deep learning models to automatically detect and classify fetal features.
    - Medical Image Enhancement: Improve the quality of medical images for better visualization and interpretation.
    - Research: Explore new techniques in image enhancement, segmentation, and the application of machine learning in the field of fetal imaging.

    Data Structure: The dataset is organized into different folders, each corresponding to a specific image enhancement method or technique used on the original images. These folders include:

    - Basic Color Map
    - Adaptive Histogram Equalization
    - Contrast Stretching
    - Gaussian Blur
    - Edge Detection
    - Random Color Palette
    - Gamma Correction
    - LUT Color Map
    - Alpha Blending
    - Heatmap Visualization
    - Interactive Segmentation

    Each folder contains images that have been processed with the respective technique, allowing users to explore the effects of different enhancement methods on fetal imaging.

    Dataset Format:
    - Image Format: PNG, JPEG, or TIFF (depending on your chosen format for output images)
    - Image Size: Varies based on input images, typically ranging from 256x256 to 1024x1024 pixels.
    - Annotations: Some images may include annotations such as bounding boxes or segmentation masks highlighting key anatomical structures.

    Source:
    https://www.kaggle.com/datasets/ankit8467/fetal-head-ultrasound-dataset-for-image-segment
    https://www.kaggle.com/datasets/vishwaskant786/2d-fetal-altrasound-images
    https://www.kaggle.com/datasets/vijayachinns/fetail-mri-brain-images-dataset
    https://www.kaggle.com/datasets/mohammedakheelsb/fetal-h...
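    Two of the per-pixel enhancement methods listed in this entry, contrast stretching and gamma correction, reduce to one-line intensity maps. A minimal numpy sketch, not the dataset's actual pipeline:

    ```python
    import numpy as np

    def contrast_stretch(img):
        """Linearly rescale intensities to the full 0-255 range."""
        lo, hi = img.min(), img.max()
        return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

    def gamma_correct(img, gamma=0.5):
        """Apply a power-law intensity map; gamma < 1 brightens dark regions."""
        return (255.0 * (img / 255.0) ** gamma).astype(np.uint8)

    # A low-contrast toy scan: values squeezed into [100, 150].
    img = np.linspace(100, 150, 16, dtype=np.uint8).reshape(4, 4)
    stretched = contrast_stretch(img)
    bright = gamma_correct(img)
    ```

    Methods like adaptive histogram equalization or LUT color mapping follow the same per-pixel (or per-tile) pattern, just with a learned or precomputed lookup table instead of a closed-form map.
    
    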

  18. Synthetic Medical Imaging Data Platforms Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 7, 2025
    Cite
    Growth Market Reports (2025). Synthetic Medical Imaging Data Platforms Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/synthetic-medical-imaging-data-platforms-market
    Explore at:
    pdf, pptx, csvAvailable download formats
    Dataset updated
    Oct 7, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Synthetic Medical Imaging Data Platforms Market Outlook

    According to our latest research, the synthetic medical imaging data platforms market size reached USD 482.3 million in 2024 globally, with a robust CAGR of 32.1% expected during the forecast period. By 2033, the market is projected to achieve a value of USD 4.73 billion. This remarkable growth is driven by the increasing adoption of artificial intelligence (AI) in healthcare, the urgent need for large, diverse, and annotated datasets for algorithm training, and the growing demand for privacy-compliant data solutions. As the industry continues to evolve, synthetic data platforms are becoming essential tools for advancing medical imaging research, improving diagnostic accuracy, and accelerating innovation across clinical and research settings.

    The growth of the synthetic medical imaging data platforms market is propelled by several critical factors, chief among them the exponential rise in AI-driven healthcare applications. AI and machine learning models require vast volumes of high-quality, annotated imaging data to achieve optimal performance. However, real-world medical imaging data is often limited due to privacy regulations, patient consent issues, and the inherent challenges of data sharing between institutions. Synthetic data platforms address these challenges by generating realistic, diverse, and fully anonymized datasets that can be used to train, validate, and benchmark AI algorithms without compromising patient privacy. This capability not only accelerates the development and deployment of AI-powered diagnostics and treatment planning tools but also enables healthcare providers and technology developers to overcome the bottleneck of data scarcity, thereby fueling market expansion.

    Another significant driver is the increasing complexity and heterogeneity of medical imaging modalities. Modern healthcare relies on a wide spectrum of imaging techniques such as CT, MRI, X-ray, ultrasound, and PET scans, each with unique data characteristics and diagnostic value. Synthetic medical imaging data platforms are designed to replicate these diverse modalities, supporting the development of robust AI models capable of handling real-world clinical variability. By enabling the simulation and augmentation of imaging datasets across multiple modalities, these platforms empower researchers and clinicians to explore new diagnostic pathways, refine treatment planning, and conduct rigorous validation studies. This versatility is particularly valuable in rare disease research and in cases where collecting sufficient real-world imaging data is impractical or impossible, thereby further accelerating market growth.

    The market is also benefiting from increasing investment in healthcare digitization and the growing emphasis on precision medicine. Governments, academic institutions, and private sector players are investing heavily in digital health infrastructure, including data management, analytics, and AI development. Synthetic medical imaging data platforms are emerging as a foundational technology in this landscape, enabling secure data sharing, collaborative research, and large-scale clinical trials without infringing on patient confidentiality. The integration of synthetic data solutions into electronic health records (EHRs), clinical decision support systems, and telemedicine platforms is expected to unlock new opportunities for personalized care, early disease detection, and improved patient outcomes, reinforcing the upward trajectory of the market.

    Regionally, the North American market dominates the global landscape, accounting for the largest share in 2024, followed by Europe and Asia Pacific. This leadership is attributed to the presence of advanced healthcare infrastructure, a strong ecosystem of AI startups, and proactive regulatory support for digital health innovation. Meanwhile, Asia Pacific is witnessing the fastest growth, driven by rapid healthcare modernization, expanding research capabilities, and increasing government initiatives to promote AI adoption in healthcare. Europe continues to make significant strides, particularly in academic and collaborative research projects focused on synthetic data. The Middle East & Africa and Latin America are also showing promising potential, albeit from a smaller base, as healthcare providers in these regions increasingly recognize the value of synthetic data in overcoming resource constraints and enhancing clinical capabilities.

  19. Image statistics for parasitised and uninfected categories (after...

    • plos.figshare.com
    xls
    Updated Jun 4, 2025
    Cite
    Outlwile Pako Mmileng; Albert Whata; Micheal Olusanya; Siyabonga Mhlongo (2025). Image statistics for parasitised and uninfected categories (after augmentation). [Dataset]. http://doi.org/10.1371/journal.pone.0313734.t003
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 4, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Outlwile Pako Mmileng; Albert Whata; Micheal Olusanya; Siyabonga Mhlongo
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Image statistics for parasitised and uninfected categories (after augmentation).

  20. MP count and percentage recovery obtained for five spiked MP images.

    • plos.figshare.com
    xls
    Updated Jun 17, 2023
    Cite
    Ho-min Park; Sanghyeon Park; Maria Krishna de Guzman; Ji Yeon Baek; Tanja Cirkovic Velickovic; Arnout Van Messem; Wesley De Neve (2023). MP count and percentage recovery obtained for five spiked MP images. [Dataset]. http://doi.org/10.1371/journal.pone.0269449.t005
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 17, 2023
    Dataset provided by
    PLOShttp://plos.org/
    Authors
    Ho-min Park; Sanghyeon Park; Maria Krishna de Guzman; Ji Yeon Baek; Tanja Cirkovic Velickovic; Arnout Van Messem; Wesley De Neve
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The MP quantity predicted by MP-Net was much closer to the ground truth compared to MP-VAT, MP-VAT 2.0, and C-VAT.
