License: CC0 1.0 Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
Explore the Salt and Pepper Noise Dataset, a comprehensive collection of clean and noisy images meticulously crafted for research and development purposes. This dataset comprises two distinct sets: one containing pristine, noise-free images, and the other laden with salt and pepper noise artificially introduced into the visuals. Dive into this dataset to analyze the impact of noise on image processing algorithms, assess denoising techniques, and enhance your understanding of image manipulation in machine learning and computer vision.
Description
This sound field image dataset contains clean-noisy pairs of complex-valued sound-field images generated by 2D acoustic simulations. The dataset was initially prepared for deep sound-field denoiser (https://github.com/nttcslab/deep-sound-field-denoiser), a DNN-based denoising method for optically measured sound fields. Since the data is a two-dimensional sound field based on the Helmholtz equation, one can use this dataset for any acoustic application. Please check our GitHub repository and paper for details.
Directory structure
The dataset contains three directories: training, validation, and evaluation. Each directory contains "soundsource#" sub-directories (# represents the number of sound sources used in the acoustic simulation). Each sub-directory has three h5 files for data (clean, white noise, and speckle noise) and three CSV files listing random parameter values used in the simulation.
/training
  /soundsource#
    constants.csv
    random_variable_ranges.csv
    random_variables.csv
    sf_true.h5
    sf_noise_white.h5
    sf_noise_speckle.h5
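For orientation, here is a minimal sketch of how the h5 files in one sub-directory might be inspected with h5py. The internal dataset keys are not documented in this summary, so the snippet only lists them; the sub-directory path is a hypothetical example.

```python
# Inspect one sub-directory of the dataset; see the GitHub repository for
# the authoritative layout. The path below is a hypothetical example.
import h5py

base = "training/soundsource1"  # '#' in 'soundsource#' is the source count

for name in ("sf_true.h5", "sf_noise_white.h5", "sf_noise_speckle.h5"):
    with h5py.File(f"{base}/{name}", "r") as f:
        # Print the dataset keys, shapes, and dtypes; complex-valued sound
        # fields may be stored as a complex dtype or as real/imaginary parts.
        for key in f.keys():
            print(name, key, f[key].shape, f[key].dtype)
```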
Condition of use
This dataset is available under the attached license file. Read the terms and conditions in NTTSoftwareLicenseAgreement.pdf carefully.
Citation
If you use this dataset, please cite the following paper.
K. Ishikawa, D. Takeuchi, N. Harada, and T. Moriya, "Deep sound-field denoiser: optically-measured sound-field denoising using deep neural network," arXiv:2304.14923 (2023).
License: MIT License (https://opensource.org/licenses/MIT); license information was derived automatically
This dataset contains high-quality original images and their corresponding synthetically generated noisy variants using 7 common noise types: Gaussian, Speckle, Poisson, Multiplicative, JPEG Compression, Quantization, and Salt & Pepper. It's specifically designed to support the development, training, and benchmarking of deep learning models for image denoising, restoration, and computer vision tasks.
The noisy images were generated using Python and scikit-image, OpenCV, and NumPy, simulating realistic noise patterns that occur in real-world scenarios such as low-light imaging, compression artifacts, sensor defects, and quantization errors.
Ideal for training CNNs like U-Net, DnCNN, RIDNet, or for multi-noise classification tasks.
Each subfolder under noises/ contains synthetically altered images of the same IDs found in original/.
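As an illustration only (the dataset's exact generation parameters are not given here), noisy variants of this kind can be produced with scikit-image's random_noise; the file paths and noise parameters below are assumptions, not the dataset's own pipeline.

```python
# Illustrative sketch of generating a few of the listed noise types with
# scikit-image; parameters and paths are assumptions, not the dataset's own.
import os
import numpy as np
from skimage import io, img_as_float
from skimage.util import random_noise

img = img_as_float(io.imread("original/0001.png"))  # hypothetical file name

variants = {
    "gaussian":    random_noise(img, mode="gaussian", var=0.01),
    "speckle":     random_noise(img, mode="speckle", var=0.05),
    "poisson":     random_noise(img, mode="poisson"),
    "salt_pepper": random_noise(img, mode="s&p", amount=0.05),
}

for name, noisy in variants.items():
    os.makedirs(f"noises/{name}", exist_ok=True)
    io.imsave(f"noises/{name}/0001.png", (noisy * 255).astype(np.uint8))
```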
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/); license information was derived automatically
Data source: 10xEngineers. Denoising Dataset - Multiple ISO Levels. Kaggle.com. https://www.kaggle.com/datasets/tenxengineers/denoising-dataset-multiple-iso-levels
This project attempts to use YOLOv5s to create an image classification model that detects whether or not a picture contains noise ("Noise" or "No Noise").
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/); license information was derived automatically
## Overview
Image Noise 3 is a dataset for classification tasks - it contains Noise Clean annotations for 600 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: MIT License (https://opensource.org/licenses/MIT); license information was derived automatically
The Smartphone Image Denoising Dataset contains 160 pairs of noisy and ground-truth images captured from multiple smartphones (Google Pixel, iPhone 7, Samsung Galaxy S6 Edge, Nexus 6, and LG G4) under diverse lighting conditions. It is widely used in computational photography, image denoising, and AI research.
License: CC0 1.0 Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
This dataset consists of blurred, noisy and defocused images.
The dataset consists of blurred images captured using mobile phones in real-world scenarios. Images were captured under a wide variety of lighting and weather conditions, both indoors and outdoors. This dataset can be used for image de-noising, deblurring, and noise-removal algorithms, and can also serve as a robust test set for denoising algorithms.
The images in this dataset are exclusively owned by Data Cluster Labs and were not downloaded from the internet. To access a larger portion of the training dataset for research and commercial purposes, a license can be purchased. Contact us at sales@datacluster.ai or visit www.datacluster.ai to learn more.
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/); license information was derived automatically
## Overview
Real Image With Noise is a dataset for classification tasks - it contains Tumor annotations for 275 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: MIT License (https://opensource.org/licenses/MIT); license information was derived automatically
Due to the scarcity of suitable online image datasets related to low-quality images, we created a new dataset specifically for this purpose. The dataset can be used to develop or train models aimed at improving image quality, or serve as a benchmark for evaluating the performance of computer vision on low-quality images. The image processing code for this dataset is available at https://github.com/pochih-code/Low-quality-image-dataset
The low-quality image dataset is based on the MS COCO 2017 validation images, with images processed into four categories: lossy compression, image intensity, image noise, and image blur. In total, the dataset comprises 100,000 processed images, and the processed images were reviewed by humans to ensure they remain valid in the real world.
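The four categories map naturally onto simple OpenCV/NumPy operations. The sketch below is a hedged illustration of that kind of pipeline, not the repository's actual code; the input path and all parameters are assumptions.

```python
# Hedged illustration of the four degradation categories using OpenCV/NumPy.
# Parameters and the input path are assumptions; see the linked repository
# for the dataset's actual processing code.
import cv2
import numpy as np

img = cv2.imread("val2017/000000000139.jpg")  # hypothetical COCO image path

# 1) Lossy compression: re-encode as a low-quality JPEG.
_, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 10])
compressed = cv2.imdecode(buf, cv2.IMREAD_COLOR)

# 2) Image intensity: simple gain/bias adjustment (here, darkening).
darker = cv2.convertScaleAbs(img, alpha=0.5, beta=0)

# 3) Image noise: additive Gaussian noise.
noise = np.random.normal(0, 15, img.shape).astype(np.float32)
noisy = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

# 4) Image blur: Gaussian blur.
blurred = cv2.GaussianBlur(img, (9, 9), 0)
```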
License: MIT License (https://opensource.org/licenses/MIT); license information was derived automatically
If you find this dataset useful, please star our repo and please cite the following works. Thank you.
Chaudhary, Shivesh, Sihoon Moon, and Hang Lu. "Fast, efficient, and accurate neuro-imaging denoising via supervised deep learning." Nature communications 13.1 (2022): 5165.
TL;DR - a collection of >5,000 paired noisy and clean images for building deep learning denoising algorithms. Check out the getting_started notebook to quickly start training.
This is one of the largest datasets (>5,000 images) of real low- and high-SNR images acquired using a confocal fluorescence microscope across three different cellular morphologies and labellings. Multiple noisy images corresponding to the same sample are also available, so the dataset can be used for building both supervised (CARE, NIDDL) and unsupervised (N2N, N2V) methods. With this dataset we hope to drive development of new algorithms for image denoising.
Fluorescence microscopy is an indispensable tool for biological discovery, but scientists are often only able to acquire noisy images because of the imaging constraints of their experiments. For example, whole-brain recording of neuron activity in C. elegans requires small exposure times and low laser powers to perform volumetric imaging at high speed without photobleaching the fluorophores. The result is noisy images.
Figure 1: Pan neuronal labelled head ganglion of C. elegans. Example shows noisy images and denoised images obtained by the baseline method.
Figure 2: Neurites of the mechanosensory neuron PVD in C. elegans. Example shows noisy images and denoised images obtained by the baseline method.
Data is available for 3 different kinds of cellular structures to test the generalizability of algorithms across different morphologies.
Data is available at multiple signal-to-noise (SNR) levels to test the limits of algorithms across different amounts of noise.
In dataset_20210226_denoising_ZIM504.h5, noisy and clean images were acquired at microscope laser power settings of 110 and 1000, respectively. In dataset_20210604_denoising_ZIM504.h5, noisy and clean images were acquired at laser power settings of 75 and 1000. For the ventral-nerve and PVD datasets, multiple noisy images for the same sample are present in the .h5 files.
20210710_denoising_PVD_array.h5 has two noisy images, with keys noisy_1 and noisy_2, acquired at laser power settings of 200 and 400. Clean images are present under the clean key and were acquired at a laser power setting of 1000. Both 3D image stacks (whole-brain) and 2D images (ventral-nerve and PVD-neurite) are present, so researchers can explore 2D, 2.5D, and 3D CNN denoising models.
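A minimal loading sketch based on the keys described above; shapes and dtypes should be verified against the actual files.

```python
# Load one noisy/clean pair from the PVD file using the keys described above.
# This is a minimal sketch; verify shapes and dtypes against the actual file.
import h5py

with h5py.File("20210710_denoising_PVD_array.h5", "r") as f:
    noisy = f["noisy_1"][...]  # acquired at laser power setting 200
    clean = f["clean"][...]    # acquired at laser power setting 1000
    print(noisy.shape, clean.shape)
```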
Please check out the simple baseline method NIDDL.
License: Attribution-ShareAlike 4.0 (CC BY-SA 4.0) (https://creativecommons.org/licenses/by-sa/4.0/); license information was derived automatically
Denoising Image Dataset is a collection of image data captured using two sensors (IMX335, SC2235) for the purpose of evaluating the performance of denoising algorithms. A total of 50 scenes were captured using AlphaISP (IMX335), which contain two types of noise: Bayer noise and 2DNR noise. Using BetaISP (SC2235), 62 scenes were captured, 48 of them with ground truths and the remaining 14 without ground truths but at multiple ISO levels. This dataset is suitable for researchers and developers in the fields of image processing, computer vision, and machine learning who are interested in developing and testing image denoising algorithms.
10xEngineers. Denoising Dataset - Multiple ISO Levels. Kaggle.com. https://www.kaggle.com/datasets/tenxengineers/denoising-dataset-multiple-iso-levels
License: MIT License (https://opensource.org/licenses/MIT); license information was derived automatically
Noise can significantly impact the effectiveness of video processing algorithms. This paper proposes a fast white-noise variance estimation method that is reliable even in images with large textured areas. The method first finds intensity-homogeneous blocks and then estimates the noise variance in these blocks, taking image structure into account. The paper proposes a new measure for determining homogeneous blocks and a new structure analyzer for rejecting blocks that contain structure. This analyzer is based on high-pass operators and special masks for corners to stabilize the homogeneity estimation. For typical video quality (PSNR of 20–40 dB), the proposed method outperforms other methods significantly, and the worst-case estimation error is 3 dB, which is suitable for real applications such as video broadcasts. The method performs well in both highly noisy and good-quality images, and also works well in images containing few uniform blocks.
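The abstract does not give the homogeneity measure or the corner masks, but the general block-based idea can be sketched as follows; this is a generic estimator under stated assumptions, not the paper's algorithm.

```python
# Generic block-based white-noise variance estimation in the spirit of the
# approach above; this is NOT the paper's algorithm, just the basic idea of
# estimating noise variance from the most homogeneous (flattest) blocks.
import numpy as np

def estimate_noise_variance(img, block=16, keep_fraction=0.1):
    """Estimate additive white-noise variance from a 2D grayscale image."""
    h, w = img.shape
    variances = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            variances.append(img[y:y + block, x:x + block].var())
    variances = np.sort(np.asarray(variances))
    # Keep only the flattest blocks so that texture does not inflate the
    # estimate; their variance is dominated by the noise itself.
    n_keep = max(1, int(len(variances) * keep_fraction))
    return float(variances[:n_keep].mean())
```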
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/); license information was derived automatically
A comparison of several IQA algorithms on the blur and noise image dataset.
License: https://images.cv/license
Labeled Noisy friarbird images suitable for training and evaluating computer vision and deep learning models.
License: custom license (https://researchdata.ntu.edu.sg/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.21979/N9/DMB2QK)
Although learning-based image restoration methods have made significant progress, they still struggle with limited generalization to real-world scenarios due to the substantial domain gap caused by training on synthetic data. Existing methods address this issue by improving data synthesis pipelines, estimating degradation kernels, employing deep internal learning, and performing domain adaptation and regularization. Previous domain adaptation methods have sought to bridge the domain gap by learning domain-invariant knowledge in either feature or pixel space. However, these techniques often struggle to extend to low-level vision tasks within a stable and compact framework. In this paper, we show that it is possible to perform domain adaptation via the noise space using diffusion models. In particular, by leveraging the unique property of how auxiliary conditional inputs influence the multi-step denoising process, we derive a meaningful diffusion loss that guides the restoration model in progressively aligning both restored synthetic and real-world outputs with a target clean distribution. We refer to this method as denoising as adaptation. To prevent shortcuts during joint training, we present crucial strategies such as channel-shuffling layer and residual-swapping contrastive learning in the diffusion model. They implicitly blur the boundaries between conditioned synthetic and real data and prevent the reliance of the model on easily distinguishable features. Experimental results on three classical image restoration tasks, namely denoising, deblurring, and deraining, demonstrate the effectiveness of the proposed method.
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/); license information was derived automatically
The zip file contains the dataset needed to replicate the experiments in the "Image Feature Learning with Genetic Programming" paper published at the PPSN 2020 conference.
This package also contains a version of Lenet5 to classify the MNIST digits.
The MNIST dataset has been corrupted with salt noise: a different proportion of pixels was made white at random (5%, 10%, 30%, 40%).
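A small sketch of this corruption is shown below; it is not the paper's original code, and the input image is a placeholder.

```python
# Sketch of the salt-noise corruption described above: a given proportion of
# pixels is set to white at random. This is not the paper's original code.
import numpy as np

def add_salt_noise(img, proportion, rng=None):
    """Return a copy of `img` with `proportion` of its pixels set to 255."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.copy()
    noisy[rng.random(img.shape) < proportion] = 255
    return noisy

# Corrupt a placeholder 28x28 "digit" at the four levels mentioned above.
digit = np.zeros((28, 28), dtype=np.uint8)
corrupted = {p: add_salt_noise(digit, p) for p in (0.05, 0.10, 0.30, 0.40)}
```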
RAID (Responses to Affine Image Distortions) is a perceptual image quality database built from human judgments. Unlike traditional databases focused on digital distortions, RAID investigates suprathreshold affine transformations (rotation, translation, scaling, and Gaussian noise), which are more representative of distortions encountered in natural viewing conditions.
Subjective perceptual scales were collected using the psychophysical method Maximum Likelihood Difference Scaling (MLDS). Over 40,000 image comparisons were performed by 210 human observers under controlled laboratory conditions.
License: Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) (https://creativecommons.org/licenses/by-nc-sa/4.0/); license information was derived automatically
This dataset was created by Priyadharshan M
Released under CC BY-NC-SA 4.0
The Urban100 dataset is a benchmark for image denoising, containing 100 images with varying levels of noise.