CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
The DIV2K dataset is a large-scale benchmark designed for image super-resolution research and development, widely used in NTIRE and PIRM challenges. It contains 1,000 high-resolution RGB images divided into training and validation sets. The training set provides 800 high-quality images along with downsampled versions at scaling factors ×2, ×3, and ×4. The validation set includes 100 images, with both low-resolution and high-resolution versions released at different challenge phases. This dataset is ideal for developing, training, and benchmarking super-resolution and image restoration algorithms.
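To make the ×2/×3/×4 structure concrete, here is a minimal sketch of how low-resolution counterparts can be produced from an HR image with bicubic downsampling; the paths, the output naming, and the choice of Pillow's bicubic filter are assumptions, not the official DIV2K generation pipeline.

```python
# Minimal sketch: generate bicubic LR counterparts of a DIV2K HR image.
# Paths and the "0001x2.png"-style output naming are assumptions, not the
# official DIV2K tooling.
from pathlib import Path
from PIL import Image

hr_path = Path("DIV2K_train_HR/0001.png")  # hypothetical location
hr = Image.open(hr_path)

for scale in (2, 3, 4):
    lr_size = (hr.width // scale, hr.height // scale)
    lr = hr.resize(lr_size, resample=Image.BICUBIC)
    out = Path(f"DIV2K_train_LR_bicubic/X{scale}") / f"{hr_path.stem}x{scale}.png"
    out.parent.mkdir(parents=True, exist_ok=True)
    lr.save(out)
```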
Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
License information was derived automatically
yangtao9009/DIV2K dataset hosted on Hugging Face and contributed by the HF Datasets community
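For the Hugging Face hosting mentioned above, a minimal loading sketch using the standard `datasets` library follows; the splits and column names provided by this community mirror are not documented here, so they are inspected after loading rather than assumed.

```python
# Minimal sketch: load the community DIV2K mirror from the Hugging Face Hub.
# The repository id comes from the entry above; the available splits and
# column names are unknown here, so print them to confirm.
from datasets import load_dataset

ds = load_dataset("yangtao9009/DIV2K")   # downloads and caches the data
print(ds)                                # shows the splits and features provided
first_split = list(ds.keys())[0]
print(ds[first_split][0].keys())         # inspect the fields of one example
```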
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
## Overview
DIV2K is a dataset for object detection tasks - it contains DIV2K annotations for 800 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
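A minimal download sketch with the `roboflow` Python package is shown below; the API key, workspace, project id, version number, and export format are hypothetical placeholders, and the exact snippet should be copied from the dataset's Roboflow page.

```python
# Minimal sketch: download a Roboflow-hosted dataset with the roboflow package.
# Workspace, project id, version number, and export format are hypothetical
# placeholders -- copy the exact values from the dataset's Roboflow page.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("div2k")  # hypothetical ids
dataset = project.version(1).download("coco")              # export format is an assumption
print(dataset.location)  # local folder containing the downloaded data
```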
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Single Image Super-Resolution (SR) aims to generate a high-resolution (HR) image I^SR from a low-resolution (LR) image I^LR such that it is similar to the original HR image I^HR. SR has seen a lot of interest recently because it is: (i) inherently an ill-posed inverse problem; and (ii) an important low-level vision problem with many applications.
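The entry above only names the quantities involved; the degradation model and training objective commonly assumed in the SR literature (an assumption here, not stated in this entry) can be written as:

```latex
% Common SR formulation (an assumption; not spelled out in the entry above):
% the LR image is modelled as a blurred, downsampled, noisy version of the HR
% image, and an SR model f_theta is fit to approximately invert that mapping.
\[
  I^{LR} = \left( I^{HR} \otimes k \right)\!\downarrow_{s} +\, n,
  \qquad
  \hat{\theta} = \arg\min_{\theta}\,
    \mathbb{E}\big[\, \| f_{\theta}(I^{LR}) - I^{HR} \|_{1} \,\big]
\]
```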
Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
License information was derived automatically
This dataset was created by Mustafa Al-Khafaji95
Released under Apache 2.0
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
A dataset combining DIV2K and Flickr2K. It contains:
- Training set: 3,450 pairs
- Validation set: 100 pairs
- LR images obtained with bicubic and unknown degradation methods at three scales: ×2, ×3, ×4
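As a sketch of how such HR/LR pairs are typically organised, the snippet below counts pairs in an assumed DIV2K-style directory layout; the folder names, the X2/X3/X4 subfolders, and the `<stem>x<scale>.png` naming are assumptions to adjust to the actual release.

```python
# Minimal sketch: count HR/LR pairs in an assumed DIV2K-style layout.
# Folder names (HR, LR_bicubic, LR_unknown, X2/X3/X4) and the "<stem>x<scale>.png"
# naming are assumptions carried over from DIV2K; adjust to the real layout.
from pathlib import Path

root = Path("DF2K")  # hypothetical root of the combined dataset
hr_images = sorted((root / "HR").glob("*.png"))

for method in ("LR_bicubic", "LR_unknown"):
    for scale in (2, 3, 4):
        lr_dir = root / method / f"X{scale}"
        paired = sum(
            1 for hr in hr_images
            if (lr_dir / f"{hr.stem}x{scale}.png").exists()
        )
        print(f"{method} X{scale}: {paired}/{len(hr_images)} pairs found")
```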
The dataset used in the paper for neural image compression.
The dataset used in the paper is a combination of the CLIC intra coding challenge 2021, the TECNICK dataset, and the DIV2K dataset.
Custom license: https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.7910/DVN/DKSPJF
This dataset contains image patches used to train deep networks for super-resolution reconstruction in the experiments reported in our paper: D. Kostrzewa, S. Piechaczek, K. Hrynczenko, P. Benecki, J. Nalepa, and M. Kawulok: "Super-resolution reconstruction using deep learning: should we go deeper?," in Proc. BDAS 2019, Communications in Computer and Information Science, Springer, 2019. The data are split into training and validation sets, containing 12,800 and 1,600 patches, respectively. Every patch is of size 224×224 pixels (high resolution), coupled with a low-resolution patch (112×112 pixels). The patches were extracted from the publicly available DIV2K dataset (https://data.vision.ee.ethz.ch/cvl/DIV2K).
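A minimal sketch of the kind of patch preparation described above (224×224 HR patches paired with 112×112 LR patches) follows; the crop stride, the bicubic downsampling, and the paths are assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch: cut 224x224 HR patches from a DIV2K image and pair each with
# a 112x112 LR patch obtained by 2x bicubic downsampling. Stride, paths, and
# the bicubic choice are assumptions, not the exact pipeline from the paper.
from PIL import Image

def extract_pairs(hr_image_path, patch=224, stride=224):
    hr = Image.open(hr_image_path)
    pairs = []
    for top in range(0, hr.height - patch + 1, stride):
        for left in range(0, hr.width - patch + 1, stride):
            hr_patch = hr.crop((left, top, left + patch, top + patch))
            lr_patch = hr_patch.resize((patch // 2, patch // 2), Image.BICUBIC)
            pairs.append((hr_patch, lr_patch))
    return pairs

pairs = extract_pairs("DIV2K_train_HR/0001.png")  # hypothetical path
print(f"{len(pairs)} HR/LR patch pairs of size 224/112")
```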
This dataset was created by yyyang
This dataset was created by Yash Bansal
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Example datasets of the Interactive Feature Localization in Deep Neural Networks (IFeaLiD) tool.
Cityscapes
These datasets are based on the image bielefeld_000000_007186_leftImg8bit.png of the Cityscapes dataset. The datasets can be explored online in IFeaLiD:
bielefeld_000000_007186_leftImg8bit.png.C1.npz.8.zip
bielefeld_000000_007186_leftImg8bit.png.C2.npz.8.zip
bielefeld_000000_007186_leftImg8bit.png.C3.npz.8.zip
COCO
These datasets are based on the image 000000015746.jpg of the COCO dataset. The datasets can be explored online in IFeaLiD:
000000015746.jpg.C1.npz.8.zip
000000015746.jpg.C2.npz.8.zip
000000015746.jpg.C3.npz.8.zip
DIV2K
These datasets are based on the image 0804.png of the DIV2K dataset. The datasets can be explored online in IFeaLiD:
DOTA
These datasets are based on the image P0034.png of the DOTA dataset. The datasets can be explored online in IFeaLiD:
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This dataset contains image patches used to train deep networks for super-resolution reconstruction, used for the experiments reported in our IGARSS 2019 paper: M. Kawulok, S. Piechaczek, K. Hrynczenko, P. Benecki, D. Kostrzewa, J. Nalepa: "On training deep networks for satellite image super-resolution," in Proc. IGARSS 2019. The data are split into training and validation sets as described in Table 1 in our paper. Low-resolution patches have been obtained from high-resolution ones following 8 different scenarios - all of them are included in the dataset. The dataset is split into three files due to technical reasons: (i) DIV2K high-resolution patches, (ii) DIV2K low-resolution patches (8 versions), and (iii) Sentinel-2 patches (low- and high-resolution).
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
The research aims to investigate the information capacity of phase-only computer-generated holograms (CGH) quantized to a given bit-depth level. The research data were therefore generated by running the quantized CGH process on 800 target images sourced from the DIV2K dataset (https://data.vision.ee.ethz.ch/cvl/DIV2K/). The data contain the target images' entropy and delentropy, the computer-generated holograms' bit depth and entropy, and the normalised mean squared error (NMSE) between each reconstruction and its target image. There are two files in total: one contains the results for target images set at far field (Fraunhofer region), and the other contains the results for target images set at near field (Fresnel region). Each row in the CSV file corresponds to the result of one run of a CGH algorithm.
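Two of the reported quantities, image entropy and NMSE, can be sketched as follows; the exact normalisation and log base used in the study are assumptions, and delentropy is not reproduced here.

```python
# Minimal sketch: Shannon entropy of a grayscale image and NMSE between a
# reconstruction and its target. The normalisation and log base are one common
# choice, assumed here rather than taken from the study above.
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the pixel-intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def nmse(reconstruction, target):
    """Mean squared error normalised by the target's mean squared intensity."""
    err = np.mean((reconstruction - target) ** 2)
    return float(err / np.mean(target ** 2))

rng = np.random.default_rng(0)
target = rng.integers(0, 256, size=(256, 256)).astype(float)  # stand-in target image
recon = target + rng.normal(0, 5, size=target.shape)          # stand-in reconstruction
print(image_entropy(target), nmse(recon, target))
```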
This dataset was created by 颀周
It contains the following files:
Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
License information was derived automatically
A collection of raw images from the DIV2K, Flickr2K, and OST datasets; see the dataset page linked below for details.
Citation
@inproceedings{agustsson2017ntire, title={Ntire 2017 challenge on single image super-resolution: Dataset and study}, author={Agustsson, Eirikur and Timofte, Radu}, booktitle={CVPRW}, year={2017} }
@InProceedings{Lim_2017_CVPR_Workshops, author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu}, title = {Enhanced Deep Residual… See the full description on the dataset page: https://huggingface.co/datasets/Iceclear/DF2K-OST.
This dataset was created by yyyang