DUTS is a saliency detection dataset containing 10,553 training images and 5,019 test images. All training images are collected from the ImageNet DET training/val sets, while the test images are collected from the ImageNet DET test set and the SUN dataset. Both the training and test sets contain very challenging scenarios for saliency detection. Accurate pixel-level ground truths were manually annotated by 50 subjects.
https://spdx.org/licenses/
The authors introduce DUTS, a significant contribution to the field of saliency detection, which originally relied on unsupervised computational models with heuristic priors but has recently seen remarkable progress with deep neural networks (DNNs). DUTS is a large-scale dataset comprising 10,553 training images and 5,019 test images. The training images are sourced from the ImageNet DET training/val sets, while the test images are drawn from the ImageNet DET test set and the SUN dataset, encompassing challenging scenarios for salient object detection. What sets DUTS apart is its meticulous pixel-level ground truths, annotated by 50 subjects, and its explicit training/test evaluation protocol, making it the largest saliency detection benchmark to date. This enables fair and consistent comparisons in future research, with the training set serving as an ideal resource for DNN learning and the test set for evaluation.
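Since the card above positions the DUTS test set as the evaluation split, a minimal scoring sketch may help. It assumes saliency predictions and ground-truth masks stored as grayscale images, and uses MAE and the F-measure, two metrics commonly used for saliency benchmarks; all file paths below are placeholders rather than part of the dataset specification.

```python
# Minimal sketch: scoring one predicted saliency map against a DUTS
# ground-truth mask with MAE and F-measure. Paths are placeholders.
import numpy as np
from PIL import Image

def load_gray(path):
    """Load an image as a float array in [0, 1]."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0

pred = load_gray("predictions/ILSVRC2012_test_00000003.png")            # hypothetical path
gt = load_gray("DUTS-TE/DUTS-TE-Mask/ILSVRC2012_test_00000003.png")     # hypothetical path

mae = np.abs(pred - gt).mean()

# F-measure with the commonly used adaptive threshold (2 * mean saliency)
# and beta^2 = 0.3; these are conventions, not requirements of the dataset.
thresh = min(2.0 * pred.mean(), 1.0)
binary = pred >= thresh
tp = np.logical_and(binary, gt > 0.5).sum()
precision = tp / max(binary.sum(), 1)
recall = tp / max((gt > 0.5).sum(), 1)
beta2 = 0.3
f_measure = (1 + beta2) * precision * recall / max(beta2 * precision + recall, 1e-8)

print(f"MAE: {mae:.4f}  F-measure: {f_measure:.4f}")
```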
https://choosealicense.com/licenses/unknown/
Dataset Card for DUTS
This is a FiftyOne dataset with 15572 samples.
Installation
If you haven't already, install FiftyOne: `pip install -U fiftyone`
Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

dataset = fouh.load_from_hub("Voxel51/DUTS")

session = fo.launch_app(dataset)
```
Dataset Details
Dataset Description… See the full description on the dataset page: https://huggingface.co/datasets/Voxel51/DUTS.
chitradrishti/duts dataset hosted on Hugging Face and contributed by the HF Datasets community
This dataset was created by Danny Linn
This dataset was created by Chen Yitong
It contains the following files:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Dust Object Detection V1 is a dataset for object detection tasks - it contains Dust annotations for 572 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
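Several of the Roboflow-hosted datasets in this collection follow the same access pattern, so a minimal download sketch may be useful. It uses the official `roboflow` Python package; the API key, workspace and project slugs, version number, and export format below are placeholders, not values taken from any of the cards.

```python
# Minimal sketch of pulling a Roboflow-hosted dataset with the official
# `roboflow` package (pip install roboflow). All identifiers are placeholders.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("dust-object-detection")
dataset = project.version(1).download("coco")  # e.g. a COCO-format export

print(dataset.location)  # local folder containing images and annotations
```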
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Archive Mdsa Dust Train Set is a dataset for object detection tasks - it contains Aerial annotations for 1,000 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An example of a sand dust image for testing.
This data set contains the data from the Galileo dust detector system (GDDS) from start of mission through the end of mission. Included are the dust impact data, noise data, laboratory calibration data, and location and orientation of the spacecraft and instrument.
https://www.archivemarketresearch.com/privacy-policy
The Device Under Test (DUT) market is experiencing robust growth, driven by the increasing demand for advanced electronics across various sectors. The market's expansion is fueled by factors such as the proliferation of smartphones, the rise of the Internet of Things (IoT), and the increasing complexity of electronic devices, all of which require rigorous testing procedures and, in turn, sophisticated DUTs capable of handling diverse testing scenarios and ensuring the quality and reliability of end products. Considering typical market growth in related sectors, let's assume a conservative compound annual growth rate (CAGR) of 8% for the DUT market. If the market size in 2025 is estimated at $15 billion (a reasonable figure based on related markets), this translates to a projected market value of approximately $26.5 billion by 2033. This growth trajectory reflects the continuous advancements in semiconductor technology, the increasing integration of electronic systems, and the imperative for robust quality control measures within the manufacturing process.

Key players like Teradyne, Advantest, and Keysight Technologies are driving innovation in the DUT market through continuous product development and strategic partnerships. However, challenges such as rising research and development costs, the need for specialized expertise to operate advanced DUTs, and potential supply chain disruptions constrain market growth. Nonetheless, the overall outlook for the DUT market remains highly positive, driven by sustained technological advancements and an ever-increasing demand for reliable, high-performing electronic devices across industries such as automotive, aerospace, and healthcare.

The segmentation of this market is highly specialized, depending on the type of device under test. Future growth will also depend on factors such as the development and adoption of new testing standards and regulations.
A small-scale real-world dataset containing hazy/dusty industrial images and their clean ground truth counterparts. Designed for evaluating deep learning models for dust removal and image dehazing in industrial environments. Collected and fine-tuned by Moshtaghioun et al., 2025.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
As mentioned in the reference paper:
Dust storms are considered a severe meteorological disaster, especially in arid and semi-arid regions; they are characterized by dust-aerosol-filled air and strong winds across an extensive area. Every year, a large number of aerosols are released from dust storms into the atmosphere, having a deleterious impact on both the environment and human lives. Although increasing emphasis has been placed on dust storms over the last fifty years due to rapid global climate change, using measurements from the Moderate Resolution Imaging Spectroradiometer (MODIS), the possibility of using MODIS true-color composite images for this task has not been sufficiently discussed.
This data publication contains MODIS true-color dust images collected through an extensive visual inspection procedure to test the above hypothesis. The dataset includes a subset of the full collection of RGB images, each with visually recognizable dust storm incidents at high latitudes, temporally ranging from 2003 to 2019 over both land and ocean throughout the world. All RGB images are manually annotated for dust storm detection using the CVAT tool such that the dust-susceptible pixel area in an image is masked with (255, 255, 255) in RGB space (white) and the non-susceptible pixel area is masked with (0, 0, 0) in RGB space (black).
This dataset contains 160 satellite true-colour images and their corresponding ground-truth label bitmaps, organized in two folders: images and annotations. The associated notebook presents the image data visualization, statistical data augmentation, and a U-Net-based model that detects dust storms in a semantic segmentation fashion.
The dataset of true-colour dust images, consisting of airborne dust and weaker dust traces, was collected from the MODIS database through an extensive visual inspection procedure. The dataset can be used without additional permissions or fees.
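Given the (255, 255, 255) / (0, 0, 0) mask convention described above, a short sketch of turning an annotation bitmap into a binary label map may be helpful. The folder and file names below are placeholders based on the images/annotations layout mentioned in the description.

```python
# Minimal sketch: convert one RGB annotation bitmap (white = dust-susceptible,
# black = non-susceptible) into a binary mask and report dust coverage.
# File and folder names are placeholders.
import numpy as np
from PIL import Image

mask_rgb = np.asarray(Image.open("annotations/dust_example.png").convert("RGB"))

# Any pixel annotated as (255, 255, 255) is dust-susceptible.
binary_mask = np.all(mask_rgb == 255, axis=-1).astype(np.uint8)

coverage = binary_mask.mean()
print(f"Dust-susceptible pixels: {coverage:.1%} of the image")
```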
If you use these data in a publication, presentation, or other research product please use the following citation:
N. Bandara, “Ensemble deep learning for automated dust storm detection using satellite images,” in 2022 International Research Conference on Smart Computing and Systems Engineering (SCSE), vol. 5. IEEE, 2022, pp. 178–183.
For interested researchers, please note that the paper is openly accessible in the conference proceedings.
As described in the license summary:
```
You are free to:

Share — copy and redistribute the material in any medium or format
Adapt — remix, transform, and build upon the material for any purpose, even commercially.

This license is acceptable for Free Cultural Works.
The licensor cannot revoke these freedoms as long as you follow the license terms.

Under the following terms:

Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.

No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
```
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Columns And Ducts Detection is a dataset for object detection tasks - it contains Columns Ducts annotations for 1,132 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Transfer of SaPIbov1 and SaPIbov5 by dimeric Duts ФD1 and ФNM1.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview
The purpose of this dataset is to train a classifier to detect "dusty" versus "not dusty" patches within browse-resolution HiRISE observations of the Martian surface. Here, "dusty" refers to images in which the view of the surface has been obscured heavily by atmospheric dust.
The dataset contains two sets of 20,000 image patches each from EDR (full resolution) and RDR ("browse" resolution) non-map-projected ("nomap") HiRISE images, with balanced classes. The patches have been split into train (n = 10,000), validation (n = 5,000), and test (n = 5,000) sets such that no two patches from the same HiRISE observation appear in more than one of these subsets. There could be some noise in the labels, but a subset of the validation images have been manually vetted so that label noise rates can be estimated. More details on the dataset creation process are described below.
Generating Candidate Images and Patches
To begin constructing the dataset, the paper "The origin, evolution, and trajectory of large dust storms on Mars during Mars years 24–30 (1999–2011)," by Wang and Richardson (2015), was used to compile a set of time ranges for which global or regional dust storms were known to be occurring on Mars. All HiRISE RDR nomap browse images acquired within these time ranges were then inspected manually to determine sets of images that were (1) almost entirely obscured by dust and (2) almost entirely clear of dust. Then, 10,000 patches from the two subsets of images were extracted to form the "dusty" and "not dusty" classes. The extracted patches are 100-by-100 pixels, which roughly corresponds to the width of one CCD channel within the browse image (the width of the raw EDR data products that are stitched together to form a full RDR image). Some small amount of label noise is introduced in this process, since a patch from a mostly dusty image might happen to contain a clear view of the ground, and a patch from a mostly non-dusty image might contain some dust or regions on the surface that are featureless and appear like dusty patches. A set of "vetting labels" is included, which includes human annotations by the author for a subset of the validation set of patches. These labels can be used to estimate the apparent label noise in the dataset.
Corresponding to the RDR patch dataset, a set of patches are extracted from the same set of EDR images for the "dusty" and "not dusty" classes. EDRs are raw images from the instrument that have not been calibrated or stitched together. To provide some form of normalization, EDR patches are only extracted from the lower half of the EDRs, with the upper half being used to perform a basic calibration of the lower half. Basic calibration is done by subtracting the sample (image column) averages from the upper half to remove "striping," then computing the 0.1th and 99.9th percentiles of the remaining values in the upper half and stretching the image patch to 8-bit integer values [0, 255] within that range. The calibration is meant to implement a process that could be performed onboard the spacecraft as the data is being observed (hence, using the top half of the image acquired first to calibrate the lower half of the image which is acquired later). The full resolution EDRs, which are 1024 pixels wide, are resized down to 100-by-100 pixel patches after being extracted so that they roughly match the resolution of the patches from the RDR browse images.
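The calibration recipe above (column-mean destriping estimated from the upper half, followed by a 0.1th/99.9th percentile stretch to 8-bit values) can be summarized in a short sketch. The array names and function structure are assumptions for illustration; this is not the authors' exact code.

```python
# Minimal sketch of the EDR calibration described above: the upper half of a
# raw EDR channel is used to calibrate the lower half. `edr` is assumed to be
# a 2-D numpy array (rows x 1024 samples/columns).
import numpy as np

def calibrate_lower_half(edr: np.ndarray) -> np.ndarray:
    half = edr.shape[0] // 2
    upper = edr[:half].astype(np.float64)
    lower = edr[half:].astype(np.float64)

    # Remove "striping" by subtracting per-column (sample) averages
    # estimated from the upper half.
    column_means = upper.mean(axis=0)
    upper -= column_means
    lower -= column_means

    # Stretch to 8-bit range using the 0.1th/99.9th percentiles of the
    # remaining (destriped) values in the upper half.
    lo, hi = np.percentile(upper, [0.1, 99.9])
    stretched = np.clip((lower - lo) / (hi - lo), 0.0, 1.0) * 255.0
    return stretched.astype(np.uint8)

# Patches extracted from the calibrated lower half would then be resized from
# 1024 pixels wide down to 100-by-100 (e.g. with PIL), per the description above.
```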
Archive Contents
The compressed archive file contains two top-level directories with similar contents, "edr_nomap_full_resized" and "rdr_nomap_browse." The first directory contains the dataset constructed from EDR data and the second contains the dataset constructed from RDR data.
Within each directory, there are "dusty" and "not_dusty" directories containing the image patches from each class, "manifest.csv," and "vetting_labels.csv." The vetting labels file contains a list of manually labeled examples, along with the original labels to make it easier to compute label noise rates. The "manifest.csv" file contains a list of every example, its label, and whether it belongs to the train, validation, or test set.
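A small sketch of reading "manifest.csv" and grouping examples by split may be convenient; the exact column names are not stated above, so `example_id`, `label`, and `split` below are assumptions.

```python
# Minimal sketch: load manifest.csv and summarize examples per split.
# Column names (example_id, label, split) are assumed, not documented above.
import pandas as pd

manifest = pd.read_csv("rdr_nomap_browse/manifest.csv")

for split in ("train", "validation", "test"):
    subset = manifest[manifest["split"] == split]
    print(split, len(subset), subset["label"].value_counts().to_dict())
```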
An example ID encodes information about where the patch was sampled from the original HiRISE image; a small parsing sketch for both ID formats follows the two lists below. As an example from the RDR dataset, the ID "003100_PSP_004440_2125_r4805_c512" can be broken into several parts:
"003100" is a unique numerical ID
"PSP_004440_2125" is the HiRISE observation ID
"r4805_c512" means the patch's upper left corner starts at the 4805th row and 512th column of the original observation
For the EDR dataset, the ID "200000_PSP_004530_1030_RED7_1_r9153" is broken down as follows:
"200000" is a unique numerical ID
"PSP_004530_1030" is the HiRISE observation ID
"RED7" is the CCD ID
"1" is the CCD channel (either 0 or 1)
"r9153" means that the patch is extracted starting at the 9153rd row (since all columns of the 1024-pixel EDR are used, no column is specified; it is implicitly always 0)
Original Data
The original HiRISE EDR and RDR data is available via the Planetary Data System (PDS), hosted at https://hirise-pds.lpl.arizona.edu/PDS/
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Nucleotidase activity.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The credit report of Duts And Sons Co Ltd (South Sudan) contains unique and detailed export-import market intelligence, including its phone, email, and LinkedIn details, and the details of each import and export shipment, such as product, quantity, price, buyer and supplier names, country, and date of shipment.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
## Overview
Kerosene Dust Beta is a dataset for object detection tasks - it contains Tss Bubble annotations for 790 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [MIT license](https://opensource.org/licenses/MIT).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Utility Ducts is a dataset for instance segmentation tasks - it contains Jfudttust Waterpipes Seweragepipes annotations for 563 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).