The size of the Binary Masks market was valued at USD XXX million in 2024 and is projected to reach USD XXX million by 2033, with an expected CAGR of XX% during the forecast period.
The Binary Mask Reticle market is booming, projected to reach $3.64 billion by 2033 with a 5.6% CAGR. Driven by semiconductor & display advancements, this report analyzes market size, trends, key players (Photronics, Toppan Photomasks), and regional breakdowns. Discover key insights and future projections for this high-growth sector.
The Binary Masks market has emerged as a pivotal segment within the broader technology and manufacturing industries, primarily serving as a solution for various applications in photolithography and image processing. Binary Masks, which utilize a black-and-white pattern to control light transmission, are essential in
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
This dataset includes Geotiff raster files representing land fractions and derived binary masks indicating various land and marine categories of Australia, excluding Cocos, Christmas, Norfolk, Macquarie, Heard and McDonald Islands, as well as Antarctica. The land fraction raster provides the proportion of land within each cell, while the masks delineate areas based on specific land and marine criteria: any land, majority land, full land, any marine, majority marine, and full marine. These data are essential for spatial analysis and environmental studies focused on coastal and marine regions.

Lineage: The dataset begins with the creation of a land fraction raster, which is derived from the 2021 Australian (AUS) boundary (https://www.abs.gov.au/statistics/standards/australian-statistical-geography-standard-asgs-edition-3/jul2021-jun2026), the 2017 GA Smartline coastline data (https://pid.geoscience.gov.au/dataset/ga/104160), and the 2023 OpenStreetMap (OSM) coastline data (https://osmdata.openstreetmap.de/data/land-polygons.html). These sources provide comprehensive spatial data that are integrated to calculate the proportion of land for each pixel, with values ranging from 0 (entirely marine) to 1 (entirely land).
Once the land fraction raster is established, it serves as the basis for generating a series of binary masks. These masks categorize the data into specific land and marine classifications:
• Land_full: Pixels that are completely land (land fraction = 1).
• Land_majority: Pixels where land occupies more than half of the area (land fraction > 0.5).
• Land_any: Pixels with any proportion of land (land fraction > 0).
• Marine_full: Pixels that are entirely marine (land fraction = 0).
• Marine_majority: Pixels where marine areas predominate (land fraction ≤ 0.5).
• Marine_any: Pixels that are not entirely land (land fraction < 1).
The Geoscience Australia (GA) DEA Collection 3 grid was employed as the baseline for creating the spatial grid. This grid is anchored at a south-west origin point with coordinates (−6912000.0, −4416000.0) in the EPSG:3577 (GDA94 / Australian Albers) coordinate reference system. The dataset includes both 100-meter and 250-meter resolution data in GDA94 / Australian Albers (EPSG:3577), catering to the needs of different users and applications.
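The six masks follow directly from simple thresholds on the land fraction raster, so they can be reproduced (or regenerated at other thresholds) with a few lines of raster processing. The sketch below is illustrative only: the input file name is an assumption, and the published products remain the authoritative versions.

```python
import numpy as np
import rasterio

# Illustrative input name; use the published land fraction GeoTIFF here.
with rasterio.open("land_fraction_100m.tif") as src:
    frac = src.read(1)          # land fraction per pixel: 0 (fully marine) to 1 (fully land)
    profile = src.profile

# Thresholds exactly as defined in the mask descriptions above.
masks = {
    "Land_full":       frac == 1,
    "Land_majority":   frac > 0.5,
    "Land_any":        frac > 0,
    "Marine_full":     frac == 0,
    "Marine_majority": frac <= 0.5,
    "Marine_any":      frac < 1,
}

profile.update(dtype="uint8", count=1, nodata=None)
for name, mask in masks.items():
    with rasterio.open(f"{name}.tif", "w", **profile) as dst:
        dst.write(mask.astype("uint8"), 1)
```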
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Binary logistic regression analysis of the factors influencing the health prevention behavior of taking off a mask.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Binary logistic regression analysis of the factors influencing the health prevention behavior of hand hygiene.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Overview
This dataset comprises cloud masks for 513 1022-by-1022 pixel subscenes, at 20 m resolution, sampled randomly from the 2018 Level-1C Sentinel-2 archive. The design of this dataset follows from some observations about cloud masking: (i) performance over an entire product is highly correlated, so subscenes provide more value per pixel than full scenes; (ii) current cloud masking datasets often focus on specific regions, or hand-select the products used, which introduces a bias that is not representative of real-world data; (iii) cloud mask performance appears to be highly correlated with surface type and cloud structure, so testing should include analysis of failure modes in relation to these variables.
The data were annotated semi-automatically using the IRIS toolkit, which allows users to dynamically train a Random Forest (implemented using LightGBM), speeding up annotation by iteratively improving its predictions while preserving the annotator's ability to make final manual changes when needed. This hybrid approach allowed us to process many more masks than would have been possible manually, which we felt was vital in creating a dataset large enough to approximate the statistics of the whole Sentinel-2 archive.
In addition to the pixel-wise, 3 class (CLEAR, CLOUD, CLOUD_SHADOW) segmentation masks, we also provide users with binary classification "tags" for each subscene that can be used in testing to determine performance in specific circumstances. These include:
SURFACE TYPE: 11 categories
CLOUD TYPE: 7 categories
CLOUD HEIGHT: low, high
CLOUD THICKNESS: thin, thick
CLOUD EXTENT: isolated, extended
Wherever practical, cloud shadows were also annotated; however, this was sometimes not possible due to high-relief terrain or large ambiguities. In total, 424 subscenes were marked with shadows (where present), and 89 have shadows that were not annotatable due to very ambiguous shadow boundaries or terrain that casts significant shadows. If users wish to train an algorithm specifically for cloud shadow masking, we advise removing those 89 images for which shadow annotation was not possible; however, bear in mind that this will systematically reduce the difficulty of the shadow class compared to real-world use, as these contain the most difficult shadow examples.
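For users who only need a binary cloud mask (or who want to exclude the shadow class entirely), the 3-class masks reduce to boolean arrays with a single comparison. The sketch below is a minimal illustration; the integer class codes and the mask file name are assumptions, so check the README for the actual encoding and naming.

```python
import rasterio

# Assumed class encoding; confirm against the dataset README before use.
CLEAR, CLOUD, CLOUD_SHADOW = 0, 1, 2

with rasterio.open("subscene_0001_mask.tif") as src:   # hypothetical file name
    classes = src.read(1)

cloud_mask = classes == CLOUD                  # binary cloud mask
shadow_mask = classes == CLOUD_SHADOW          # binary shadow mask (absent for 89 subscenes)
clear_fraction = (classes == CLEAR).mean()     # per-subscene statistic, useful for stratified testing
```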
In addition to the 20m sampled subscenes and masks, we also provide users with shapefiles that define the boundary of the mask on the original Sentinel-2 scene. If users wish to retrieve the L1C bands at their original resolutions, they can use these to do so.
Please see the README for further details on the dataset structure and more.
Contributions & Acknowledgements
The data were collected, annotated, checked, formatted and published by Alistair Francis and John Mrziglod.
Support and advice were provided by Prof. Jan-Peter Muller and Dr. Panagiotis Sidiropoulos, for which we are grateful.
We would like to extend our thanks to Dr. Pierre-Philippe Mathieu and the rest of the team at ESA PhiLab, who provided the environment in which this project was conceived, and continued to give technical support throughout.
Finally, we thank the ESA Network of Resources for sponsoring this project by providing ICT resources.
The Earth Surface Mineral Dust Source Investigation (EMIT) instrument measures surface mineralogy, targeting the Earth's arid dust source regions. EMIT is installed on the International Space Station (ISS) and uses imaging spectroscopy to take mineralogical measurements of sunlit regions of interest between 52° N latitude and 52° S latitude. An interactive map showing the regions being investigated, current and forecasted data coverage, and additional data resources can be found on the VSWIR Imaging Spectroscopy Interface for Open Science (VISIONS) EMIT Open Data Portal.

The EMIT Level 2A Estimated Surface Reflectance and Uncertainty and Masks (EMITL2ARFL) Version 1 data product provides surface reflectance data in a spatially raw, non-orthocorrected format. Each EMITL2ARFL granule consists of three Network Common Data Format 4 (NetCDF4) files at a spatial resolution of 60 meters (m): Reflectance (EMIT_L2A_RFL), Reflectance Uncertainty (EMIT_L2A_RFLUNCERT), and Reflectance Mask (EMIT_L2A_MASK). The Reflectance file contains surface reflectance maps of 285 bands with a spectral range of 381-2493 nanometers (nm) at a spectral resolution of ~7.5 nm, held within a single science dataset (SDS) layer. The Reflectance Uncertainty file contains uncertainty estimates about the reflectance, captured as per-pixel, per-band posterior standard deviations. The Reflectance Mask file contains six binary flag bands and two data bands. The binary flag bands identify the presence of features such as clouds, water, and spacecraft, which indicate whether a pixel should be excluded from analysis. The data bands contain estimates of aerosol optical depth (AOD) and water vapor.

Each NetCDF4 file holds a location group containing a geometric lookup table (GLT), an orthorectified image that provides relative x and y reference locations from the raw scene to allow for projection of the data. Along with the GLT layers, the files also contain latitude, longitude, and elevation layers. The latitude and longitude coordinates are presented using the World Geodetic System (WGS84) ellipsoid. The elevation data were obtained from Shuttle Radar Topography Mission v3 (SRTM v3) data and resampled to EMIT's spatial resolution.

Each granule is approximately 75 kilometers (km) by 75 km, nominal at the equator, with some granules at the end of an orbit segment reaching 150 km in length.

Known Issues:
- Data acquisition gap: From September 13, 2022, through January 6, 2023, a power issue outside of EMIT caused a pause in operations. Due to this shutdown, no data were acquired during that timeframe.
- Possible reflectance discrepancies: Due to changes in computational architecture, EMITL2ARFL reflectance data produced after December 4, 2024, with Software Build 010621 and onward may show discrepancies in reflectance of up to 0.8% in extreme cases in some wavelengths compared to values in previously processed data. These discrepancies are generally lower than 0.8% and well within estimated uncertainties. Between earlier builds and Build 010621, neither output should be interpreted as more 'correct' than the other, as the differences are simply convergence differences from an optimization search. Most users are unlikely to observe the impact.
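Because the flag bands, data bands, and GLT live inside NetCDF4 groups, a quick way to orient yourself in a granule is to open the root and location groups separately and list their variables. This minimal sketch assumes xarray with a netCDF4 backend and a hypothetical file name; the authoritative group and variable names are documented in the EMITL2ARFL user guide and in each file's own metadata.

```python
import xarray as xr

mask_file = "EMIT_L2A_MASK_granule.nc"   # hypothetical file name

root = xr.open_dataset(mask_file)                         # flag bands and data bands
location = xr.open_dataset(mask_file, group="location")   # GLT, latitude, longitude, elevation

print(list(root.data_vars))       # six binary flag bands plus the AOD and water vapor data bands
print(list(location.data_vars))   # GLT x/y, lat, lon, and elevation layers (names per user guide)
```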
License: MIT (https://opensource.org/licenses/MIT)
Train:
2000 lesion images in JPEG format and 2000 corresponding superpixel masks in PNG format, with EXIF data stripped.
2000 binary mask images in PNG format.
Validation: 150 images + 150 masks
Test: 600 images + 600 masks
The ISIC 2017: Part 1 - Lesion Segmentation dataset is specifically designed for a semantic segmentation task focused on dermatology. Comprising 2750 images, each image in the dataset is associated with 1 single class, namely lesion. The primary objective of this dataset is to challenge participants to generate automated predictions of lesion segmentation boundaries from dermoscopic images. Each image is accompanied by expert manual tracings of lesion boundaries represented as binary masks, providing a ground truth for the segmentation task. This dataset serves as a valuable resource for advancing the development and evaluation of algorithms in the field of dermatological image analysis.
About ISIC 2017 Challenge The International Skin Imaging Collaboration (ISIC) has begun to aggregate a large-scale publicly accessible dataset of dermoscopy images. Currently, the dataset houses more than 20,000 images from leading clinical centers internationally, acquired from a variety of devices used at each center. The ISIC dataset was the foundation for the first public benchmark challenge on dermoscopic image analysis in 2016.
About Lesion Segmentation Participants were asked to submit automated predictions of lesion segmentations from dermoscopic images in the form of binary masks. Lesion segmentation training data included the original image, paired with the expert manual tracing of the lesion boundaries also in the form of a binary mask, where pixel values of 255 were considered inside the area of the lesion, and pixel values of 0 were outside.
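Given that ground-truth masks use 255 inside the lesion and 0 outside, a submitted prediction can be scored against the reference with a standard overlap metric. Below is a minimal Dice-coefficient sketch; the file names are placeholders, and the 127 threshold simply binarizes the 0/255 PNG values.

```python
import numpy as np
from PIL import Image

gt = np.array(Image.open("ISIC_0000000_segmentation.png")) > 127   # ground-truth mask (placeholder name)
pred = np.array(Image.open("prediction.png")) > 127                # hypothetical predicted mask

intersection = np.logical_and(gt, pred).sum()
dice = 2 * intersection / (gt.sum() + pred.sum() + 1e-8)
print(f"Dice coefficient: {dice:.3f}")
```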
ISIC 2017: Part 1 - Lesion Segmentation is a dataset for a semantic segmentation task. It is used in the medical industry.
The dataset consists of 2750 images with 2750 labeled objects belonging to 1 single class (lesion).
Images in the ISIC 2017: Part 1 - Lesion Segmentation dataset have pixel-level semantic segmentation annotations. All images are labeled (i.e. with annotations). There are 3 splits in the dataset: train (2000 images), test (600 images), and valid (150 images).
License: CC0 1.0 (https://spdx.org/licenses/CC0-1.0.html)
Study Objective: Facemask use is associated with reduced transmission of SARS-CoV-2. Most surveys assessing perceptions and practices of mask use miss the most vulnerable racial, ethnic, and socio-economic populations. These same populations have suffered disproportionate impacts from the pandemic. The purpose of this study was to assess beliefs, access, and practices of mask wearing across 15 urban emergency department (ED) populations.

Methods: This was a secondary analysis of a cross-sectional study of ED patients from December 2020 to March 2021 at 15 geographically diverse, safety-net EDs across the US. The primary outcome was frequency of mask use outside the home and around others. Other outcome measures included having enough masks and difficulty obtaining them.

Results: Of 2,575 patients approached, 2,301 (89%) agreed to participate; nine had missing data pertaining to the primary outcome, leaving 2,292 in the final analysis. A total of 79% of respondents reported wearing masks "all of the time" and 96% reported wearing masks over half the time. Subjects with PCPs were more likely to report wearing masks over half the time compared to those without PCPs (97% vs 92%). Individuals experiencing homelessness were less likely to wear a mask over half the time compared to those who were housed (81% vs 96%).

Conclusions: Study participants reported high rates of facemask use. Respondents who did not have PCPs and those who were homeless were less likely to report wearing a mask over half the time and more likely to report barriers to obtaining masks. The ED may serve a critical role in education regarding, and provision of, masks for vulnerable populations.

Methods

Study Design and Setting: We conducted this secondary analysis of a previously published study regarding ED patients' perceptions of COVID-19 vaccination.[13] The parent study was a prospective, cross-sectional survey of ED patients at 15 safety-net EDs in 14 US cities. The University of California Institutional Review Board approved this study. Verbal consent was obtained.

Data Processing: Participant ethnicity (Latinx/non-Latinx) and race were self-reported. We categorized those who self-identified as any race other than Latinx as 'reported race', non-Latinx (i.e., Black, non-Latinx and White, non-Latinx). If a patient identified as Latinx, they were placed in that category and not in that of any other race. If an individual identified as more than one non-Latinx race, they were categorized as multiracial. Individuals who reported that they were currently applying for health insurance, were unsure whether they were insured, or whose response to the question was missing (18 respondents) were categorized as uninsured in a binary variable, and a separate analysis was done based on the type of insurance reported. The survey submitted in our supplement (S1) is the version used at the lead site. Each of the remaining sites revised their survey to include wording applicable to their community (e.g., the site in Los Angeles changed Healthy San Francisco to Healthy Los Angeles), and these local community health plans were coded together. We identified individuals who reported English or Spanish as their primary language, and grouped those who reported Arabic, Bengali, Cantonese, Tagalog, or Other as "Other" primary language. With regard to gender, we categorized those who identified as genderqueer, nonbinary, trans man, or trans woman as "other".
Study Outcomes and Key Variables: Our primary outcome was subjects' response to the question, "Do you wear a mask when you are outside of your home when you are around other people?" with answer choices a) always, b) most of the time (more than 50%), c) sometimes, but less than half of the time (less than 50%), and d) I never wear a mask. Respondents were provided with these percentages to help quantify their responses. We stratified respondents into two groups: those who responded always or most of the time as "wears masks over half the time" and those who responded sometimes or never as "wears masks less than half the time." We sorted each of the 15 sites into four geographic regions within the United States. The 3 sites located in New Jersey, Massachusetts, and Pennsylvania were categorized as the Northeast region; the 3 sites in Michigan and Iowa as the Midwest; and the 3 sites in North Carolina, Louisiana, and Maryland as the South. The remaining 6 sites, located in California and Washington State, formed the West Coast group.
Data usage policy: https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
The “Deep Learning Consensus-based Annotation of Vestibular Schwannoma from Magnetic Resonance Imaging: An Annotated Multi-Center Routine Clinical Dataset” (Vestibular-Schwannoma-MC-RC-2) comprises 190 adult patients with unilateral vestibular schwannoma (VS), referred to King’s College Hospital, London, UK. Patients with neurofibromatosis type 2 (NF2) were excluded. Each patient has 1–8 longitudinal scans acquired from 2010 onwards, totaling 543 contrast-enhanced T1-weighted (T1CE) scans and 133 T2 scans across 621 time points. The dataset provides binary VS segmentations for 534 T1CE scans, along with demographic data (sex, ethnicity, age) and clinical decisions recorded at each time point. Segmentations were created using an iterative, consensus-based deep learning approach. This resource supports research on automated VS surveillance, tumour segmentation, longitudinal growth modeling, and clinical decision support.
The Vestibular-Schwannoma-MC-RC-2 dataset is a comprehensive longitudinal collection of Magnetic Resonance Imaging (MRI) scans focused on VS. It includes detailed binary segmentations for each visible tumour on T1CE, facilitating the development and validation of segmentation and progression pattern analysis of VS.
The dataset comprises MRI scans from 190 patients referred to King's College Hospital, London, UK, sourced from over 15 hospitals across Southeast England. All patients are over 18 years old and have been diagnosed with unilateral vestibular schwannoma. Patients with neurofibromatosis type 2 (NF2) have been excluded from this dataset, as have patients with other coexisting tumours.
This dataset is crucial for enhancing reproducibility in research on VS. By providing comprehensive and routine clinical imaging data from multiple hospitals, it allows researchers to validate their findings across different clinical settings and imaging protocols. This is essential for confirming the robustness of automated VS tools.
The dataset addresses significant gaps in existing VS datasets by including longitudinal data with up to eight time points per patient, compared to our previously published Vestibular-Schwannoma-MC-RC dataset with fewer time points. This longitudinal aspect enables the assessment of tumour progression and patterns, fulfilling a critical clinical need for continuous routine monitoring of vestibular schwannomas, despite the treatments patients undergo. Additionally, the clinical data provided in this dataset enable more comprehensive analyses by correlating imaging findings with patient demographics and clinical decisions.
While the Vestibular-Schwannoma-MC-RC dataset primarily consists of T2-weighted scans, the Vestibular-Schwannoma-MC-RC-2 dataset focuses on T1 contrast-enhanced scans. This distinction allows researchers to explore different imaging modalities and their impact on tumour detection and progression. Additionally, the dataset includes scans from a different region of the UK compared to the Vestibular-Schwannoma-MC-RC dataset, which enhances the diversity and generalizability of the vestibular schwannoma data. The Vestibular-Schwannoma-MC-RC-2 dataset does not overlap with our previously published datasets.
The following subsections provide information about how the data were selected, acquired and prepared for publication.
The dataset comprises longitudinal MRI scans from patients with unilateral sporadic VS, collected from over 15 medical sites across South East England, United Kingdom. A total of 226 patients were referred to the skull base clinic at King's College Hospital, London, where they underwent initial management between August 2008 and November 2012. Eligible participants were adult patients, aged 18 years or older, with a single unilateral VS. This included patients with prior surgical or radiation treatment but individuals with Neurofibromatosis type 2 (NF2) related schwannomatosis were excluded from the study.
All patients with MRI scans available for at least one time point were included in the study. Scans showing other tumours and those covering non-brain regions (e.g., neck) were excluded. Additionally, images with a slice thickness greater than 3.5 mm were excluded due to reduced sensitivity to small lesions and the impact of partial volume effects, which hinder accurate delineation and volumetric analysis of VS.
The data were collected across multiple scans performed during routine clinical surveillance. To ensure reproducibility and transparency, MRI acquisition parameters are provided separately and grouped into the following categories:
Demographics and clinical information:
The demographics and clinical data capture essential patient information and relevant standards for data collection. For each MRI time point, the following are recorded:
This structured clinical information allows longitudinal tracking of patient outcomes and management strategies.
The final curated dataset includes 190 patients, each with 1–8 longitudinal scans acquired from 2010 onwards, totaling 543 contrast-enhanced T1-weighted (T1CE) scans, 481 T1-weighted scans and 133 T2-weighted scans across 621 time points (mean 3.25 scans per patient, mean monitoring period 4.83 ± 3.08 years). All scan dates were uniformly shifted for privacy, with consistent offsets applied within each patient’s imaging series. Binary VS segmentations are provided for 534 T1CE scans; masks are not included for 9 post-operative scans with no visible residual tumour. Supporting data include demographics (sex, ethnicity, age) and clinical decisions documented at each time point.
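One common use of the binary T1CE segmentations is longitudinal volumetry. A minimal sketch follows, assuming the masks are distributed as NIfTI files (the file name below is hypothetical); the voxel volume is read from the image header.

```python
import numpy as np
import nibabel as nib

mask_img = nib.load("patient001_timepoint01_T1CE_mask.nii.gz")   # hypothetical file name
mask = mask_img.get_fdata() > 0                                   # binary VS segmentation

voxel_volume_mm3 = np.prod(mask_img.header.get_zooms()[:3])       # voxel size from the NIfTI header
tumour_volume_cm3 = mask.sum() * voxel_volume_mm3 / 1000.0
print(f"Tumour volume: {tumour_volume_cm3:.2f} cm^3")
```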
After converting the original DICOM files to NIfTI format (.nii.gz), the following steps were applied to deface the patient scans.
The defacing pipeline repository: https://github.com/cai4cai/defacing_pipeline.git
License: CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/)
This dataset consists of thermographic images of the feet of 30 pregnant women analyzed over time after receiving epidural anesthesia. The dataset is designed for segmentation tasks, with each image accompanied by one-hot encoded binary masks distinguishing feet from the background.
The dataset is divided into three main partitions:
Each of the 30 cases contains a different number of images, with some cases having more images than others.
The dataset is organized as follows (a loading sketch is shown after the tree):
[Partition Name]/
├── Images/
│ ├── sample_0.png
│ ├── sample_1.png
│ └── ...
├── Masks/
│ ├── Class_0/ # Background masks
│ │ ├── sample_0.png
│ │ ├── sample_1.png
│ │ └── ...
│ ├── Class_1/ # Feet masks
│ │ ├── sample_0.png
│ │ ├── sample_1.png
│ │ └── ...
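A minimal loading sketch for this layout is shown below, assuming a partition named Train and 0/255 PNG mask values; it pairs an image with its two class masks and collapses the one-hot masks into a single label map (0 = background, 1 = feet).

```python
import numpy as np
from PIL import Image

sample = "sample_0.png"
image = np.array(Image.open(f"Train/Images/{sample}"))
background = np.array(Image.open(f"Train/Masks/Class_0/{sample}")) > 0
feet = np.array(Image.open(f"Train/Masks/Class_1/{sample}")) > 0

label_map = np.zeros(feet.shape, dtype=np.uint8)   # 0 = background
label_map[feet] = 1                                 # 1 = feet
```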
This dataset is ideal for:
If you use this dataset in your research or projects, please cite it appropriately.
This dataset is released under the [CC BY-SA 4.0] license. Please review the license terms before using the dataset.
This dataset is a Kaggle-hosted mirror of the publicly available SICAPv2 dataset for prostate cancer histopathology segmentation. It contains H&E-stained whole-slide image patches and corresponding binary masks indicating cancerous regions.
SICAPv2_Kaggle/
│
├── Train/
│ ├── Images/ # Training images (.jpg)
│ └── Masks/ # Corresponding binary masks (.jpg)
│
└── Val/
├── Images/ # Validation images (.jpg)
└── Masks/ # Corresponding binary masks (.jpg)
Original SICAPv2 dataset and methodology described in:
López et al., "SICAPv2: A dataset for histopathological image analysis of prostate cancer from biopsy samples", Mendeley Data, 2020. ScienceDirect link | Mendeley Data
Note: This is a mirror for ease of use in Kaggle environments. Please cite the original paper if you use this dataset in research.
The efficient extraction of image data from curved tissue sheets embedded in volumetric imaging data remains a serious and unsolved problem in quantitative studies of embryogenesis. Here we present DeepProjection (DP), a trainable projection algorithm based on deep learning. This algorithm is trained on user-generated training data to locally classify the 3D stack content and rapidly and robustly predict binary masks containing the target content, e.g., tissue boundaries, while masking highly fluorescent out-of-plane artifacts. A projection of the masked 3D stack then yields background-free 2D images with undistorted fluorescence intensity values. The binary masks can further be applied to other fluorescent channels or to extract the local tissue curvature. DP is designed as a first processing step that can be followed, for example, by segmentation to track cell fate. We apply DP to follow the dynamic movements of 2D-tissue sheets during dorsal closure in Drosophila embryos and of the p...
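The core idea, independent of the DP network itself, is that a predicted binary mask selects the in-plane voxels of the stack before projection. The toy sketch below illustrates only that masking-then-projecting step with placeholder arrays; it is not the DeepProjection implementation.

```python
import numpy as np

stack = np.random.rand(32, 512, 512)     # placeholder 3D stack (z, y, x)
mask = np.zeros(stack.shape, dtype=bool)
mask[10:14] = True                       # placeholder binary mask of the tissue sheet

projection = np.where(mask, stack, 0.0).max(axis=0)   # background-free 2D maximum projection
```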
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
These data sets have been simulated to appear similar to natural real world data sets, but with known characteristics. In particular, the number of endmembers is known. The data are stored in Matlab format and further compressed as a zip file. The name of each data set has the format "name_subnwv_simnems.mat.zip", where
"name" is a shorthand name for the corresponding real world data set;
"nwv" is the number of wavelengths/bands in the simulated data set; and
"nems" is the number of endmembers used in the simulated data set.
In addition, the Mt. Isa data set, which does not cover the full rectangle, has associated with it a binary mask, "mtisa_mask.mat.zip", over which any analysis should be applied. Lineage: The simulations were produced using a methodology described in
Hao, Z., Berman, M., Guo, Y., Stone, G. and Johnstone, I. (2016), Semi-realistic simulations of natural hyperspectral scenes. IEEE Journal of Selected Topics in Applied Remote Sensing. To appear.
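After unzipping, the .mat files can be inspected and the Mt. Isa mask applied before any endmember analysis. The sketch below assumes the files are in a pre-v7.3 MATLAB format readable by SciPy; the internal variable names are not documented here, so they are discovered by inspection rather than assumed.

```python
from scipy.io import loadmat   # for MATLAB v7.3 files, use h5py instead

data = loadmat("mtisa_sub100_sim10.mat")   # hypothetical file name after unzipping
mask = loadmat("mtisa_mask.mat")

print(data.keys())   # discover the name of the simulated data cube
print(mask.keys())   # discover the name of the binary mask array
# Once the names are known, restrict any analysis to pixels where the mask equals 1.
```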
This dataset provides an updated digital elevation model (DEM) for the Atchafalaya and Terrebonne basins in coastal Louisiana, USA. The DEM is updated from the Pre-Delta-X DEM and extended to the full Delta-X study area. This DEM was developed from multiple data sources, including sonar data collected during Pre-Delta-X and Delta-X campaigns, bathymetric data from the Coastal Protection and Restoration Authority System-Wide Assessment and Monitoring System (CPRA SWAMP), and NOAA, and topography from the National Elevation Dataset and LiDAR from US Geological Survey (USGS). The provided data layers include the DEM, a binary water/land mask, data source flags, and eight layers with analysis weighting factors for each pixel. Elevation values are provided in meters with respect to the North American Vertical Datum of 1988 (NAVD88). The weighting factors indicate how each data source contributed to this multisource DEM. The data are provided in cloud-optimized GeoTIFF (CoG) format.
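A minimal sketch of combining the DEM with the binary water/land mask is given below; the file names and the 1 = land convention are assumptions, so consult the dataset documentation for the actual layer names and flag values.

```python
import numpy as np
import rasterio

with rasterio.open("DeltaX_DEM.tif") as dem_src, rasterio.open("DeltaX_water_land_mask.tif") as mask_src:
    dem = dem_src.read(1)                  # elevation in metres relative to NAVD88
    land = mask_src.read(1).astype(bool)   # assumed convention: 1 = land, 0 = water

land_elevation = np.where(land, dem, np.nan)   # water pixels excluded from terrain analysis
```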
License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
1) Image data (Matlab format):
- T1-weighted images (5 anatomical slices, 11 images/slice) for 210 patients.
- Manual annotation (segmentation) of each image, included as a binary mask.
- Timing of each image (i.e., inversion delay time).
- A label indicating whether the dataset was used for training ('trn') or testing ('tst').
2) Reference T1 mapping images (Matlab format): 1 map/slice, 5 slices/patient.
This dataset contains binary geotiff masks/classifications of six Arctic deltas for channels, lakes, land, and other small water bodies (see methods). Tiff files can be opened with any image viewer, but use of the georeferencing data attached to the imagery will require a GIS platform (e.g., QGIS). The dataset includes individually classified scene masks for Colville (2014), Kolyma (2014), Lena (2016), Mackenzie (2014), Yenisei (2013), and Yukon (2014). We also provide .mat files for each delta that include a 2D array of the mosaicked images, cropped to include only the area used in our analyses (see Piliouras and Rowland, 2020, Journal of Geophysical Research - Earth Surface), as well as the X (easting) and Y (northing) arrays for georeferencing, with coordinates in UTM.
License: CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/)
I have been using SpaceNet's Open Datasets for the past couple of months and have been absolutely blown away with the quality and added value that they are providing. SpaceNet hosts multiple datasets on aws. I thought it might be useful to upload the Dataset of the SpaceNet 7 Challenge in order to make it more accessible for everyone.
This dataset consists of Planet satellite imagery mosaics, which includes 24 images (one per month) covering ~100 unique geographies. The dataset will comprise over 40,000 square kilometers of imagery and exhaustive polygon labels of building footprints in the imagery, totaling over 10 million individual annotations.
Imagery consists of RGBA (red, green, blue, alpha) 8-bit electro-optical (EO) monthly mosaics from Planet’s Dove constellation at 4 meter resolution. For each of the Areas Of Interest (AOIs), the data cube extends for roughly two years, though it varies somewhat between AOIs. All images in a data cube are the same shape, though some data cubes have shape 1024 x 1024 pixels, while others have a shape of 1024 x 1023 pixels. Each image accordingly has an extent of roughly 18 square kilometers.
Images are provided in GeoTiff format, and there are two imagery data types:
images (training only) - Raw imagery, in EPSG:3857 projection.
images_masked (training + testing) - Unusable portions of the image (usually due to cloud cover) have been masked out, in EPSG:3857 projection.
For each monthly mosaic, the SpaceNet labeling team painstakingly outlined the footprint of each building. These GeoJSON vector labels permit tracking of individual building locations (i.e. addresses) over time, hence the moniker: SpaceNet 7 Urban Development Challenge. See Figure 4 for an example of the building footprint labels in one of the training cities.
While building masks are useful for visualization (and for training deep learning segmentation algorithms) the precise vector labels of the SpaceNet 7 dataset permit the assignment of a unique identifier (i.e. address) to each building. Matching these building addresses between time steps is a central theme of the SpaceNet 7 challenge. The figure below displays these building address changes.
The location and shape of known buildings are referred to as ‘ground truth’ in this document. Building footprint labels are distributed in multiple formats for the training set:
labels This folder contains the raw building footprint labels, along with unusable data mask (UDM) labels. UDMs are caused primarily by cloud cover. Building footprint labels will not overlap with UDM areas. In EPSG:4326 projection.
UDM_masks This folder contains the UDM labels rendered as binary masks, in EPSG:4326 projection.
labels_match This folder contains building footprints reprojected into the coordinate reference system (CRS) of the imagery (EPSG:3857 projection). Each building footprint is assigned a unique identifier (i.e. address) that remains consistent throughout the data cube.
labels_match_pix This folder contains building footprints (with identifiers) in pixel coordinates of the image.
CSV format. All building footprints of the whole training set are described in a single CSV file. It is possible to work only with this file; you may or may not find additional value in using the other options listed above. This file has the following format:
```
filename,id,geometry
global_monthly_2020_01_mosaic_L15-1281E-1035N_5125_4049_13,42,"POLYGON ((1015.1 621.05, 1003.7 628.8, 1001.5 625.7, 1012.9 617.9, 1015.1 621.05))"
global_monthly_2018_12_mosaic_L15-0369E-1244N_1479_3214_13,0,POLYGON EMPTY
global_monthly_2019_08_mosaic_L15-0697E-0874N_2789_4694_13,10,"POLYGON ((897.11 102.88, 897.11 104.09, 900.29 121.02, 897.11 121.02, 897.11 125.59, 891.85 125.59, 891.85 102.88, 897.11 102.88), (900.29 113.97, 900.29 108.92, 897.11 108.92, 897.11 113.97, 900.29 113.97))"
```
(The sample above contains 4 lines: a header row and three example footprint records.)
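Because labels_match_pix footprints (and the CSV sample above) are in pixel coordinates, they can be burned directly into a binary building mask without an affine transform. The sketch below assumes shapely, pandas, and rasterio are available and uses a hypothetical CSV file name.

```python
import pandas as pd
from shapely import wkt
from rasterio.features import rasterize

df = pd.read_csv("train_building_footprints.csv")   # hypothetical name for the single CSV
scene = "global_monthly_2020_01_mosaic_L15-1281E-1035N_5125_4049_13"

polygons = [
    wkt.loads(geom)
    for geom in df.loc[df["filename"] == scene, "geometry"]
    if geom != "POLYGON EMPTY"
]

# Footprints are already in pixel coordinates, so the default identity transform is used.
building_mask = rasterize(polygons, out_shape=(1024, 1024), fill=0, default_value=1, dtype="uint8")
```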