The ADE20K semantic segmentation dataset contains more than 20K scene-centric images exhaustively annotated with pixel-level object and object-part labels. There are 150 semantic categories in total, including stuff categories such as sky, road, and grass, and discrete objects such as person, car, and bed.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Paper2Fig100k dataset
A dataset with over 100k images of figures and their text captions from research papers. The figures show diagrams, methodologies, and architectures from papers on arXiv.org. We also provide a text caption for each figure, along with OCR detections and recognitions on the figures (bounding boxes and texts).
The dataset structure consists of a directory called "figures" and two JSON files (train and test) that contain the data for each figure. Each JSON object holds the metadata for one figure, including its caption and OCR annotations.
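A minimal sketch of loading the JSON annotations in Python is shown below; the file name is an assumption, and no specific field names are assumed, since the record schema should be checked against the dataset's own documentation:

```python
import json
from pathlib import Path

# Hypothetical paths; adjust to wherever Paper2Fig100k was extracted.
DATA_DIR = Path("paper2fig100k")
FIGURES_DIR = DATA_DIR / "figures"

# Each JSON file is assumed to hold a list of per-figure records.
with open(DATA_DIR / "paper2fig_train.json") as f:  # file name is an assumption
    train_records = json.load(f)

print(f"Loaded {len(train_records)} training figures")

# Inspect the keys of one record rather than assuming specific field names.
print(sorted(train_records[0].keys()))
```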
Take a look at the OCR-VQGAN GitHub repository, which uses the Paper2Fig100k dataset to train an image encoder for figures and diagrams that uses an OCR perceptual loss to render clear and readable text inside images.
The dataset is explained in more detail in the paper "OCR-VQGAN: Taming Text-within-Image Generation" (WACV 2023).
Paper abstract
Synthetic image generation has recently experienced significant improvements in domains such as natural image or art generation. However, the problem of figure and diagram generation remains unexplored. A challenging aspect of generating figures and diagrams is effectively rendering readable texts within the images. To alleviate this problem, we present OCR-VQGAN, an image encoder and decoder that leverages OCR pre-trained features to optimize a text perceptual loss, encouraging the architecture to preserve high-fidelity text and diagram structure. To explore our approach, we introduce the Paper2Fig100k dataset, with over 100k images of figures and texts from research papers. The figures show architecture diagrams and methodologies of articles available at arXiv.org from fields like artificial intelligence and computer vision. Figures usually include text and discrete objects, e.g., boxes in a diagram, with lines and arrows that connect them. We demonstrate the superiority of our method by conducting several experiments on the task of figure reconstruction. Additionally, we explore the qualitative and quantitative impact of weighting different perceptual metrics in the overall loss function.
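As a rough illustration of the idea of a text perceptual loss computed in the feature space of an OCR network (a generic sketch under stated assumptions, not the authors' implementation; `ocr_backbone` stands in for any frozen OCR-pretrained feature extractor):

```python
import torch
import torch.nn.functional as F

def ocr_perceptual_loss(reconstruction, target, ocr_backbone):
    """Generic feature-space perceptual loss.

    `ocr_backbone` is assumed to be a frozen network pretrained for OCR that
    maps an image batch to a feature map; the loss is the mean squared error
    between features of the reconstruction and of the original image.
    """
    with torch.no_grad():
        target_feats = ocr_backbone(target)
    recon_feats = ocr_backbone(reconstruction)  # gradients flow to the reconstruction
    return F.mse_loss(recon_feats, target_feats)

# Usage sketch (weights are made up): combine with a pixel reconstruction term.
# loss = F.l1_loss(recon, img) + 0.5 * ocr_perceptual_loss(recon, img, ocr_backbone)
```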
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
These data were gathered during the Books, Minds, and Bodies research project in 2015-16. The project was designed to investigate the therapeutic potential of shared reading, and involved running 2 reading groups over 2 consecutive terms and recording participants' discussions of the texts being read aloud together. These recordings were subsequently transcribed and used for analysis of emotional variance and linguistic similarity.
Consistent with the ethical approval granted for the study, word order in the transcripts has been randomized so as to preclude any personal data being disclosed. This was done by tokenizing the text of each transcript into grammatical and lexical units (i.e. punctuation marks and words), which were then shuffled using the "random" module from the Python standard library, which provides functions for randomly shuffling and sampling sequences of discrete objects. Nevertheless, grouping variables were preserved at the level of group (MT and HT terms) and session ID. As the calculation of values for emotional variance (on the dimensions of valence, arousal, and dominance) does not require syntax to be preserved, randomizing the data in this way should not affect the future calculation of word norm values.
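A minimal sketch of this kind of shuffling step, assuming a simple regex tokenizer (the project's actual tokenization may differ):

```python
import random
import re

def shuffle_transcript(text, seed=None):
    """Tokenize a transcript into words and punctuation, then shuffle word order.

    Bag-of-words statistics (e.g. valence/arousal/dominance word norms) are
    unchanged, while the original utterances can no longer be reconstructed.
    """
    tokens = re.findall(r"\w+|[^\w\s]", text)  # words and punctuation marks
    rng = random.Random(seed)
    rng.shuffle(tokens)                        # in-place Fisher-Yates shuffle
    return " ".join(tokens)

print(shuffle_transcript("I really enjoyed that passage, didn't you?", seed=42))
```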
The dataset also includes text/discussion similarity calculations, qualitative coding results, and participants' post-participation feedback data.
NB: this dataset replaces 'Books, Minds, and Bodies: raw transcript text plus VAD values' at https://ora.ox.ac.uk/objects/uuid:c370b75b-d37e-41be-89bb-cbb67a0c8614
Attribution-NoDerivs 4.0 (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/
License information was derived automatically
The data are for a discrete choice experiment that examined the impact of security labels, device functionality and price on consumer decision making. This dataset is for internet-connected televisions. The article will be published in PLOS ONE.
Attribution-NoDerivs 4.0 (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/
License information was derived automatically
The data are for a discrete choice experiment that examined the impact of security labels, device functionality and price on consumer decision making. This dataset is for internet-connected wearables. The article will be published in PLOS ONE.
https://choosealicense.com/licenses/bsd-3-clause/
Scene parsing is the task of segmenting and parsing an image into different image regions associated with semantic categories, such as sky, road, person, and bed. The MIT Scene Parsing Benchmark (SceneParse150) provides a standard training and evaluation platform for scene parsing algorithms. The data for this benchmark come from the ADE20K dataset, which contains more than 20K scene-centric images exhaustively annotated with objects and object parts. Specifically, the benchmark is divided into 20K images for training, 2K images for validation, and another batch of held-out images for testing. There are 150 semantic categories included for evaluation, covering stuff categories such as sky, road, and grass, and discrete objects such as person, car, and bed. Note that the distribution of objects occurring in the images is non-uniform, mimicking a more natural object occurrence in daily scenes.
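As a small illustration of working with such annotations, assuming per-pixel label masks with integer class IDs 1-150 and 0 for unlabeled pixels (the usual SceneParse150 convention, but worth verifying against the benchmark's development kit):

```python
import numpy as np

def class_pixel_counts(mask, num_classes=150):
    """Count pixels per semantic category in one annotation mask.

    `mask` is an integer array where 0 means unlabeled and 1..num_classes are
    the semantic categories.
    """
    counts = np.bincount(mask.ravel(), minlength=num_classes + 1)
    return counts[1:]  # drop the unlabeled bin

# Toy example: a 4x4 mask containing categories 1, 3 and 150.
toy_mask = np.array([[1, 1, 3, 3],
                     [1, 1, 3, 3],
                     [0, 0, 150, 150],
                     [0, 0, 150, 150]])
counts = class_pixel_counts(toy_mask)
print(counts[0], counts[2], counts[149])  # pixel counts for classes 1, 3 and 150
```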
Attribution-NoDerivs 4.0 (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/
License information was derived automatically
The data are for a discrete choice experiment that examined the impact of security labels, device functionality and price on consumer decision making. This dataset is for internet-connected thermostats. The article will be published in PLOS ONE.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset of 418 primary school teachers' preferences on ICT-based teaching characteristics, analyzed using Discrete Choice Models, specifically McFadden's conditional logit model. The data includes variables such as subject area, grade level, and interactivity of digital resources. Each multivariate response is represented by three successive rows.
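For readers unfamiliar with the model, below is a from-scratch sketch of the McFadden conditional logit likelihood on synthetic data with three alternatives per choice set, mirroring the three-rows-per-response layout; it is purely illustrative and not tied to the actual variable coding in this dataset:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic long-format data: each respondent's choice set has three
# alternatives (three rows), and `chosen` marks the selected alternative.
rng = np.random.default_rng(0)
n_sets, n_alts, n_feats = 418, 3, 4
X = rng.normal(size=(n_sets, n_alts, n_feats))       # alternative attributes
true_beta = np.array([1.0, -0.5, 0.8, 0.0])
utilities = X @ true_beta + rng.gumbel(size=(n_sets, n_alts))
chosen = utilities.argmax(axis=1)                     # simulated choices

def neg_log_likelihood(beta):
    v = X @ beta                                      # systematic utilities
    v -= v.max(axis=1, keepdims=True)                 # numerical stability
    log_probs = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(n_sets), chosen].sum()

result = minimize(neg_log_likelihood, np.zeros(n_feats), method="BFGS")
print("Estimated coefficients:", result.x.round(2))
```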
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This submission contains a dataset used in the paper
"Ajinkya Kadu, Felix Lucka, and K. Joost Batenburg. "Single-shot Tomography of Discrete Dynamic Objects." arXiv preprint arXiv:2311.05269 (2023)."
The data collection was acquired using a highly flexible, programmable and custom-built X-ray CT scanner, the FleX-ray scanner, developed by TESCAN-XRE NV and located in the FleX-ray Lab at the Centrum Wiskunde & Informatica (CWI) in Amsterdam, the Netherlands. It consists of a cone-beam microfocus X-ray point source (limited to 90 kV and 90 W) that projects polychromatic X-rays onto a 14-bit CMOS (complementary metal-oxide semiconductor) flat panel detector with a CsI(Tl) scintillator (Dexella 1512NDT). To create a 2D dataset, a fan-beam geometry was mimicked by reading out only the central row of the detector, which results in 956 detector pixels with an effective length of 149.6 μm each. Between the source and detector there is a rotation stage, upon which the sample was mounted. The sample we imaged was a rubber dog toy in the shape of a bone. The X-ray tube voltage was 90 kV and a copper filter was used to block the low-energy part of the spectrum to limit beam-hardening artifacts. The source-to-detector distance was 487.9 mm, while the source-to-origin distance of the sample was 374.5 mm in a fan-beam geometry. We acquired 673 z-slices with 0.25 mm spacing between slices. Further information about the technical details of X-ray CT can be found in the above paper and in
Maximilian B. Kiss, Sophia B. Coban, K. Joost Batenburg, Tristan van Leeuwen, and Felix Lucka “2DeteCT - A large 2D expandable, trainable, experimental Computed Tomography dataset for machine learning", Sci Data 10, 576 (2023) or arXiv:2306.05907 (2023)
The upload consists of two files, namely:
GrayBone90kV4Filter.zip: contains the raw measurement data.
GrayBone90kV4FilterPreprocessed.mat: contains preprocessed data to be used with the provided MATLAB script for pseudo-dynamic tomography. It also contains a reference reconstruction obtained via the Filtered Back Projection (FBP) algorithm.
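A minimal sketch for inspecting the preprocessed file from Python, as an alternative to the provided MATLAB workflow (the variable names stored in the .mat file are not documented here, so the sketch lists them rather than assuming any):

```python
from scipy.io import loadmat

# Load the preprocessed data and list the variables it contains.
data = loadmat("GrayBone90kV4FilterPreprocessed.mat")

for key, value in data.items():
    if not key.startswith("__"):  # skip MATLAB header entries
        print(key, getattr(value, "shape", type(value)))
```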
In the GitHub repository https://github.com/ajinkyakadu/DynamicXRayCT, we provide the scripts to read and process the raw data. The repository also contains all the scripts needed to reconstruct the dynamic solution using advanced algorithms. Furthermore, the raw data formats are described in great detail in the Kiss et al. 2023 paper referenced above.
Attribution-NoDerivs 4.0 (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/
License information was derived automatically
The data are for a discrete choice experiment that examined the impact of security labels, device functionality and price on consumer decision making. This dataset is for internet-connected security cameras. The article will be published in PLOS ONE.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Childhood onset uveitis comprises a group of rare inflammatory disorders characterized by clinical heterogeneity, chronicity, and uncertainties around long term outcomes. Standardized, detailed datasets with harmonized clinical definitions and terminology are needed to enable the clinical research necessary to stratify disease phenotype and interrogate the putative determinants of health outcomes. We aimed to develop a core routine clinical collection dataset for clinicians managing children with uveitis, suitable for multicenter and national clinical and experimental research initiatives.
Methods: Development of the dataset was undertaken in three phases: phase 1, a rapid review of published datasets used in clinical research studies; phase 2, a scoping review of disease or drug registries, national cohort studies and core outcome sets; and phase 3, a survey of members of a multicenter clinical network of specialists. Phases 1 and 2 provided candidates for a long list of variables for the dataset. In phase 3, members of the UK's national network of stakeholder clinicians who manage childhood uveitis (the Pediatric Ocular Inflammation Group) were invited to select from this long list their essential items for the core clinical dataset, to identify any omissions, and to support or revise the clinical definitions. Variables which met a threshold of at least 95% agreement were selected for inclusion in the core clinical dataset.
Results: The reviews identified 42 relevant studies and 9 disease or drug registries. In total, 138 discrete items were identified as candidates for the long list. Of the 41 specialists invited to take part in the survey, 31 responded (response rate 78%). The survey resulted in inclusion of 89 data items within the final core dataset: 81 items to be collected at the first visit, and 64 items at follow-up visits.
Discussion: We report development of a novel consensus core clinical dataset for the routine collection of clinical data for children diagnosed with non-infectious uveitis. The development of the dataset will provide a standardized approach to data capture able to support observational clinical studies embedded within routine clinical care and electronic patient record capture. It will be validated through a national prospective cohort study, the Uveitis in childhood prospective national cohort study (UNICORNS).
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This project is a collection of files that allows users to reproduce the model development and benchmarking in "Dawnn: single-cell differential abundance with neural networks" (Hall and Castellano, under review). Dawnn is a tool for detecting differential abundance in single-cell RNAseq datasets and is available as an R package. Please contact us if you are unable to reproduce any of the analysis in our paper. The files in this collection correspond to the benchmarking dataset based on simulated discrete clusters.
FILES: Data processing code
adapted_discrete_clusters_sim_milo_paper.R: Lightly adapted code from Dann et al. to simulate single-cell RNAseq datasets that form discrete clusters.
generate_test_data_discrete_clusters_sim_milo_paper.R: R code to assign simulated labels to datasets generated by adapted_discrete_clusters_sim_milo_paper.R. Seurat objects are saved as cells_sim_discerete_clusters_gex_seed_*.rds; simulated labels are saved as benchmark_dataset_sim_discrete_clusters.csv.
Resulting datasets
cells_sim_discerete_clusters_gex_seed_*.rds: Seurat objects generated by generate_test_data_discrete_clusters_sim_milo_paper.R.
benchmark_dataset_sim_discrete_clusters.csv: Cell labels generated by generate_test_data_discrete_clusters_sim_milo_paper.R.
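A minimal sketch for inspecting the label file from Python (its column names are not documented above, so none are assumed; the .rds Seurat objects are intended to be read in R):

```python
import pandas as pd

# Load the simulated ground-truth labels and inspect their structure.
labels = pd.read_csv("benchmark_dataset_sim_discrete_clusters.csv")
print(labels.shape)
print(labels.columns.tolist())
print(labels.head())
```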
Urban Habitat Connectivity Project (UHCP) Short description: A package of data containing potential habitat and fragmentation for seven species groups in the urban ACT. Each species group has two layer files. Connected habitat layers show potential core and corridor habitat for the species group, and connectivity/fragmentation between these habitat patches. Remnant patches layers contain areas which are predicted to be fragmented and inaccessible for the species group, but may be important for restoration activities. These layers are outputs of ecological connectivity modelling and have been developed using spatial data representing habitat and connectivity requirements specific to the species group. The following attributes are available in the data table for Connected Habitat layers:
Species Group* – indicates the species group of interest.
Patch ID – a unique identifier for each 'patch' of connected habitat; an ID given to group all habitat areas which are predicted to be connected to each other.
Habitat Type* – identifies whether the polygon meets core or corridor habitat requirements, or whether it is a remnant patch.
Habitat Number – a numeric value linked to Habitat Type to support statistics and symbology. Core habitat has a value of 0 and corridor habitat has a value of 1.
Patch Area (Ha)* – the area of the individual polygon in hectares.
Connected Habitat Area (Ha) – the total area of potential habitat in the connected patch, determined by summing the Patch Area for all polygons with the same Patch ID.
Shape area – the polygon's area, calculated by default in meters squared.
Shape length – the length of the line enclosing the polygon, calculated by default in meters.
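As an illustration of how the derived attribute relates to the others, here is a short geopandas sketch (the file path is hypothetical and the column names follow the descriptions above; the actual field names in the published layers may be abbreviated):

```python
import geopandas as gpd

# Read one of the Connected Habitat layers (path and layer format are assumptions).
habitat = gpd.read_file("connected_habitat_species_group.gpkg")

# Connected Habitat Area (Ha) = sum of Patch Area (Ha) over all polygons that
# share the same Patch ID, as described in the attribute table above.
habitat["Connected Habitat Area (Ha)"] = (
    habitat.groupby("Patch ID")["Patch Area (Ha)"].transform("sum")
)
print(habitat[["Patch ID", "Patch Area (Ha)", "Connected Habitat Area (Ha)"]].head())
```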
SHARING
Licenses/restrictions on use: Creative Commons By Attribution 4.0 (Australian Capital Territory)
How to cite this data: ACT Government, 2023. Potential Habitat and Fragmentation in Urban ACT dataset, version 3. Polygon layer developed by the Office of Nature Conservation, Environment, Planning and Sustainable Development Directorate, Canberra.
CONTACT
For accessibility issues or data enquiries please contact the Connecting Nature, Connecting People team: cncp@act.gov.au.
STRCT_ARC: Structures are discrete, physically existing, built features created to support treatment, recreation or other management activities. STRCT_ARC contains, but is not limited to, features such as pipelines, fences, and trails.
Simantha is a discrete event simulation package written in Python that is designed to model the behavior of discrete manufacturing systems. Specifically, it focuses on asynchronous production lines with finite buffers. It also provides functionality for modeling the degradation and maintenance of machines in these systems. Classes for five basic manufacturing objects are included: source, machine, buffer, sink, and maintainer. These objects can be defined by the user and configured in different ways to model various real-world manufacturing systems. The object classes are also designed to be extensible so that they can be used to model more complex processes. In addition to modeling the behavior of existing systems, Simantha is also intended for use in simulation-based optimization and planning applications. For instance, users may be interested in evaluating alternative maintenance policies for a particular system. Estimating the expected system performance under each candidate policy requires a large number of simulation replications when the system is subject to a high degree of stochasticity. Simantha therefore supports parallel simulation replications to make this procedure more efficient. GitHub repository: https://github.com/usnistgov/simantha
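To make the modelled setting concrete, here is a small from-scratch, event-driven sketch of an asynchronous two-machine line with a finite buffer; it deliberately does not use the Simantha API (see the GitHub repository for that), and all parameters are illustrative:

```python
import heapq
import random

def simulate_line(sim_time=1_000.0, buffer_capacity=5,
                  mean_cycle_m1=1.0, mean_cycle_m2=1.2, seed=0):
    """Event-driven sketch of a line: source -> M1 -> finite buffer -> M2 -> sink.

    M1 draws from an unlimited source and starts a part only when the buffer
    has space (blocking before service); M2 pulls parts from the buffer and
    delivers them to the sink. Cycle times are exponential for illustration.
    """
    rng = random.Random(seed)
    events = []                  # min-heap of (completion_time, machine_name)
    buffer_level = 0
    throughput = 0
    m1_busy = m2_busy = False

    def start_m1(now):
        nonlocal m1_busy
        m1_busy = True
        heapq.heappush(events, (now + rng.expovariate(1 / mean_cycle_m1), "M1"))

    def start_m2(now):
        nonlocal m2_busy, buffer_level
        buffer_level -= 1        # take one part out of the buffer
        m2_busy = True
        heapq.heappush(events, (now + rng.expovariate(1 / mean_cycle_m2), "M2"))

    start_m1(0.0)
    while events:
        now, machine = heapq.heappop(events)
        if now > sim_time:
            break
        if machine == "M1":      # M1 finished a part and puts it in the buffer
            m1_busy = False
            buffer_level += 1
            if not m2_busy:
                start_m2(now)
            if buffer_level < buffer_capacity:
                start_m1(now)    # otherwise M1 is blocked until space frees up
        else:                    # M2 finished a part and delivers it to the sink
            m2_busy = False
            throughput += 1
            if buffer_level > 0:
                start_m2(now)
            if not m1_busy and buffer_level < buffer_capacity:
                start_m1(now)    # unblock M1 once buffer space is available
    return throughput

print("Parts produced:", simulate_line())
```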
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Urban Habitat Connectivity Project (UHCP)
Short description: A package of data containing potential habitat and fragmentation for seven species groups in the urban ACT. Each species group has two layer files. Connected habitat layers show potential core and corridor habitat for the species group, and connectivity/fragmentation between these habitat patches. Remnant patches layers contain areas which are predicted to be fragmented and inaccessible for the species group, but may be important for restoration activities. These layers are outputs of ecological connectivity modelling and have been developed using spatial data representing habitat and connectivity requirements specific to the species group.
The following attributes are available in the data table for Connected Habitat layers:
Species Group* – indicates the species group of interest.
Patch ID – a unique identifier for each 'patch' of connected habitat; an ID given to group all habitat areas which are predicted to be connected to each other.
Habitat Type* – identifies whether the polygon meets core or corridor habitat requirements, or whether it is a remnant patch.
Habitat Number – a numeric value linked to Habitat Type to support statistics and symbology. Core habitat has a value of 0 and corridor habitat has a value of 1.
Patch Area (Ha)* – the area of the individual polygon in hectares.
Connected Habitat Area (Ha) – the total area of potential habitat in the connected patch, determined by summing the Patch Area for all polygons with the same Patch ID.
Shape area – the polygon's area, calculated by default in meters squared.
Shape length – the length of the line enclosing the polygon, calculated by default in meters.
* Is also available in the data table for Remnant Patches layers.
Spatial resolution: 1:10,000
Coordinate system: GDA2020 MGA zone 55
METHODS
Data collection / creation: Spatial layers for habitat and barriers were created and input into a habitat connectivity/fragmentation model specifically designed for the species group. The model was developed using metrics derived from expert elicitation. These metrics quantified essential habitat and connectivity requirements for the species group, for example the preferred spacing of trees, the maximum crossable width of a road, and the typical dispersal distance. The model identified habitat and barriers to connectivity based on the metrics which could be mapped. Habitat was delineated by patch size to determine core and corridor habitat, and to remove areas which are too small to be functional. The habitat type is visible in the attribute table of the data.
Connectivity between habitat patches is dependent on the species group's dispersal capacity and the availability of core habitat, suitable corridors and a path without barriers. To assess this, core habitat areas were buffered by the species group's dispersal distance. This identified how far an individual will move to find a new core habitat patch. Movement to this distance is dependent on a suitable path. All habitat was buffered by the distance the species can move outside habitat (through non-habitat areas). This identified how far an individual will move outside any habitat (core or corridor) before they require another habitat patch (i.e. how far they can travel between stepping stones). Connectivity is further complicated by impassable barriers. Barriers were used to slice up the dispersal buffers and identify 'dispersal patches': areas within which an individual can move.
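One plausible reading of this buffering step, sketched with geopandas (layer names, column expectations and the distances are illustrative assumptions; the project's actual model script is available from the project team on request):

```python
import geopandas as gpd

# Illustrative inputs: core habitat, all habitat, and barrier polygons.
core = gpd.read_file("core_habitat.gpkg")
habitat = gpd.read_file("all_habitat.gpkg")
barriers = gpd.read_file("barriers.gpkg")

dispersal_distance = 300        # metres; a species-group metric from expert elicitation
movement_outside_habitat = 50   # metres an individual will move through non-habitat

# Buffer core habitat by the dispersal distance, buffer all habitat by the
# distance the species can move outside habitat, and keep their intersection.
dispersal_area = core.buffer(dispersal_distance).unary_union
stepping_area = habitat.buffer(movement_outside_habitat).unary_union
reachable = dispersal_area.intersection(stepping_area)

# Subtract impassable barriers, splitting the result into discrete 'dispersal patches'.
patches = gpd.GeoSeries([reachable.difference(barriers.unary_union)], crs=core.crs)
patches = patches.explode(index_parts=False).reset_index(drop=True)
print(len(patches), "dispersal patches")
```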
Fragmentation occurs when a barrier is present, patches are too far from core habitat, or corridor habitat is too far apart. A unique ID was applied to each patch and represents connectivity/fragmentation. The patches were intersected with habitat to apply the new ID to the habitat areas. The final model outputs identify areas of potential core, corridor or remnant (inaccessible) habitat. Core and corridor habitat are viewable in the connected habitat dataset, whilst remnant patches are available separately. The data were simplified using the Douglas-Peucker algorithm with a tolerance of 0.5-2 m, a minimum size of 2-5 m² for retention, and holes filled in if less than 20 m². Small adjoining slivers of less than 20 m² were dissolved into neighbouring polygons to optimise drawing speeds. Please contact the project team for the model script or further details on the methodology.
NOTES ON USE
Quality: The habitat connectivity modelling used to produce the data was informed by work by the City of Melbourne (Kirk et al., 2018). The original methods were expanded on, with habitat and connectivity requirements (metrics) specific to the species group determined from expert elicitation, and further analysis to consider patch size for core or corridor patches. The expert elicitation process provided the best and most relevant quantitative description of habitat and barriers available (for a species group rather than a specific species). The input datasets were then tailored to the metrics for this project. Existing datasets were refined to be relevant and reflect the metrics identified through expert elicitation. New datasets were created where data were missing. All data were derived from existing authoritative sources and/or remotely sensed data. This data curation process ensured the input datasets, and resulting outputs, were relevant and fit for purpose.
Limitations: This data should be considered indicative only as there are limitations to the modelling process. It considers all habitat and barriers equally and as discrete objects (i.e. it applies a discrete boundary around a patch and does not account for gradients or flexible boundaries). The model predicts habitat and connectivity based on the data available. It does not assess whether a species is present or consider temporal variability. Some habitat requirements are not mapped (e.g. native vegetation, lack of predators) due to the lack of an accurate or complete dataset. Some of these requirements are critical to the success of the species group. These habitat requirements have been derived from expert elicitation and are available; they should be considered for any area of interest. The model assumes the input data are up to date and accurate. Many of the habitat and barrier datasets used as inputs to the models are in some way informed by remote sensing data. Remote sensing data have limitations, such as the potential for misclassification (e.g. bare ground and pavement could be confused). Additionally, remotely sensed data capture a point in time and will become outdated. Manual checks and improvements using supplementary data for specific sites have been completed to reduce as much error as possible.
Data refinement: Unmapped habitat and connectivity requirements should be considered when using the data. The full list of known habitat and connectivity requirements for each species group, including those considered by the model and those unaccounted for, is available by request. Other data may also be used to track changes post-LiDAR capture. For example, new development footprints may be used to remove non-habitat areas, at a faster rate than waiting for new LiDAR captures and re-running the model.
SHARING
Licenses/restrictions on use: Creative Commons By Attribution 4.0 (Australian Capital Territory)
How to cite this data: ACT Government, 2023. Potential Habitat and Fragmentation in Urban ACT dataset, version 3. Polygon layer developed by the Office of Nature Conservation, Environment, Planning and Sustainable Development Directorate, Canberra.
CONTACT
For accessibility issues or data enquiries please contact the Connecting Nature, Connecting People team: cncp@act.gov.au.