17 datasets found
  1. ADE20K Dataset

    • paperswithcode.com
    Updated Jan 9, 2019
    Cite
    Bolei Zhou; Hang Zhao; Xavier Puig; Sanja Fidler; Adela Barriuso; Antonio Torralba (2019). ADE20K Dataset [Dataset]. https://paperswithcode.com/dataset/ade20k
    Dataset updated
    Jan 9, 2019
    Authors
    Bolei Zhou; Hang Zhao; Xavier Puig; Sanja Fidler; Adela Barriuso; Antonio Torralba
    Description

    The ADE20K semantic segmentation dataset contains more than 20K scene-centric images exhaustively annotated with pixel-level object and object-part labels. There are 150 semantic categories in total, including "stuff" classes such as sky, road, and grass, and discrete objects such as person, car, and bed.

  2. Paper2Fig100k dataset

    • zenodo.org
    application/gzip
    Updated Nov 8, 2022
    + more versions
    Cite
    Juan A. Rodríguez; David Vázquez; Issam Laradji; Marco Pedersoli; Pau Rodríguez (2022). Paper2Fig100k dataset [Dataset]. http://doi.org/10.5281/zenodo.7299423
    Available download formats: application/gzip
    Dataset updated
    Nov 8, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Juan A. Rodríguez; David Vázquez; Issam Laradji; Marco Pedersoli; Pau Rodríguez
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Paper2Fig100k dataset

    A dataset with over 100k images of figures and text captions from research papers. The figures display diagrams, methodologies, and architectures from research papers on arXiv.org. We also provide text captions for each figure, along with OCR detections and recognitions on the figures (bounding boxes and texts).

    The dataset structure consists of a directory called "figures" and two JSON files (train and test) that contain data for each figure. Each JSON object contains the following information about a figure:

    • figure_id: Figure identification based on the arXiv identifier.
    • captions: Text pairs extracted from the paper that relate to the figure, for instance the actual caption of the figure or references to the figure in the manuscript.
    • ocr_result: Result of performing OCR text recognition over the image. We provide a list of triplets (bounding box, confidence, text) present in the image.
    • aspect: Aspect ratio of the image (H/W).
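The fields above can be sketched as a minimal Python record. The field names follow the dataset description, but the values and the `readable_texts` helper are invented for illustration:

```python
import json

# Hypothetical example record; field names from the dataset description,
# values made up for illustration.
record = {
    "figure_id": "2210.11248v1-Figure2-1",
    "captions": ["Figure 2: Overview of the proposed architecture."],
    "ocr_result": [
        # (bounding box, confidence, text) triplets
        ([10, 20, 110, 40], 0.98, "Encoder"),
        ([150, 20, 250, 40], 0.95, "Decoder"),
    ],
    "aspect": 0.5625,  # H/W
}

def readable_texts(rec, min_conf=0.9):
    """Return OCR-detected texts above a confidence threshold."""
    return [text for _, conf, text in rec["ocr_result"] if conf >= min_conf]

print(readable_texts(record))  # ['Encoder', 'Decoder']
```

The same loop would run over each JSON object loaded from the train or test file.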

    Take a look at the OCR-VQGAN GitHub repository, which uses the Paper2Fig100k dataset to train an image encoder for figures and diagrams, using an OCR perceptual loss to render clear, readable text inside images.

    The dataset is explained in more detail in the paper "OCR-VQGAN: Taming Text-within-Image Generation" (WACV 2023).

    Paper abstract

    Synthetic image generation has recently experienced significant improvements in domains such as natural image or art generation. However, the problem of figure and diagram generation remains unexplored. A challenging aspect of generating figures and diagrams is effectively rendering readable texts within the images. To alleviate this problem, we present OCR-VQGAN, an image encoder and decoder that leverages OCR pre-trained features to optimize a text perceptual loss, encouraging the architecture to preserve high-fidelity text and diagram structure. To explore our approach, we introduce the Paper2Fig100k dataset, with over 100k images of figures and texts from research papers. The figures show architecture diagrams and methodologies of articles available at arXiv.org from fields like artificial intelligence and computer vision. Figures usually include text and discrete objects, e.g., boxes in a diagram, with lines and arrows that connect them. We demonstrate the superiority of our method by conducting several experiments on the task of figure reconstruction. Additionally, we explore the qualitative and quantitative impact of weighting different perceptual metrics in the overall loss function.

  3. Books, Minds, and Bodies dataset

    • ora.ox.ac.uk
    Updated Jan 1, 2022
    Cite
    Troscianko, E; Carney, J; Holman, E (2022). Books, Minds, and Bodies dataset [Dataset]. http://doi.org/10.5287/bodleian:gJZz9KDE0
    Dataset updated
    Jan 1, 2022
    Dataset provided by
    University of Oxford
    Authors
    Troscianko, E; Carney, J; Holman, E
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    These data were gathered during the Books, Minds, and Bodies research project in 2015-16. The project was designed to investigate the therapeutic potential of shared reading; it involved running two reading groups over two consecutive terms and recording participants' discussions of the texts being read aloud together. These recordings were subsequently transcribed and used for analysis of emotional variance and linguistic similarity.

    Consistent with the ethical approval granted for the study, word order in the transcripts has been randomized so as to preclude any personal data being disclosed. This was done by tokenizing the text of each transcript into grammatical and lexical units (i.e. punctuation marks and words), which were then shuffled using the "random" module in the Python standard library. Grouping variables were nevertheless preserved at the level of group (MT and HT terms) and session ID. As the calculation of values for emotional variance (on the dimensions of valence, arousal, and dominance) does not require syntax to be preserved, randomizing the data in this way should not affect the future calculation of word norm values.
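A minimal sketch of that randomization step, assuming a simple regex tokenizer (the project's exact tokenizer is not specified):

```python
import random
import re

def anonymize(transcript, seed=None):
    """Split text into words and punctuation marks, then shuffle the order.

    Word-level properties (e.g. valence/arousal/dominance norms) survive
    the shuffle, but syntax -- and hence any personal content -- does not.
    """
    tokens = re.findall(r"\w+|[^\w\s]", transcript)
    random.Random(seed).shuffle(tokens)
    return " ".join(tokens)

shuffled = anonymize("Reading together was, for many, a comfort.", seed=42)
```

Because shuffling preserves the multiset of tokens, per-word norm lookups give identical results on the shuffled and original transcripts.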

    The dataset also includes text/discussion similarity calculations, qualitative coding results, and participants' post-participation feedback data.

    NB: this dataset replaces 'Books, Minds, and Bodies: raw transcript text plus VAD values' at https://ora.ox.ac.uk/objects/uuid:c370b75b-d37e-41be-89bb-cbb67a0c8614

  4. Internet of Things Discrete Choice Experiment SmartTV data

    • rdr.ucl.ac.uk
    bin
    Updated Jan 14, 2020
    Cite
    Shane Johnson; John Blythe; Gabriel Wong; Manning Matthew (2020). Internet of Things Discrete Choice Experiment SmartTV data [Dataset]. http://doi.org/10.5522/04/11568729.v1
    Available download formats: bin
    Dataset updated
    Jan 14, 2020
    Dataset provided by
    University College London
    Authors
    Shane Johnson; John Blythe; Gabriel Wong; Manning Matthew
    License

    Attribution-NoDerivs 4.0 (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/
    License information was derived automatically

    Description

    The data are for a discrete choice experiment that examined the impact of security labels, device functionality and price on consumer decision making. This dataset is for internet-connected televisions. The article will be published in PLOS ONE.

  5. Internet of Things Discrete Choice Experiment Wearables data

    • rdr.ucl.ac.uk
    bin
    Updated Jan 14, 2020
    Cite
    Shane Johnson; John Blythe; Manning Matthew; Gabriel Wong (2020). Internet of Things Discrete Choice Experiment Wearables data [Dataset]. http://doi.org/10.5522/04/11568759.v1
    Available download formats: bin
    Dataset updated
    Jan 14, 2020
    Dataset provided by
    University College London
    Authors
    Shane Johnson; John Blythe; Manning Matthew; Gabriel Wong
    License

    Attribution-NoDerivs 4.0 (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/
    License information was derived automatically

    Description

    The data are for a discrete choice experiment that examined the impact of security labels, device functionality and price on consumer decision making. This dataset is for internet-connected wearables. The article will be published in PLOS ONE.

  6. ade_resize

    • huggingface.co
    Updated Nov 5, 2024
    + more versions
    Cite
    Xkull (2024). ade_resize [Dataset]. https://huggingface.co/datasets/Xkull/ade_resize
    Dataset updated
    Nov 5, 2024
    Authors
    Xkull
    License

    https://choosealicense.com/licenses/bsd-3-clause/

    Description

    Scene parsing is to segment and parse an image into different image regions associated with semantic categories, such as sky, road, person, and bed. The MIT Scene Parsing Benchmark (SceneParse150) provides a standard training and evaluation platform for scene parsing algorithms. The data for this benchmark comes from the ADE20K dataset, which contains more than 20K scene-centric images exhaustively annotated with objects and object parts. Specifically, the benchmark is divided into 20K images for training, 2K images for validation, and another batch of held-out images for testing. There are 150 semantic categories included for evaluation, covering "stuff" classes such as sky, road, and grass, and discrete objects such as person, car, and bed. Note that the distribution of objects occurring in the images is non-uniform, mimicking a more natural object occurrence in daily scenes.

  7. Internet of Things Discrete Choice Experiment - Thermostats data

    • rdr.ucl.ac.uk
    bin
    Updated Jan 14, 2020
    Cite
    Shane Johnson; John Blythe; Manning Matthew; Gabriel Wong (2020). Internet of Things Discrete Choice Experiment - Thermostats data [Dataset]. http://doi.org/10.5522/04/11568438.v1
    Available download formats: bin
    Dataset updated
    Jan 14, 2020
    Dataset provided by
    University College London
    Authors
    Shane Johnson; John Blythe; Manning Matthew; Gabriel Wong
    License

    Attribution-NoDerivs 4.0 (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/
    License information was derived automatically

    Description

    The data are for a discrete choice experiment that examined the impact of security labels, device functionality and price on consumer decision making. This dataset is for internet-connected thermostats. The article will be published in PLOS ONE.

  8. Revealing primary teachers' preferences for general characteristics of ICT-based teaching through Discrete Choice Models

    • figshare.com
    xlsx
    Updated Aug 14, 2024
    Cite
    Marilia Kostaki; Michalis Linardakis (2024). Revealing primary teachers' preferences for general characteristics of ICT-based teaching through Discrete Choice Models [Dataset]. http://doi.org/10.6084/m9.figshare.26550322.v1
    Available download formats: xlsx
    Dataset updated
    Aug 14, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Marilia Kostaki; Michalis Linardakis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset of 418 primary school teachers' preferences on ICT-based teaching characteristics, analyzed using Discrete Choice Models, specifically McFadden's conditional logit model. The data includes variables such as subject area, grade level, and interactivity of digital resources. Each multivariate response is represented by three successive rows.
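The core of McFadden's conditional logit model is a softmax choice probability over the alternatives in each respondent's choice set. A minimal sketch (the utility values here are illustrative, not estimates from this dataset):

```python
import math

def conditional_logit_probs(utilities):
    """Choice probabilities under McFadden's conditional logit:
    P(choose i) = exp(V_i) / sum_j exp(V_j) over one choice set."""
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Three alternatives per choice set, mirroring the three successive
# rows per multivariate response described above.
probs = conditional_logit_probs([1.2, 0.4, -0.3])
```

In estimation, each V_i would be a linear combination of the alternative's attributes (subject area, grade level, interactivity) with coefficients fitted by maximum likelihood.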

  9. Data from: Slice-by-Slice X-ray Tomography dataset of Dog Toy

    • data.niaid.nih.gov
    • repository.uantwerpen.be
    Updated Mar 12, 2024
    + more versions
    Cite
    Kadu, Ajinkya (2024). Slice-by-Slice X-ray Tomography dataset of Dog Toy [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10808363
    Dataset updated
    Mar 12, 2024
    Dataset provided by
    Lucka, Felix
    Batenburg, Kees Joost
    Kadu, Ajinkya
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This submission contains a dataset used in the paper

    "Ajinkya Kadu, Felix Lucka, and K. Joost Batenburg. "Single-shot Tomography of Discrete Dynamic Objects." arXiv preprint arXiv:2311.05269 (2023)."

    The data collection has been acquired using a highly flexible, programmable and custom-built X-ray CT scanner, the FleX-ray scanner, developed by TESCAN-XRE NV and located in the FleX-ray Lab at the Centrum Wiskunde & Informatica (CWI) in Amsterdam, Netherlands. It consists of a cone-beam microfocus X-ray point source (limited to 90 kV and 90 W) that projects polychromatic X-rays onto a 14-bit CMOS (complementary metal-oxide semiconductor) flat panel detector with a CsI(Tl) scintillator (Dexella 1512NDT). To create a 2D dataset, a fan-beam geometry was mimicked by reading out only the central row of the detector, which results in 956 detector pixels with an effective length of 149.6 μm each. Between source and detector there is a rotation stage, upon which the sample was mounted. The sample we imaged was a dog toy in the shape of a bone, made of rubber. The X-ray tube voltage was 90 kV, and a copper filter was used to block the low-energy part of the spectrum to limit beam-hardening artifacts. The source-to-detector distance was 487.9 mm, while the source-to-origin distance of the sample was 374.5 mm in a fan-beam geometry. We acquired 673 z-slices with 0.25 mm distance between slices. Further information about the technical details of the X-ray CT setup can be found in the above paper and in

    Maximilian B. Kiss, Sophia B. Coban, K. Joost Batenburg, Tristan van Leeuwen, and Felix Lucka “2DeteCT - A large 2D expandable, trainable, experimental Computed Tomography dataset for machine learning", Sci Data 10, 576 (2023) or arXiv:2306.05907 (2023)

    The upload consists of two files, namely:

    GrayBone90kV4Filter.zip: contains the raw measurement data.

    GrayBone90kV4FilterPreprocessed.mat: contains preprocessed data to be used in the provided MATLAB script for pseudo-dynamic tomography. It also contains a reference reconstruction obtained via the Filtered Back Projection (FBP) algorithm.

    In the GitHub repository https://github.com/ajinkyakadu/DynamicXRayCT, we provide the scripts to read and process the raw data, as well as all the scripts to reconstruct the dynamic solution using advanced algorithms. Furthermore, the raw data formats are described in great detail in the Kiss et al. 2023 paper referenced above.

  10. Internet of Things Discrete Choice Experiment - Security Camera data

    • rdr.ucl.ac.uk
    bin
    Updated Jan 14, 2020
    Cite
    Shane Johnson; John Blythe; Manning Matthew; Gabriel Wong (2020). Internet of Things Discrete Choice Experiment - Security Camera data [Dataset]. http://doi.org/10.5522/04/11568492.v1
    Available download formats: bin
    Dataset updated
    Jan 14, 2020
    Dataset provided by
    University College London
    Authors
    Shane Johnson; John Blythe; Manning Matthew; Gabriel Wong
    License

    Attribution-NoDerivs 4.0 (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/
    License information was derived automatically

    Description

    The data are for a discrete choice experiment that examined the impact of security labels, device functionality and price on consumer decision making. This dataset is for internet-connected security cameras. The article will be published in PLOS ONE.

  11. Data_Sheet_1_Development of a Nationally Agreed Core Clinical Dataset for Childhood Onset Uveitis

    • figshare.com
    pdf
    Updated Jun 17, 2023
    + more versions
    Cite
    Ameenat Lola Solebo; Salomey Kellett; Jugnoo Rahi; Reshma Pattani; Clive Edelsten; Andrew D. Dick; Alastair Denniston; The Pediatric Ocular Inflammation UNICORN Study Group (2023). Data_Sheet_1_Development of a Nationally Agreed Core Clinical Dataset for Childhood Onset Uveitis.PDF [Dataset]. http://doi.org/10.3389/fped.2022.881398.s001
    Available download formats: pdf
    Dataset updated
    Jun 17, 2023
    Dataset provided by
    Frontiers
    Authors
    Ameenat Lola Solebo; Salomey Kellett; Jugnoo Rahi; Reshma Pattani; Clive Edelsten; Andrew D. Dick; Alastair Denniston; The Pediatric Ocular Inflammation UNICORN Study Group
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: Childhood onset uveitis comprises a group of rare inflammatory disorders characterized by clinical heterogeneity, chronicity, and uncertainties around long-term outcomes. Standardized, detailed datasets with harmonized clinical definitions and terminology are needed to enable the clinical research necessary to stratify disease phenotype and interrogate the putative determinants of health outcomes. We aimed to develop a core routine clinical collection dataset for clinicians managing children with uveitis, suitable for multicenter and national clinical and experimental research initiatives.

    Methods: Development of the dataset was undertaken in three phases: phase 1, a rapid review of published datasets used in clinical research studies; phase 2, a scoping review of disease or drug registries, national cohort studies and core outcome sets; and phase 3, a survey of members of a multicenter clinical network of specialists. Phases 1 and 2 provided candidates for a long list of variables for the dataset. In phase 3, members of the UK's national network of stakeholder clinicians who manage childhood uveitis (the Pediatric Ocular Inflammation Group) were invited to select from this long list their essential items for the core clinical dataset, to identify any omissions, and to support or revise the clinical definitions. Variables which met a threshold of at least 95% agreement were selected for inclusion in the core clinical dataset.

    Results: The reviews identified 42 relevant studies and 9 disease or drug registries. In total, 138 discrete items were identified as candidates for the long list. Of the 41 specialists invited to take part in the survey, 31 responded (response rate 78%). The survey resulted in inclusion of 89 data items within the final core dataset: 81 items to be collected at the first visit, and 64 items at follow-up visits.

    Discussion: We report development of a novel consensus core clinical dataset for the routine collection of clinical data for children diagnosed with non-infectious uveitis. The dataset will provide a standardized approach to data capture able to support observational clinical studies embedded within routine clinical care and electronic patient record capture. It will be validated through a national prospective cohort study, the Uveitis in Childhood prospective national cohort study (UNICORNS).

  12. Dawnn benchmarking dataset: Simulated discrete clusters processing and label simulation

    • rdr.ucl.ac.uk
    application/gzip
    Updated May 4, 2023
    + more versions
    Cite
    George Hall; Sergi Castellano Hereza (2023). Dawnn benchmarking dataset: Simulated discrete clusters processing and label simulation [Dataset]. http://doi.org/10.5522/04/22616590.v1
    Available download formats: application/gzip
    Dataset updated
    May 4, 2023
    Dataset provided by
    University College London
    Authors
    George Hall; Sergi Castellano Hereza
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This project is a collection of files that allow users to reproduce the model development and benchmarking in "Dawnn: single-cell differential abundance with neural networks" (Hall and Castellano, under review). Dawnn is a tool for detecting differential abundance in single-cell RNAseq datasets and is available as an R package. Please contact us if you are unable to reproduce any of the analysis in our paper. The files in this collection correspond to the benchmarking dataset based on simulated discrete clusters.

    FILES: Data processing code

    • adapted_discrete_clusters_sim_milo_paper.R: lightly adapted code from Dann et al. to simulate single-cell RNAseq datasets that form discrete clusters.
    • generate_test_data_discrete_clusters_sim_milo_paper.R: R code to assign simulated labels to datasets generated by adapted_discrete_clusters_sim_milo_paper.R. Seurat objects are saved as cells_sim_discerete_clusters_gex_seed_*.rds; simulated labels are saved as benchmark_dataset_sim_discrete_clusters.csv.

    Resulting datasets

    • cells_sim_discerete_clusters_gex_seed_*.rds: Seurat objects generated by generate_test_data_discrete_clusters_sim_milo_paper.R.
    • benchmark_dataset_sim_discrete_clusters.csv: cell labels generated by generate_test_data_discrete_clusters_sim_milo_paper.R.

  13. Grassland Reptiles Remnant Patches

    • devweb.dga.links.com.au
    Updated Oct 4, 2024
    Cite
    ACT Government ACTMAPi (2024). Grassland Reptiles Remnant Patches [Dataset]. https://devweb.dga.links.com.au/data/dataset/grassland-reptiles-remnant-patches
    Available download formats: zip, arcgis geoservices rest api, geojson, html, csv, kml
    Dataset updated
    Oct 4, 2024
    Dataset authored and provided by
    ACT Government ACTMAPi
    Description

    Urban Habitat Connectivity Project (UHCP)

    Short description: A package of data containing potential habitat and fragmentation for seven species groups in the urban ACT. Each species group has two layer files. Connected habitat layers show potential core and corridor habitat for the species group, and connectivity/fragmentation between these habitat patches. Remnant patches layers contain areas which are predicted to be fragmented and inaccessible for the species group, but may be important for restoration activities. These layers are outputs of ecological connectivity modelling and have been developed using spatial data representing habitat and connectivity requirements specific to the species group. The following attributes are available in the data table for Connected Habitat layers:

    • Species Group* – indicates the species group of interest.
    • Patch ID – a unique identifier for each 'patch' of connected habitat; an ID given to group all habitat areas which are predicted to be connected to each other.
    • Habitat Type* – identifies if the polygon meets core or corridor habitat requirements, or if it is a remnant patch.
    • Habitat Number – a numeric value linked to Habitat Type to support statistics and symbology. Core habitat has a value of 0 and corridor habitat has a value of 1.
    • Patch Area (Ha)* – the area of the individual polygon in hectares.
    • Connected Habitat Area (Ha) – the total area of potential habitat in the connected patch, determined by summing the Patch Area for all polygons with the same Patch ID.
    • Shape area – the polygon's area, calculated by default in square meters.
    • Shape length – the length of the line enclosing the polygon, calculated by default in meters.

    * Also available in the data table for Remnant Patches layers.

    Spatial resolution: 1:10,000
    Coordinate system: GDA2020 MGA zone 55

    METHODS

    Data collection / creation: Spatial layers for habitat and barriers were created and input into a habitat connectivity/fragmentation model specifically designed for the species group. The model was developed using metrics derived from expert elicitation. These metrics quantified essential habitat and connectivity requirements for the species group, for example the preferred spacing of trees, the maximum crossable width of a road, the typical dispersal distance, etc. The model identified habitat and barriers to connectivity based on the metrics which could be mapped. Habitat was delineated by patch size to determine core and corridor habitat, and to remove areas which are too small to be functional. The habitat type is visible in the attribute table of the data.

    Connectivity between habitat patches depends on the species group's dispersal capacity and the availability of core habitat, suitable corridors and a path without barriers. To assess this, core habitat areas were buffered by the species group's dispersal distance. This identified how far an individual will move to find a new core habitat patch. Movement to this distance is dependent on a suitable path. All habitat was buffered by the distance the species can move outside habitat (through non-habitat areas). This identified how far an individual will move outside any habitat (core or corridor) before they require another habitat patch (i.e. how far they can travel between stepping stones). Connectivity is further complicated by impassable barriers. Barriers were used to slice up the dispersal buffers and identify 'dispersal patches', areas which an individual can move within. Fragmentation is seen when a barrier is present, patches are too far from core habitat, or corridor habitat is too far apart. A unique ID was applied to each patch and represents connectivity/fragmentation. The patches were intersected with habitat to apply the new ID to the habitat areas. The final model outputs identify areas of potential core, corridor or remnant (inaccessible) habitat. Core and corridor habitat are viewable in the connected habitat dataset, whilst remnant patches are available separately. The data was simplified using the Douglas-Peucker algorithm, a tolerance of 0.5-2 m, a minimum size of 2-5 m² for retention, and holes filled in if less than 20 m². Small adjoining slivers <20 m² were dissolved into neighbouring polygons to optimise drawing speeds. Please contact the project team for the model script or further details on the methodology.

    NOTES ON USE

    Quality: The habitat connectivity modelling used to produce the data was informed by work by the City of Melbourne (Kirk et al., 2018). The original methods were expanded on, with habitat and connectivity requirements (metrics) specific to the species group determined from expert elicitation, and further analysis to consider patch size for core or corridor patches. The expert elicitation process provided the best and most relevant quantitative description of habitat and barriers available (for a species group rather than a specific species). The input datasets were then tailored to the metrics for this project: existing datasets were refined to be relevant and reflect the metrics identified through expert elicitation, and new datasets were created where data was missing. All data was derived from existing authoritative sources and/or remotely sensed data. This data curation process ensured the input datasets, and resulting output, were relevant and fit for purpose.

    Limitations: This data should be considered indicative only, as there are limitations to the modelling process. It considers all habitat and barriers equally and as discrete objects (i.e. it applies a discrete boundary around a patch and does not account for gradients or flexible boundaries). The model predicts habitat and connectivity based on the data available; it does not assess whether a species is present or consider temporal variability. Some habitat requirements are not mapped (e.g. native vegetation, lack of predators) due to the lack of an accurate or complete dataset, and some of these requirements are critical to the success of the species group. These habitat requirements have been derived from expert elicitation and should be considered for an area of interest. The model assumes the input data is up to date and accurate. Many of the habitat and barrier datasets used as inputs into the models are in some way informed by remote sensing data, which has limitations such as potential for misclassification (e.g. bare ground and pavement could be confused). Additionally, remotely sensed data captures a point in time and will become outdated. Manual checks and improvements using supplementary data for specific sites have been completed to reduce as much error as possible.

    Data refinement: Unmapped habitat and connectivity requirements should be considered when using the data. The full list of known habitat and connectivity requirements for each species group, including those considered by the model and those unaccounted for, is available by request. Other data may also be used to track changes post-LiDAR capture; for example, new development footprints may be used to remove non-habitat areas at a faster rate than waiting for new LiDAR captures and re-running the model.

    SHARING

    Licenses/restrictions on use: Creative Commons By Attribution 4.0 (Australian Capital Territory)
    How to cite this data: ACT Government, 2023. Potential Habitat and Fragmentation in Urban ACT dataset, version 3. Polygon layer developed by the Office of Nature Conservation, Environment, Planning and Sustainable Development Directorate, Canberra.
    CONTACT: For accessibility issues or data enquiries please contact the Connecting Nature, Connecting People team at cncp@act.gov.au.

  14. BLM OR Structures Line Hub

    • catalog.data.gov
    • gbp-blm-egis.hub.arcgis.com
    Updated May 18, 2025
    + more versions
    Cite
    Bureau of Land Management (2025). BLM OR Structures Line Hub [Dataset]. https://catalog.data.gov/dataset/blm-or-structures-line-hub
    Dataset updated
    May 18, 2025
    Dataset provided by
    Bureau of Land Management (http://www.blm.gov/)
    Description

    STRCT_ARC: Structures are discrete, physically existing things that are built. Structures are things created to support treatment, recreation or other management activities. STRCT_ARC contains, but is not limited to, features such as pipelines, fences, and trails.

  15. Simantha: Simulation for Manufacturing

    • catalog.data.gov
    • data.nist.gov
    Updated Jul 29, 2022
    + more versions
    Cite
    National Institute of Standards and Technology (2022). Simantha: Simulation for Manufacturing [Dataset]. https://catalog.data.gov/dataset/simantha-simulation-for-manufacturing
    Dataset updated
    Jul 29, 2022
    Dataset provided by
    National Institute of Standards and Technologyhttp://www.nist.gov/
    Description

    Simantha is a discrete event simulation package written in Python that is designed to model the behavior of discrete manufacturing systems. Specifically, it focuses on asynchronous production lines with finite buffers. It also provides functionality for modeling the degradation and maintenance of machines in these systems. Classes for five basic manufacturing objects are included: source, machine, buffer, sink, and maintainer. These objects can be defined by the user and configured in different ways to model various real-world manufacturing systems. The object classes are also designed to be extensible so that they can be used to model more complex processes.

    In addition to modeling the behavior of existing systems, Simantha is also intended for use with simulation-based optimization and planning applications. For instance, users may be interested in evaluating alternative maintenance policies for a particular system. Estimating the expected system performance under each candidate policy will require a large number of simulation replications when the system is subject to a high degree of stochasticity. Simantha therefore supports parallel simulation replications to make this procedure more efficient.

    GitHub repository: https://github.com/usnistgov/simantha
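Simantha's own class API is not reproduced here, but the source → machine → finite buffer → sink pattern the description names can be sketched with a stdlib-only discrete-event loop. The function name, deterministic cycle times, and buffer capacity below are illustrative assumptions, not Simantha's API:

```python
import heapq

def simulate_two_machine_line(t1, t2, buffer_capacity, sim_time):
    """Discrete-event sketch of source -> M1 -> buffer -> M2 -> sink.

    M1 pulls from an unlimited source and blocks when the buffer is
    full; M2 starves when the buffer is empty. Cycle times t1 and t2
    are deterministic. Returns parts delivered to the sink by sim_time.
    """
    events, seq = [], 0            # priority queue of (time, seq, action)
    buffer, produced = 0, 0
    m1_blocked, m2_idle = False, True

    def schedule(time, action):
        nonlocal seq
        heapq.heappush(events, (time, seq, action))
        seq += 1

    schedule(t1, "m1_done")        # M1 starts its first part at t = 0

    while events:
        now, _, action = heapq.heappop(events)
        if now > sim_time:
            break
        if action == "m1_done":
            if buffer < buffer_capacity:
                buffer += 1
                schedule(now + t1, "m1_done")     # start the next part
                if m2_idle:                       # wake a starving M2
                    buffer -= 1
                    m2_idle = False
                    schedule(now + t2, "m2_done")
            else:
                m1_blocked = True                 # hold the part until space frees
        else:  # "m2_done"
            produced += 1
            if buffer > 0:                        # pull the next part
                buffer -= 1
                schedule(now + t2, "m2_done")
            else:
                m2_idle = True
            if m1_blocked and buffer < buffer_capacity:
                buffer += 1                       # held part enters the buffer
                m1_blocked = False
                schedule(now + t1, "m1_done")
    return produced
```

With t1=1 and t2=2, the slower machine governs throughput: `simulate_two_machine_line(1, 2, 2, 10)` returns 4, since the bottleneck M2 completes parts at t = 3, 5, 7, 9. Estimating throughput under stochastic cycle times or failures would be done by running many replications of a loop like this, which is the use case the package description highlights.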

  16. Fish Remnant Patches

    • arc-gis-hub-home-arcgishub.hub.arcgis.com
    • hub.arcgis.com
    • +2more
    Updated Nov 21, 2023
    + more versions
    Cite
    Australian Capital Territory Government (2023). Fish Remnant Patches [Dataset]. https://arc-gis-hub-home-arcgishub.hub.arcgis.com/datasets/ACTGOV::actgov-uhcp-urban-habitat-connectivity-fragmentation-?layer=2
    Explore at:
    Dataset updated
    Nov 21, 2023
    Dataset authored and provided by
    Australian Capital Territory Government
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    Urban Habitat Connectivity Project (UHCP)
    Short description: A package of data containing potential habitat and fragmentation for seven species groups in the urban ACT. Each species group has two layer files. Connected habitat layers show potential core and corridor habitat for the species group, and connectivity/fragmentation between these habitat patches. Remnant patches layers contain areas which are predicted to be fragmented and inaccessible for the species group, but may be important for restoration activities. These layers are outputs of ecological connectivity modelling and have been developed using spatial data representing habitat and connectivity requirements specific to the species group.

    The following attributes are available in the data table for Connected Habitat layers:
    Species Group* – indicates the species group of interest.
    Patch ID – a unique identifier for each 'patch' of connected habitat; an ID given to group all habitat areas which are predicted to be connected to each other.
    Habitat Type* – identifies whether the polygon meets core or corridor habitat requirements, or is a remnant patch.
    Habitat Number – a numeric value linked to Habitat Type to support statistics and symbology. Core habitat has a value of 0 and corridor habitat has a value of 1.
    Patch Area (Ha)* – the area of the individual polygon in hectares.
    Connected Habitat Area (Ha) – the total area of potential habitat in the connected patch, determined by summing the Patch Area for all polygons with the same Patch ID.
    Shape area – the polygon's area, calculated by default in square metres.
    Shape length – the length of the line enclosing the polygon, calculated by default in metres.
    * Also available in the data table for Remnant Patches layers.

    Spatial resolution: 1:10,000
    Coordinate system: GDA2020 MGA zone 55

    METHODS
    Data collection / creation: Spatial layers for habitat and barriers were created and input into a habitat connectivity/fragmentation model specifically designed for the species group. The model was developed using metrics derived from expert elicitation. These metrics quantified essential habitat and connectivity requirements for the species group, for example the preferred spacing of trees, the maximum crossable width of a road, the typical dispersal distance, etc. The model identified habitat and barriers to connectivity based on the metrics which could be mapped. Habitat was delineated by patch size to determine core and corridor habitat, and to remove areas which are too small to be functional. The habitat type is visible in the attribute table of the data.

    Connectivity between habitat patches depends on the species group's dispersal capacity and the availability of core habitat, suitable corridors and a path without barriers. To assess this, core habitat areas were buffered by the species group's dispersal distance, identifying how far an individual will move to find a new core habitat patch. Movement to this distance depends on a suitable path. All habitat was also buffered by the distance the species can move outside habitat (through non-habitat areas), identifying how far an individual will move outside any habitat (core or corridor) before it requires another habitat patch (i.e. how far it can travel between stepping stones).

    Connectivity is further complicated by impassable barriers. Barriers were used to slice up the dispersal buffers and identify 'dispersal patches': areas within which an individual can move. Fragmentation is seen when a barrier is present, patches are too far from core habitat, or corridor habitat is too far apart. A unique ID was applied to each patch and represents connectivity/fragmentation. The patches were intersected with habitat to apply the new ID to the habitat areas. The final model outputs identify areas of potential core, corridor or remnant (inaccessible) habitat. Core and corridor habitat are viewable in the connected habitat dataset, whilst remnant patches are available separately. The data was simplified using the Douglas-Peucker algorithm with a tolerance of 0.5-2 m, a minimum size of 2-5 m² for retention, and holes filled in if less than 20 m². Small adjoining slivers <20 m² were dissolved into neighbouring polygons to optimise drawing speeds. Please contact the project team for the model script or further details on the methodology.

    NOTES ON USE
    Quality: The habitat connectivity modelling used to produce the data was informed by work by the City of Melbourne (Kirk et al., 2018). The original methods were expanded on, with habitat and connectivity requirements (metrics) specific to the species group determined from expert elicitation, and further analysis to consider patch size for core or corridor patches. The expert elicitation process provided the best and most relevant quantitative description of habitat and barriers available (for a species group rather than a specific species). The input datasets were then tailored to the metrics for this project. Existing datasets were refined to be relevant and reflect the metrics identified through expert elicitation. New datasets were created where data was missing. All data was derived from existing authoritative sources and/or remotely sensed data. This data curation process ensured the input datasets, and the resulting output, were relevant and fit for purpose.

    Limitations: This data should be considered indicative only, as there are limitations to the modelling process. It considers all habitat and barriers equally and as discrete objects (i.e. it applies a discrete boundary around a patch and does not account for gradients or flexible boundaries). The model predicts habitat and connectivity based on the data available. It does not assess whether a species is present or consider temporal variability. Some habitat requirements are not mapped (e.g. native vegetation, lack of predators) because no accurate or complete dataset exists for them, yet some of these requirements are critical to the success of the species group. These requirements have been derived from expert elicitation and should be considered for any area of interest. The model assumes the input data is up to date and accurate. Many of the habitat and barrier datasets used as model inputs are informed in some way by remote sensing data, which has limitations such as potential misclassification (e.g. bare ground and pavement could be confused) and captures only a point in time, so it will become outdated. Manual checks and improvements using supplementary data for specific sites have been completed to reduce error as much as possible.

    Data refinement: Unmapped habitat and connectivity requirements should be considered when using the data. The full list of known habitat and connectivity requirements for each species group, including those considered by the model and those unaccounted for, is available on request. Other data may also be used to track changes after LiDAR capture; for example, new development footprints may be used to remove non-habitat areas faster than waiting for a new LiDAR capture and re-running the model.

    SHARING
    Licenses/restrictions on use: Creative Commons Attribution 4.0 (Australian Capital Territory)
    How to cite this data: ACT Government, 2023. Potential Habitat and Fragmentation in Urban ACT dataset, version 3. Polygon layer developed by the Office of Nature Conservation, Environment, Planning and Sustainable Development Directorate, Canberra.
    CONTACT
    For accessibility issues or data enquiries, please contact the Connecting Nature, Connecting People team: cncp@act.gov.au.
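The methodology above names the Douglas-Peucker algorithm for geometry simplification. The project's actual model script is only available from the team on request; as a generic, stdlib-only illustration of how the algorithm thins a polyline (points closer to the endpoint chord than the tolerance are dropped, the farthest point is kept and the line is split there recursively), with illustrative coordinates and tolerances:

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Simplify a polyline, keeping only points that deviate more than
    `tolerance` from the chord between the current segment's endpoints."""
    if len(points) < 3:
        return list(points)
    # find the point farthest from the chord between the endpoints
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]       # everything within tolerance
    # keep the farthest point and recurse on both halves
    left = douglas_peucker(points[: index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right                 # drop the duplicated split point
```

For example, a 0.1-unit zig-zag collapses to its endpoints under a 0.5 tolerance, while a 2-unit peak survives a 1.0 tolerance. Production GIS tooling typically wraps an equivalent routine with topology preservation, which a bare polyline sketch like this does not handle.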

  17. ACTGOV UHCP Urban Habitat Connectivity - Fragmentation

    • arc-gis-hub-home-arcgishub.hub.arcgis.com
    • devweb.dga.links.com.au
    • +1more
    Updated Nov 21, 2023
    Cite
    Australian Capital Territory Government (2023). ACTGOV UHCP Urban Habitat Connectivity - Fragmentation [Dataset]. https://arc-gis-hub-home-arcgishub.hub.arcgis.com/maps/ca3dcb6f58e94f93b16d4df882bb3c01
    Explore at:
    Dataset updated
    Nov 21, 2023
    Dataset authored and provided by
    Australian Capital Territory Government
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    Urban Habitat Connectivity Project (UHCP)
    Short description: A package of data containing potential habitat and fragmentation for seven species groups in the urban ACT. Each species group has two layer files. Connected habitat layers show potential core and corridor habitat for the species group, and connectivity/fragmentation between these habitat patches. Remnant patches layers contain areas which are predicted to be fragmented and inaccessible for the species group, but may be important for restoration activities. These layers are outputs of ecological connectivity modelling and have been developed using spatial data representing habitat and connectivity requirements specific to the species group.

    The following attributes are available in the data table for Connected Habitat layers:
    Species Group* – indicates the species group of interest.
    Patch ID – a unique identifier for each 'patch' of connected habitat; an ID given to group all habitat areas which are predicted to be connected to each other.
    Habitat Type* – identifies whether the polygon meets core or corridor habitat requirements, or is a remnant patch.
    Habitat Number – a numeric value linked to Habitat Type to support statistics and symbology. Core habitat has a value of 0 and corridor habitat has a value of 1.
    Patch Area (Ha)* – the area of the individual polygon in hectares.
    Connected Habitat Area (Ha) – the total area of potential habitat in the connected patch, determined by summing the Patch Area for all polygons with the same Patch ID.
    Shape area – the polygon's area, calculated by default in square metres.
    Shape length – the length of the line enclosing the polygon, calculated by default in metres.
    * Also available in the data table for Remnant Patches layers.

    Spatial resolution: 1:10,000
    Coordinate system: GDA2020 MGA zone 55

    METHODS
    Data collection / creation: Spatial layers for habitat and barriers were created and input into a habitat connectivity/fragmentation model specifically designed for the species group. The model was developed using metrics derived from expert elicitation. These metrics quantified essential habitat and connectivity requirements for the species group, for example the preferred spacing of trees, the maximum crossable width of a road, the typical dispersal distance, etc. The model identified habitat and barriers to connectivity based on the metrics which could be mapped. Habitat was delineated by patch size to determine core and corridor habitat, and to remove areas which are too small to be functional. The habitat type is visible in the attribute table of the data.

    Connectivity between habitat patches depends on the species group's dispersal capacity and the availability of core habitat, suitable corridors and a path without barriers. To assess this, core habitat areas were buffered by the species group's dispersal distance, identifying how far an individual will move to find a new core habitat patch. Movement to this distance depends on a suitable path. All habitat was also buffered by the distance the species can move outside habitat (through non-habitat areas), identifying how far an individual will move outside any habitat (core or corridor) before it requires another habitat patch (i.e. how far it can travel between stepping stones).

    Connectivity is further complicated by impassable barriers. Barriers were used to slice up the dispersal buffers and identify 'dispersal patches': areas within which an individual can move. Fragmentation is seen when a barrier is present, patches are too far from core habitat, or corridor habitat is too far apart. A unique ID was applied to each patch and represents connectivity/fragmentation. The patches were intersected with habitat to apply the new ID to the habitat areas. The final model outputs identify areas of potential core, corridor or remnant (inaccessible) habitat. Core and corridor habitat are viewable in the connected habitat dataset, whilst remnant patches are available separately. The data was simplified using the Douglas-Peucker algorithm with a tolerance of 0.5-2 m, a minimum size of 2-5 m² for retention, and holes filled in if less than 20 m². Small adjoining slivers <20 m² were dissolved into neighbouring polygons to optimise drawing speeds. Please contact the project team for the model script or further details on the methodology.

    NOTES ON USE
    Quality: The habitat connectivity modelling used to produce the data was informed by work by the City of Melbourne (Kirk et al., 2018). The original methods were expanded on, with habitat and connectivity requirements (metrics) specific to the species group determined from expert elicitation, and further analysis to consider patch size for core or corridor patches. The expert elicitation process provided the best and most relevant quantitative description of habitat and barriers available (for a species group rather than a specific species). The input datasets were then tailored to the metrics for this project. Existing datasets were refined to be relevant and reflect the metrics identified through expert elicitation. New datasets were created where data was missing. All data was derived from existing authoritative sources and/or remotely sensed data. This data curation process ensured the input datasets, and the resulting output, were relevant and fit for purpose.

    Limitations: This data should be considered indicative only, as there are limitations to the modelling process. It considers all habitat and barriers equally and as discrete objects (i.e. it applies a discrete boundary around a patch and does not account for gradients or flexible boundaries). The model predicts habitat and connectivity based on the data available. It does not assess whether a species is present or consider temporal variability. Some habitat requirements are not mapped (e.g. native vegetation, lack of predators) because no accurate or complete dataset exists for them, yet some of these requirements are critical to the success of the species group. These requirements have been derived from expert elicitation and should be considered for any area of interest. The model assumes the input data is up to date and accurate. Many of the habitat and barrier datasets used as model inputs are informed in some way by remote sensing data, which has limitations such as potential misclassification (e.g. bare ground and pavement could be confused) and captures only a point in time, so it will become outdated. Manual checks and improvements using supplementary data for specific sites have been completed to reduce error as much as possible.

    Data refinement: Unmapped habitat and connectivity requirements should be considered when using the data. The full list of known habitat and connectivity requirements for each species group, including those considered by the model and those unaccounted for, is available on request. Other data may also be used to track changes after LiDAR capture; for example, new development footprints may be used to remove non-habitat areas faster than waiting for a new LiDAR capture and re-running the model.

    SHARING
    Licenses/restrictions on use: Creative Commons Attribution 4.0 (Australian Capital Territory)
    How to cite this data: ACT Government, 2023. Potential Habitat and Fragmentation in Urban ACT dataset, version 3. Polygon layer developed by the Office of Nature Conservation, Environment, Planning and Sustainable Development Directorate, Canberra.
    CONTACT
    For accessibility issues or data enquiries, please contact the Connecting Nature, Connecting People team: cncp@act.gov.au.

