100+ datasets found
  1. BLM CO VRI Visual Distance Zone Polygons

    • catalog.data.gov
    Updated Nov 20, 2024
    + more versions
    Cite
    Bureau of Land Management (2024). BLM CO VRI Visual Distance Zone Polygons [Dataset]. https://catalog.data.gov/dataset/blm-co-vri-visual-distance-zone-polygons
    Explore at:
    Dataset updated
    Nov 20, 2024
    Dataset provided by
    Bureau of Land Management (http://www.blm.gov/)
    Description

    BLM's Visual Resource Management system provides a way to identify and evaluate scenic values to determine the appropriate levels of management. It also provides a way to analyze potential visual impacts and apply visual design techniques to ensure that surface-disturbing activities are in harmony with their surroundings. This is a two-stage process: inventory (the Visual Resource Inventory, VRI) and analysis (the Visual Resource Contrast Rating).

  2. Urban Visual Pollution Dataset

    • paperswithcode.com
    Updated Apr 27, 2025
    + more versions
    Cite
    (2025). Urban Visual Pollution Dataset Dataset [Dataset]. https://paperswithcode.com/dataset/urban-visual-pollution-dataset
    Explore at:
    Dataset updated
    Apr 27, 2025
    Description


    The Urban Visual Pollution Dataset is designed for the detection and evaluation of various visual pollutants present in urban environments. This dataset comprises street-level imagery captured by cameras mounted on moving vehicles, offering a comprehensive view of visual pollution in a specific urban area. As visual pollution becomes an increasingly recognized issue, this dataset provides a foundation for pioneering research and development in environmental management and urban planning.

    Objective

    The primary goal of this dataset is to support the development of automated systems for visual pollution classification. By leveraging convolutional neural networks (CNNs), researchers and developers can simulate human-like image recognition capabilities to identify and classify different types of visual pollutants. This work is essential for creating a "visual pollution score/index," a new metric that could become integral to urban environmental management. The dataset not only fosters innovation in AI and computer vision but also contributes to the broader understanding and mitigation of urban visual pollution.
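
    As a rough illustration of this classification setup, the sketch below fine-tunes a standard torchvision CNN on the 11 pollution categories listed below; the directory layout, image size, and hyperparameters are illustrative assumptions, not part of the dataset release.

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    NUM_CLASSES = 11  # one class per pollution category listed below

    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    # Assumes the images have been arranged as root/<class_name>/<image>.jpg
    train_ds = datasets.ImageFolder("urban_visual_pollution/train", transform=tfm)
    loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the ImageNet head
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:          # one pass over the training images
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()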


    Visual Pollution Types

    The dataset covers a wide range of visual pollution categories, reflecting the diverse challenges faced by urban environments. These include:

    Graffiti: Unauthorized art or vandalism on public or private property.

    Faded Signage: Deteriorating signs that contribute to a neglected appearance.

    Potholes: Surface depressions in roadways that can cause vehicle damage and accidents.

    Garbage: Litter and improperly disposed waste in public areas.

    Construction Road: Temporary or abandoned construction sites that disrupt the urban landscape.

    Broken Signage: Damaged signs that may pose safety hazards and detract from the urban environment.

    Bad Streetlight: Faulty or insufficient street lighting that affects visibility and safety.

    Bad Billboard: Deteriorated or poorly maintained billboards that contribute to visual clutter.

    Sand on Road: Accumulations of sand or debris that can obscure road markings and pose driving hazards.

    Cluttered Sidewalk: Overcrowded pedestrian pathways with obstacles such as street vendors, debris, or parked vehicles.

    Unkept Facade: Building exteriors that are poorly maintained, contributing to a dilapidated urban appearance.

    Dataset Composition

    This dataset is composed of raw sensor camera inputs collected by a fleet of vehicles operating within a restricted geographic area in the Kingdom of Saudi Arabia (KSA). The imagery captures a wide array of urban scenes under different lighting and weather conditions, providing a robust dataset for training and testing machine learning models.

    Applications and Use Cases

    Automated Visual Pollution Detection: Training AI models to automatically identify and categorize visual pollutants in urban environments.

    Urban Environmental Management: Developing tools to assess and mitigate visual pollution, leading to better urban planning and policy-making.

    Public Awareness and Engagement: Creating platforms to raise awareness about visual pollution and encourage community-driven efforts to improve urban aesthetics.

    Safety and Maintenance: Enhancing urban safety by identifying and addressing hazards like potholes, broken signage, and bad street lighting.

    Potential Impact

    The Urban Visual Pollution Dataset is poised to play a crucial role in shaping the future of urban environmental management. By enabling the development of sophisticated tools for detecting and evaluating visual pollution, this dataset supports efforts to create cleaner, safer, and more aesthetically pleasing urban spaces. The introduction of a visual pollution index could become a standard metric in urban planning, guiding interventions and policies to improve the quality of life in cities worldwide.

    Future Directions

    Future research could expand this dataset to include more geographic areas, different urban environments, and additional types of visual pollutants. There is also potential for integrating this dataset with other environmental data, such as air and noise pollution, to develop comprehensive urban health indices.

    Conclusion

    The Urban Visual Pollution Dataset is a critical resource for advancing the field of urban environmental management. By providing high-quality, diverse data, it empowers researchers and practitioners to address the growing challenge of visual pollution in cities, ultimately contributing to the development of more livable urban environments.

    This dataset is sourced from Kaggle.

  3. Long-range Pedestrian Dataset

    • shaip.com
    • tl.shaip.com
    • +1more
    json
    Updated Nov 26, 2024
    Cite
    Shaip (2024). Long-range Pedestrian Dataset [Dataset]. https://www.shaip.com/offerings/human-animal-segmentation-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Nov 26, 2024
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Long-range Pedestrian Dataset is curated for the visual entertainment sector, featuring a collection of outdoor-collected images with a high resolution of 3840 x 2160 pixels. This dataset is focused on long-distance pedestrian imagery, with each target pedestrian precisely labeled with a bounding box that closely fits the boundary of the pedestrian target, providing detailed data for scene composition and character placement in visual content.

  4. NOS CO-OPS Meteorological Data, Visibility, 6-Minute

    • catalog.data.gov
    • data.amerigeoss.org
    Updated Jun 10, 2023
    + more versions
    Cite
    NOAA NOS COOPS (Point of Contact) (2023). NOS CO-OPS Meteorological Data, Visibility, 6-Minute [Dataset]. https://catalog.data.gov/dataset/nos-co-ops-meteorological-data-visibility-6-minute
    Explore at:
    Dataset updated
    Jun 10, 2023
    Dataset provided by
    National Oceanic and Atmospheric Administration (http://www.noaa.gov/)
    Description

    This dataset has Visibility data from the NOAA NOS Center for Operational Oceanographic Products and Services (CO-OPS). WARNING: These preliminary data have not been subjected to the National Ocean Service (NOS) Quality Control procedures, and do not necessarily meet the criteria and standards of official NOS data. They are released for limited public use with appropriate caution. WARNING:
    * Queries for data MUST include stationID= and time>=.
    * Queries USUALLY include time<= (the default end time corresponds to 'now').
    * Queries MUST be for less than 30 days' worth of data.
    * The data source isn't completely reliable. If your request returns no data when you think it should:
      * Try revising the request (e.g., a different time range).
      * The list of stations offering this data may be incorrect.
      * Sometimes a station or the entire data service is unavailable. Wait a while and try again.
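
    The query rules above map directly onto an ERDDAP tabledap request. The sketch below builds one such request; the ERDDAP base URL, dataset ID, station ID, and variable names are placeholders rather than values taken from this record, so check the CO-OPS ERDDAP catalog for the real ones.

    import requests

    BASE = "https://opendap.co-ops.nos.noaa.gov/erddap/tabledap"  # assumed ERDDAP endpoint
    DATASET_ID = "nosCoopsMV"                                     # hypothetical dataset ID

    url = (
        f"{BASE}/{DATASET_ID}.csv"
        "?time,visibility"
        "&stationID=%228454000%22"        # required: stationID= (string values are quoted)
        "&time>=2023-06-01T00:00:00Z"     # required: time>=
        "&time<=2023-06-07T00:00:00Z"     # keep the window well under 30 days
    )
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    print(response.text[:500])            # CSV header plus the first few rows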

  5. National Hydrography Dataset Plus Version 2.1

    • hub.arcgis.com
    • owdp-geo.hub.arcgis.com
    • +2more
    Updated Aug 16, 2022
    + more versions
    Cite
    Esri (2022). National Hydrography Dataset Plus Version 2.1 [Dataset]. https://hub.arcgis.com/maps/4bd9b6892530404abfe13645fcb5099a
    Explore at:
    Dataset updated
    Aug 16, 2022
    Dataset authored and provided by
    Esri (http://esri.com/)
    Area covered
    Description

    The National Hydrography Dataset Plus (NHDPlus) maps the lakes, ponds, streams, rivers, and other surface waters of the United States. Created by the US EPA Office of Water and the US Geological Survey, the NHDPlus provides mean annual and monthly flow estimates for rivers and streams. Additional attributes provide connections between features, facilitating complicated analyses. For more information on the NHDPlus dataset see the NHDPlus v2 User Guide.

    Dataset Summary
    Phenomenon Mapped: Surface waters and related features of the United States and associated territories, not including Alaska.
    Geographic Extent: The United States, not including Alaska, Puerto Rico, Guam, US Virgin Islands, Marshall Islands, Northern Marianas Islands, Palau, Federated States of Micronesia, and American Samoa.
    Projection: Web Mercator Auxiliary Sphere
    Visible Scale: Visible at all scales, but the layer draws best at scales larger than 1:1,000,000.
    Source: EPA and USGS
    Update Frequency: There is no new data since this 2019 version, so no updates are planned.
    Publication Date: March 13, 2019

    Prior to publication, the NHDPlus network and non-network flowline feature classes were combined into a single flowline layer. Similarly, the NHDPlus Area and Waterbody feature classes were merged under a single schema. Attribute fields were added to the flowline and waterbody layers to simplify symbology and enhance the layer's pop-ups. Fields added include Pop-up Title, Pop-up Subtitle, On or Off Network (flowlines only), Esri Symbology (waterbodies only), and Feature Code Description. All other attributes are from the original NHDPlus dataset. No-data values of -9999 and -9998 were converted to Null values for many of the flowline fields.

    What can you do with this layer?
    Feature layers work throughout the ArcGIS system. Generally your workflow with feature layers will begin in ArcGIS Online or ArcGIS Pro. Below are just a few of the things you can do with a feature service in Online and Pro.

    ArcGIS Online: Add this layer to a map in the map viewer. The layer is limited to scales of approximately 1:1,000,000 or larger, but a vector tile layer created from the same data can be used at smaller scales to produce a web map that displays across the full range of scales. The layer, or a map containing it, can be used in an application. Change the layer's transparency and set its visibility range. Open the layer's attribute table and make selections; selections made in the map or table are reflected in the other. Center on selection allows you to zoom to features selected in the map or table, and show selected records allows you to view the selected records in the table. Apply filters; for example, you can set a filter to show larger streams and rivers using the mean annual flow attribute or the stream order attribute. Change the layer's style and symbology, add labels and set their properties, and customize the pop-up. Use it as an input to the ArcGIS Online analysis tools; this layer works well as a reference layer with the trace downstream and watershed tools. The buffer tool can be used to draw protective boundaries around streams, and the extract data tool can be used to create copies of portions of the data.

    ArcGIS Pro: Add this layer to a 2D or 3D map. Use it as an input to geoprocessing; for example, copy features allows you to select and then export portions of the data to a new feature class. Change the symbology and the attribute field used to symbolize the data, open the table and make interactive selections with the map, modify the pop-ups, and apply definition queries to create subsets of the layer.

    This layer is part of the ArcGIS Living Atlas of the World, which provides an easy way to explore the landscape layers and many other beautiful and authoritative maps on hundreds of topics.
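
    As a small illustration of the filtering described above, the sketch below queries the flowline layer through the standard ArcGIS REST query endpoint; the service URL and field names (for example, a stream-order field) are placeholders, so check the layer's own metadata for the real ones.

    import requests

    LAYER_URL = "https://<server>/arcgis/rest/services/NHDPlusV21/FeatureServer/0"  # placeholder URL

    params = {
        "where": "StreamOrde >= 5",            # hypothetical stream-order filter
        "outFields": "GNIS_Name,StreamOrde",   # hypothetical field names
        "returnGeometry": "false",
        "f": "json",
    }
    response = requests.get(f"{LAYER_URL}/query", params=params, timeout=60)
    response.raise_for_status()
    for feature in response.json().get("features", [])[:10]:
        print(feature["attributes"])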

  6. Esri ArcGIS Server GEOPROCESSING SERVICE Esri ArcGIS Server - Visibility DMR 5G

    • data.europa.eu
    • gimi9.com
    esri_gp
    Updated Nov 1, 2016
    + more versions
    Cite
    (2016). Esri ArcGIS Server GEOPROCESSING SERVICE Esri ArcGIS Server - Visibility DMR 5G [Dataset]. https://data.europa.eu/data/datasets/cz-cuzk-gp_vis-dmr5g
    Explore at:
    Available download formats: esri_gp
    Dataset updated
    Nov 1, 2016
    Description

    The geoprocessing service Esri ArcGIS Server - Visibility_DMR 5G is a public service intended for visibility analysis using the Digital Terrain Model of the Czech Republic of the 5th generation (DMR 5G). The service determines which area is visible from a chosen observer location out to a defined distance. When using the service, it is necessary to choose the observer location, specify the observer offset above the terrain, and define the distance for which the visibility analysis is required. The result of the analysis is the visibility field (area), represented by polygons that delimit the visible parts of the terrain.

    The geoprocessing service is published as asynchronous. The result is passed to the client through the Result Map Service Visibility_DMR 5G (MapService). The result can also be downloaded from the server and saved to a local disk as a shapefile using a URL that is generated and sent by the geoprocessing service. The URL for downloading the result through a web client is published in the running-service record sent from the server to the client.
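
    Asynchronous ArcGIS Server geoprocessing tasks are normally driven through the submitJob endpoint and polled until the job finishes. The sketch below shows that general pattern; the task URL, parameter names, and coordinate values are assumptions for illustration and are not taken from this record (the service's REST page lists the real ones).

    import time
    import requests

    TASK_URL = "https://<server>/arcgis/rest/services/Visibility_DMR5G/GPServer/Visibility"  # placeholder

    params = {
        # Hypothetical parameter names: observer point, offset above terrain (m), analysis distance (m)
        "observer_point": '{"x": 14.42, "y": 50.09, "spatialReference": {"wkid": 4326}}',
        "observer_offset": 1.8,
        "distance": 5000,
        "f": "json",
    }
    job = requests.get(f"{TASK_URL}/submitJob", params=params, timeout=60).json()
    job_id = job["jobId"]

    # Poll until the asynchronous job succeeds or fails
    while True:
        status = requests.get(f"{TASK_URL}/jobs/{job_id}", params={"f": "json"}, timeout=60).json()
        if status["jobStatus"] in ("esriJobSucceeded", "esriJobFailed"):
            break
        time.sleep(5)
    print(status["jobStatus"])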

  7. Meteorological Data (including visibility)

    • catalog.data.gov
    • s.cnmilf.com
    • +1more
    Updated Jun 3, 2025
    + more versions
    Cite
    (Point of Contact, Custodian) (2025). Meteorological Data (including visibility) [Dataset]. https://catalog.data.gov/dataset/meteorological-data-including-visibility1
    Explore at:
    Dataset updated
    Jun 3, 2025
    Dataset provided by
    (Point of Contact, Custodian)
    Description

    The National Ocean Service (NOS) maintains a long-term database containing data from active and historic stations installed all over the United States and U.S. territories. Since the 1990s, NOAA's Center for Operational Oceanographic Products and Services (CO-OPS) has been collecting various meteorological data along the U.S. coastline, around the Great Lakes and connecting channels, as well as in various U.S. territories. Stations are configured for a variety of observation periods, depending upon the location. Some of these sensors are located with water level stations, while others are independent, dedicated meteorological stations. These data are used to support a variety of purposes including but not limited to safe and efficient marine navigation and coastal hazards monitoring. The standard reporting interval for these data is 6 minutes.

  8. Driving the visibility of research through the curation of datasets at a Comprehensive Open Distance e-Learning (CODeL) institution in the global South

    • figshare.com
    Updated Apr 28, 2025
    Cite
    Tinyiko Dube (2025). Driving the visibility of research through the curation of datasets at a Comprehensive Open Distance e-Learning (CODeL) institution in the global South [Dataset]. http://doi.org/10.25399/UnisaData.28877843.v1
    Explore at:
    Dataset updated
    Apr 28, 2025
    Dataset provided by
    University of South Africa
    Authors
    Tinyiko Dube
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Structured interviews were conducted with researchers from Comprehensive Open Distance e-Learning institutions to examine current data curation practices. The study aimed to identify strategies that improve the discoverability and accessibility of research data submitted to the research data repository. The visibility of research output is crucial for academic recognition and the advancement of knowledge, as well as for complying with funder requirements to make provisions for data reuse and enable actionable and socially beneficial open science from publicly funded research projects.

  9. Esri ArcGIS Server GEOPROCESSING SERVICE Esri ArcGIS Server - Visibility DMP 1G

    • data.europa.eu
    esri_gp
    Updated Nov 2, 2016
    + more versions
    Cite
    (2016). Esri ArcGIS Server GEOPROCESSING SERVICE Esri ArcGIS Server - Visibility DMP 1G [Dataset]. https://data.europa.eu/data/datasets/cz-cuzk-gp_vis-dmp1g
    Explore at:
    Available download formats: esri_gp
    Dataset updated
    Nov 2, 2016
    Description

    The geoprocessing service Esri ArcGIS Server - Visibility_DMP 1G is a public service intended for visibility analysis using the Digital Surface Model of the Czech Republic of the 1st generation (DMP 1G). The service determines which area is visible from a chosen observer location out to a defined distance. When using the service, it is necessary to choose the observer location, specify the observer offset above the terrain, and define the distance for which the visibility analysis is required. The result of the analysis is the visibility field (area), represented by polygons that delimit the visible parts of the terrain.

    The geoprocessing service is published as asynchronous. The result is passed to the client through the Result Map Service Visibility_DMP 1G (MapService). The result can also be downloaded from the server and saved to a local disk as a shapefile using a URL that is generated and sent by the geoprocessing service. The URL for downloading the result through a web client is published in the running-service record sent from the server to the client.

  10. Township Range

    • hub.arcgis.com
    • arc-gis-hub-home-arcgishub.hub.arcgis.com
    • +1more
    Updated May 13, 2022
    + more versions
    Cite
    Yavapai County ArcGIS Organization (2022). Township Range [Dataset]. https://hub.arcgis.com/datasets/YavGIS::plss-land-ordinance?layer=2
    Explore at:
    Dataset updated
    May 13, 2022
    Dataset authored and provided by
    Yavapai County ArcGIS Organization
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    Intended for web map display in Portal web maps, web applications, and use in ArcGIS Pro. Source of feature class is yavgis.MISSDEADM.Townships from the production enterprise database. Published in Central AZ State Plane Coordinate System. No definition queries. Visibility range is 1:2,000,000.

  11. Data from: A multi-subject and multi-session EEG dataset for modelling human...

    • openneuro.org
    Updated Jun 7, 2025
    Cite
    Shuning Xue; Bu Jin; Jie Jiang; Longteng Guo; Jin Zhou; Changyong Wang; Jing Liu (2025). A multi-subject and multi-session EEG dataset for modelling human visual object recognition [Dataset]. http://doi.org/10.18112/openneuro.ds005589.v1.0.3
    Explore at:
    Dataset updated
    Jun 7, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Shuning Xue; Bu Jin; Jie Jiang; Longteng Guo; Jin Zhou; Changyong Wang; Jing Liu
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Overview

    This multi-subject and multi-session EEG dataset for modelling human visual object recognition (MSS) contains:

    1. 122-channel EEG data collected from 32 participants during natural visual stimulation;
    2. 100 sessions in total, each lasting about 1.5 hours;
    3. each session consists of 4 RSVP runs and 4 low-speed presentation runs;
    4. each participant completed between 1 and 5 sessions on different days, around one week apart.

    More details about the dataset are described as follows.

    Participants

    32 participants (4 female, 28 male; aged 21-33 years) were recruited from college students in Beijing, and 100 sessions were conducted. The participants were paid and gave written informed consent. The study was conducted under the approval of the ethics committee of the Institute of Automation at the Chinese Academy of Sciences, approval number IA21-2410-020201.

    Experimental Procedures

    1. RSVP experiment: During the RSVP experiment, the participants were shown images at a rate of 5 Hz, and each run consisted of 2,000 trials. There were 20 image categories, with 100 images in each category, making up the 2,000 stimuli. The 100 images in each category were further divided into five image sequences, resulting in 100 image sequences per run. Each sequence was composed of 20 images from the same class, and the 100 sequences were presented in a pseudo-random order.

    After every 50 sequences, there was a break for the participants to rest. Each rapid serial sequence lasted approximately 7.5 seconds, starting with a 750ms blank screen with a white fixation cross, followed by 20 or 21 images presented at 5 Hz with a 50% duty cycle. The sequence ended with another 750ms blank screen.

    After the rapid serial sequence, there was a 2-second interval during which participants were instructed to blink and then report whether a special image appeared in the sequence using a keyboard. During each run, 20 sequences were randomly inserted with additional special images at random positions. The special images are logos for brain-computer interfaces.

    2. Low-speed experiment: During the low-speed experiment, each run consisted of 100 trials, with 1 second per image for a slower paradigm. The 100 stimuli were presented in a pseudo-random order and included 20 image categories, each containing 5 images. A break was given to the participants after every 20 images for them to rest.

    Each image was displayed for 1 second and was followed by 11 choice boxes (1 correct class box, 9 random class boxes, and 1 reject box). Participants were required to select the correct class of the displayed image using a mouse to increase their engagement. After the selection, a white fixation cross was displayed for 1 second in the centre of the screen to remind participants to pay attention to the upcoming task.

    Stimuli

    The stimuli are from two image databases, ImageNet and PASCAL. The final set consists of 10,000 images, with 500 images for each class.

    Annotations

    In the derivatives/annotations folder, there is additional information about MSS:

    1. Videos of two paradigms.
    2. Dataset_info: Main features of MSS.
    3. Experiment_schedule: Schedule of each session.
    4. Stimuli_source: Source categories of ImageNet and PASCAL.
    5. Subject_info: Age and sex of participants.
    6. Task_event: The meaning of eventID.

    Preprocessing

    The EEG signals were pre-processed using the MNE package, version 1.3.1, with Python 3.9.16. The data was sampled at a rate of 1,000 Hz with a bandpass filter applied between 0.1 and 100 Hz. A notch filter was used to remove 50 Hz power frequency. Epochs were created for each trial ranging from 0 to 500 ms relative to stimulus onset. No further preprocessing or artefact correction methods were applied in technical validation. However, researchers may want to consider widely used preprocessing steps such as baseline correction or eye movement correction. After the preprocessing, each session resulted in two matrices: RSVP EEG data matrix of shape (8,000 image conditions × 122 EEG channels × 125 EEG time points) and low-speed EEG data matrix of shape (400 image conditions × 122 EEG channels × 125 EEG time points).
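
    The preprocessing described above is straightforward to reproduce with MNE-Python. The sketch below follows the same steps (0.1-100 Hz band-pass, 50 Hz notch, 0-500 ms epochs); the raw file name and the reliance on annotation-based events are placeholders for a real recording from this BIDS dataset.

    import mne

    # Placeholder path: substitute a real EEG recording from the dataset
    raw = mne.io.read_raw_brainvision("sub-01_ses-01_task-rsvp_eeg.vhdr", preload=True)

    raw.filter(l_freq=0.1, h_freq=100.0)   # band-pass 0.1-100 Hz
    raw.notch_filter(freqs=50.0)           # remove 50 Hz power-line noise

    events, event_id = mne.events_from_annotations(raw)
    epochs = mne.Epochs(raw, events, event_id=event_id,
                        tmin=0.0, tmax=0.5,   # 0-500 ms relative to stimulus onset
                        baseline=None, preload=True)
    print(epochs.get_data().shape)  # (n_trials, n_channels, n_times)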

  12. MERL-RAV Dataset

    • paperswithcode.com
    Updated Mar 27, 2025
    + more versions
    Cite
    Abhinav Kumar; Tim K. Marks; Wenxuan Mou; Ye Wang; Michael Jones; Anoop Cherian; Toshiaki Koike-Akino; Xiaoming Liu; Chen Feng (2025). MERL-RAV Dataset [Dataset]. https://paperswithcode.com/dataset/merl-rav-dataset
    Explore at:
    Dataset updated
    Mar 27, 2025
    Authors
    Abhinav Kumar; Tim K. Marks; Wenxuan Mou; Ye Wang; Michael Jones; Anoop Cherian; Toshiaki Koike-Akino; Xiaoming Liu; Chen Feng
    Description

    The MERL-RAV (MERL Reannotation of AFLW with Visibility) Dataset contains over 19,000 face images in a full range of head poses. Each face is manually labeled with the ground-truth locations of 68 landmarks, with the additional information of whether each landmark is unoccluded, self-occluded (due to extreme head poses), or externally occluded. The images were annotated by professional labelers, supervised by researchers at Mitsubishi Electric Research Laboratories (MERL).

  13. Esri ArcGIS Server GEOPROCESSING SERVICE Esri ArcGIS Server - Visibility DMR 4G

    • data.europa.eu
    esri_gp
    Updated Nov 2, 2016
    + more versions
    Cite
    (2016). Esri ArcGIS Server GEOPROCESSING SERVICE Esri ArcGIS Server - Visibility DMR 4G [Dataset]. https://data.europa.eu/data/datasets/cz-cuzk-gp_vis-dmr4g
    Explore at:
    Available download formats: esri_gp
    Dataset updated
    Nov 2, 2016
    Description

    The geoprocessing service Esri ArcGIS Server - Visibility_DMR 4G is a public service intended for visibility analysis using the Digital Terrain Model of the Czech Republic of the 4th generation (DMR 4G). The service determines which area is visible from a chosen observer location out to a defined distance. When using the service, it is necessary to choose the observer location, specify the observer offset above the terrain, and define the distance for which the visibility analysis is required. The result of the analysis is the visibility field (area), represented by polygons that delimit the visible parts of the terrain.

    The geoprocessing service is published as asynchronous. The result is passed to the client through the Result Map Service Visibility_DMR 4G (MapService). The result can also be downloaded from the server and saved to a local disk as a shapefile using a URL that is generated and sent by the geoprocessing service. The URL for downloading the result through a web client is published in the running-service record sent from the server to the client.

  14. Present weather sensor visibility and precipitation data from the Arctic Ocean 2018 expedition

    • catalog-intaros.nersc.no
    • portal-intaros.nersc.no
    Updated Aug 14, 2020
    Cite
    (2020). Present weather sensor visibility and precipitation data from the Arctic Ocean 2018 expedition [Dataset]. https://catalog-intaros.nersc.no/dataset/present-weather-sensor-arctic-ocean-2018-expedition
    Explore at:
    Dataset updated
    Aug 14, 2020
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Arctic Ocean
    Description

    Measurements of visibility, precipitation type and intensity, and the corresponding World Meteorological Organization (WMO) weather codes. The dataset provides rare high-quality meteorological observations from sea-ice regions of the Arctic Ocean. They enable analysis of meteorological conditions and provide context for other measurements and analyses associated with the expedition. The measurements are from the present weather sensor operating on Icebreaker Oden's 7th deck at 25 m above sea level during the Arctic Ocean 2018 (AO2018, also referred to as MOCCHA-ACAS-ICE) expedition to the central Arctic Ocean in August and September 2018. Visibility (up to a maximum range of 20 km), air temperature, and precipitation type and intensity were determined by the Vaisala PWD22 present weather sensor installed above Oden's bridge on the 7th deck, at a height of 27 m above sea level. The system was operated as part of the Stockholm University ACAS project. The system additionally reports instantaneous, 15-minute, and 60-minute WMO present weather codes. Until 2018-08-13 20:10, the sensor was set to only report visibility; after this date, the full set of measurements was reported. Data from the system are combined into a cruise-length file. The data are time-averaged to both 1-minute and 30-minute intervals, to correspond with the micrometeorological averaging periods used for the mast sensors.
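
    The time-averaging step described above is easy to mirror with pandas. In this sketch the cruise-length file name and column names are assumptions about the released data, not documented values.

    import pandas as pd

    # Placeholder file and columns: adjust to the actual cruise-length file
    df = pd.read_csv("ao2018_present_weather.csv", parse_dates=["time"], index_col="time")

    avg_1min = df[["visibility", "precip_intensity"]].resample("1min").mean()
    avg_30min = df[["visibility", "precip_intensity"]].resample("30min").mean()
    print(avg_30min.head())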

  15. DICOM converted images for the NLM-Visible-Human-Project collection

    • zenodo.org
    bin
    Updated Jun 6, 2025
    Cite
    David Clunie; David Clunie; William Clifford; David Pot; Ulrike Wagner; Keyvan Farahani; Erika Kim; Andrey Fedorov; Andrey Fedorov; William Clifford; David Pot; Ulrike Wagner; Keyvan Farahani; Erika Kim (2025). DICOM converted images for the NLM-Visible-Human-Project collection [Dataset]. http://doi.org/10.5281/zenodo.12690050
    Explore at:
    Available download formats: bin
    Dataset updated
    Jun 6, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    David Clunie; David Clunie; William Clifford; David Pot; Ulrike Wagner; Keyvan Farahani; Erika Kim; Andrey Fedorov; Andrey Fedorov; William Clifford; David Pot; Ulrike Wagner; Keyvan Farahani; Erika Kim
    License

    https://www.nlm.nih.gov/databases/download/terms_and_conditions.html

    Description

    This dataset corresponds to a collection of images and/or image-derived data available from National Cancer Institute Imaging Data Commons (IDC) [1]. This dataset was converted into DICOM representation and ingested by the IDC team. You can explore and visualize the corresponding images using IDC Portal here: NLM-Visible-Human-Project. You can use the manifests included in this Zenodo record to download the content of the collection following the Download instructions below.

    Collection description

    The NLM Visible Human Project [2] has created publicly-available complete, anatomically detailed, three-dimensional representations of a human male body and a human female body. Specifically, the VHP provides a public-domain library of cross-sectional cryosection, CT, and MRI images obtained from one male cadaver and one female cadaver. The Visible Man data set was publicly released in 1994 and the Visible Woman in 1995.

    The data sets were designed to serve as (1) a reference for the study of human anatomy, (2) public-domain data for testing medical imaging algorithms, and (3) a test bed and model for the construction of network-accessible image libraries. The VHP data sets have been applied to a wide range of educational, diagnostic, treatment planning, virtual reality, artistic, mathematical, and industrial uses. About 4,000 licensees from 66 countries were authorized to access the datasets. As of 2019, a license is no longer required to access the VHP datasets.

    Courtesy of the U.S. National Library of Medicine. Release of this collection by IDC does not indicate or imply that NLM has endorsed its products/services/applications. Please see the Visible Human Project information page to learn more about the images and to obtain any supporting metadata for this collection. Note that this collection may not reflect the most current/accurate data available from NLM.

    Citation guidelines can be found on the National Library of Medicine Terms and Conditions information page.

    Files included

    A manifest file's name indicates the IDC data release in which a version of collection data was first introduced. For example, collection_id-idc_v8-aws.s5cmd corresponds to the contents of the collection_id collection introduced in IDC data release v8. If there is a subsequent version of this Zenodo page, it will indicate when a subsequent version of the corresponding collection was introduced.

    1. nlm_visible_human_project-idc_v15-aws.s5cmd: manifest of files available for download from public IDC Amazon Web Services buckets
    2. nlm_visible_human_project-idc_v15-gcs.s5cmd: manifest of files available for download from public IDC Google Cloud Storage buckets
    3. nlm_visible_human_project-idc_v15-dcf.dcf: Gen3 manifest (for details see https://learn.canceridc.dev/data/organization-of-data/guids-and-uuids)

    Note that manifest files that end in -aws.s5cmd reference files stored in Amazon Web Services (AWS) buckets, while those ending in -gcs.s5cmd reference files in Google Cloud Storage. The actual files are identical and are mirrored between AWS and GCP.

    Download instructions

    Each of the manifests include instructions in the header on how to download the included files.

    To download the files using .s5cmd manifests:

    1. install idc-index package: pip install --upgrade idc-index
    2. download the files referenced by manifests included in this dataset by passing the .s5cmd manifest file: idc download manifest.s5cmd.

    To download the files using .dcf manifest, see manifest header.
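
    The two s5cmd steps above can also be scripted directly. A minimal sketch, assuming the v15 AWS manifest from this record has already been downloaded to the working directory:

    import subprocess
    import sys

    # Step 1: install (or upgrade) the idc-index package
    subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade", "idc-index"], check=True)

    # Step 2: pass the .s5cmd manifest to `idc download`
    subprocess.run(["idc", "download", "nlm_visible_human_project-idc_v15-aws.s5cmd"], check=True)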

    Acknowledgments

    Imaging Data Commons team has been funded in whole or in part with Federal funds from the National Cancer Institute, National Institutes of Health, under Task Order No. HHSN26110071 under Contract No. HHSN261201500003l.

    References

    [1] Fedorov, A., Longabaugh, W. J. R., Pot, D., Clunie, D. A., Pieper, S. D., Gibbs, D. L., Bridge, C., Herrmann, M. D., Homeyer, A., Lewis, R., Aerts, H. J. W., Krishnaswamy, D., Thiriveedhi, V. K., Ciausu, C., Schacherer, D. P., Bontempi, D., Pihl, T., Wagner, U., Farahani, K., Kim, E. & Kikinis, R. National Cancer Institute Imaging Data Commons: Toward Transparency, Reproducibility, and Scalability in Imaging Artificial Intelligence. RadioGraphics (2023). https://doi.org/10.1148/rg.230180

    [2] Spitzer, V., Ackerman, M. J., Scherzinger, A. L. & Whitlock, D. The visible human male: a technical report. J. Am. Med. Inform. Assoc. 3, 118–130 (1996). https://doi.org/10.1136/jamia.1996.96236280

  16. mil0001_EH5_6h_ALBEDO_VIS: surface albedo visible range

    • wdc-climate.de
    • cera-www.dkrz.de
    Updated Jan 10, 2007
    + more versions
    Cite
    Jungclaus, Johann (2007). mil0001_EH5_6h_ALBEDO_VIS: surface albedo visible range [Dataset]. https://www.wdc-climate.de/ui/entry?acronym=mil0001_EH5_6h_ALBEDO_VIS
    Explore at:
    Dataset updated
    Jan 10, 2007
    Dataset provided by
    World Data Center (http://www.icsu-wds.org/)
    Authors
    Jungclaus, Johann
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Variables measured
    surface_albedo (visible_range)
    Description

    not filled

  17. Viewsheds from Key Observation Points for Homestead NM of America National Monument (HOME)

    • s.cnmilf.com
    • catalog.data.gov
    • +1more
    Updated Jun 5, 2024
    Cite
    National Park Service (2024). Viewsheds from Key Observation Points for Homestead NM of America National Monument (HOME) [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/viewsheds-from-key-observation-points-for-homestead-nm-of-america-national-monument-home
    Explore at:
    Dataset updated
    Jun 5, 2024
    Dataset provided by
    National Park Service (http://www.nps.gov/)
    Area covered
    United States
    Description

    Visibility viewsheds incorporate the influences of distance from the observer, object size, and the limits of human visual acuity to define the degree of visibility as a probability between 0 and 1. Average visibility viewsheds represent the average visibility value across all visibility viewsheds, thus representing a middle scenario relative to the maximum and minimum visibility viewsheds. Average visibility viewsheds can be used as a potential resource conflict screening tool as it relates to the Great Plains Wind Energy Programmatic Environmental Impact Statement. Data include binary and composite viewsheds, and average, maximum, minimum, and composite visibility viewsheds for the NPS unit. Viewsheds have been derived using a 30 m National Elevation Dataset (NED) digital elevation model.

    Additional viewshed parameters: observer height (offset A) was set at 2 meters, and a vertical development object height (offset B) was set at 110 meters, representing an average wind tower and associated blade height. A binary viewshed (1 visible, 0 not visible) was created for the defined NPS unit-specific Key Observation Points (KOP). A composite viewshed is the visibility of multiple viewsheds combined into one. A visible value in a composite viewshed implies that, across all the combined binary viewsheds (one per key observation point across the NPS unit in this case), at least one of the sample points is visible. On a cell-by-cell basis throughout the study area of interest, the numbers of visible sample points are recorded in the composite viewshed. Composite viewsheds are a quick way to synthesize multiple viewsheds into one layer, thus giving an efficient and cursory overview of potential visual resource effects.

    To summarize visibility viewsheds across numerous viewsheds (e.g., multiple viewsheds per high-priority segment), three visibility scenario summary viewsheds have been derived: 1) A maximum visibility scenario is evaluated using a "Products" visibility viewshed, which represents the probability that all sample points are visible; maximum visibility viewsheds are derived by multiplying the probability values per visibility viewshed. 2) A minimum visibility scenario is assessed using a "Fuzzy sum" visibility viewshed; minimum visibility viewsheds represent the probability that at least one sample point is visible, and are derived by calculating the fuzzy sum across the probability values per visibility viewshed. 3) Lastly, an average visibility scenario is created from an "Average" visibility calculation; average visibility viewsheds represent the average visibility value across all visibility viewsheds, thus representing a middle scenario relative to the aforementioned maximum and minimum visibility viewsheds. Equations for the maximum, average, and minimum visibility viewsheds are: Maximum (Products) Visibility = p1*p2*...*pn; Average Visibility = (p1+p2+...+pn)/n; Minimum (Fuzzy Sum) Visibility = 1-((1-p1)*(1-p2)*...*(1-pn)).

    Moving beyond a simplistic binary viewshed approach, visibility viewsheds define the degree of visibility as a probability between 0 and 1. Visibility viewsheds incorporate the influences of distance from the observer, object size (solar energy towers, troughs, panels, etc.), and the limits of human visual acuity to derive a fuzzy membership value. A fuzzy membership value is a probability of visibility ranging between 0 and 1, where a value of one implies that the object would be easily visible under most conditions and for most viewers, while a lower value represents reduced visibility. The visibility viewshed calculation is performed using the modified fuzzy viewshed equations (Ogburn, D.E. 2006). Visibility viewsheds have been defined using a foreground distance (b1) of 1 km, a visual arc threshold value of 1 minute (the limit of 20/20 vision), which is used in the object width multiplier calculation, and an object width value of 10 meters.
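
    For reference, the three summary calculations reduce to simple per-cell formulas over the list of per-viewshed visibility probabilities. A minimal sketch in plain Python, assuming the input values lie in [0, 1]:

    import math
    from typing import Sequence

    def products_visibility(p: Sequence[float]) -> float:
        """'Maximum' scenario: probability that all sample points are visible."""
        return math.prod(p)

    def average_visibility(p: Sequence[float]) -> float:
        """'Average' scenario: mean visibility value across the viewsheds."""
        return sum(p) / len(p)

    def fuzzy_sum_visibility(p: Sequence[float]) -> float:
        """'Minimum' scenario: probability that at least one sample point is visible."""
        return 1.0 - math.prod(1.0 - v for v in p)

    cell_probabilities = [0.8, 0.4, 0.1]   # example per-viewshed values for one cell
    print(products_visibility(cell_probabilities),
          average_visibility(cell_probabilities),
          fuzzy_sum_visibility(cell_probabilities))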

  18. CHL_Visual_Tactile_Dataset

    • data.mendeley.com
    Updated Jul 15, 2024
    + more versions
    Cite
    Shuchang Xu (2024). CHL_Visual_Tactile_Dataset [Dataset]. http://doi.org/10.17632/j7pz7x4wmb.4
    Explore at:
    Dataset updated
    Jul 15, 2024
    Authors
    Shuchang Xu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset contains raw visual images, visualized tactile images along the X- and Z-axes, and an Excel file that organizes every sample and their correspondence in order. The tactile images are interpolated from the raw haptic signal to align with the visual images. Both the visual and tactile images have an identical resolution of 620 × 410. The dataset consists of 743 records. Each record includes one visual image, two tactile images along the X and Z axes, and one defect segmentation image. Tactile image filenames ending with x and z denote the X and Z components, respectively. The samples in the dataset exhibit a wide range of colors and textures. Moreover, the dataset demonstrates the advantage of cross-modal data fusion. As a flexible material, leather may have defects on its surface and underside, which can be observed in the visual and tactile images, respectively. Combining visual and tactile images provides better information on the distribution of defects.
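
    The record structure described above lends itself to simple channel-wise fusion. The sketch below assembles one record; the directory layout and file names are assumptions made for illustration, with only the x/z suffix convention taken from the description.

    import numpy as np
    from PIL import Image

    # Placeholder paths for one of the 743 records
    visual = np.array(Image.open("records/0001_visual.png"))
    tactile_x = np.array(Image.open("records/0001_tactile_x.png").convert("L"))
    tactile_z = np.array(Image.open("records/0001_tactile_z.png").convert("L"))
    defect_mask = np.array(Image.open("records/0001_defect_segmentation.png").convert("L"))

    # All images share the 620 x 410 resolution, so they can be stacked channel-wise
    fused = np.dstack([visual, tactile_x, tactile_z])
    print(fused.shape, defect_mask.shape)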

  19. Viewsheds from Key Observation Points for Theodore Roosevelt National Park (THRO)

    • catalog.data.gov
    • data.amerigeoss.org
    Updated Jun 5, 2024
    Cite
    National Park Service (2024). Viewsheds from Key Observation Points for Theodore Roosevelt National Park (THRO) [Dataset]. https://catalog.data.gov/dataset/viewsheds-from-key-observation-points-for-theodore-roosevelt-national-park-thro
    Explore at:
    Dataset updated
    Jun 5, 2024
    Dataset provided by
    National Park Service (http://www.nps.gov/)
    Description

    Visibility viewsheds incorporate the influences of distance from the observer, object size, and the limits of human visual acuity to define the degree of visibility as a probability between 0 and 1. Average visibility viewsheds represent the average visibility value across all visibility viewsheds, thus representing a middle scenario relative to the maximum and minimum visibility viewsheds. Average visibility viewsheds can be used as a potential resource conflict screening tool as it relates to the Great Plains Wind Energy Programmatic Environmental Impact Statement. Data include binary and composite viewsheds, and average, maximum, minimum, and composite visibility viewsheds for the NPS unit. Viewsheds have been derived using a 30 m National Elevation Dataset (NED) digital elevation model.

    Additional viewshed parameters: observer height (offset A) was set at 2 meters, and a vertical development object height (offset B) was set at 110 meters, representing an average wind tower and associated blade height. A binary viewshed (1 visible, 0 not visible) was created for the defined NPS unit-specific Key Observation Points (KOP). A composite viewshed is the visibility of multiple viewsheds combined into one. A visible value in a composite viewshed implies that, across all the combined binary viewsheds (one per key observation point across the NPS unit in this case), at least one of the sample points is visible. On a cell-by-cell basis throughout the study area of interest, the numbers of visible sample points are recorded in the composite viewshed. Composite viewsheds are a quick way to synthesize multiple viewsheds into one layer, thus giving an efficient and cursory overview of potential visual resource effects.

    To summarize visibility viewsheds across numerous viewsheds (e.g., multiple viewsheds per high-priority segment), three visibility scenario summary viewsheds have been derived: 1) A maximum visibility scenario is evaluated using a "Products" visibility viewshed, which represents the probability that all sample points are visible; maximum visibility viewsheds are derived by multiplying the probability values per visibility viewshed. 2) A minimum visibility scenario is assessed using a "Fuzzy sum" visibility viewshed; minimum visibility viewsheds represent the probability that at least one sample point is visible, and are derived by calculating the fuzzy sum across the probability values per visibility viewshed. 3) Lastly, an average visibility scenario is created from an "Average" visibility calculation; average visibility viewsheds represent the average visibility value across all visibility viewsheds, thus representing a middle scenario relative to the aforementioned maximum and minimum visibility viewsheds. Equations for the maximum, average, and minimum visibility viewsheds are: Maximum (Products) Visibility = p1*p2*...*pn; Average Visibility = (p1+p2+...+pn)/n; Minimum (Fuzzy Sum) Visibility = 1-((1-p1)*(1-p2)*...*(1-pn)).

    Moving beyond a simplistic binary viewshed approach, visibility viewsheds define the degree of visibility as a probability between 0 and 1. Visibility viewsheds incorporate the influences of distance from the observer, object size (solar energy towers, troughs, panels, etc.), and the limits of human visual acuity to derive a fuzzy membership value. A fuzzy membership value is a probability of visibility ranging between 0 and 1, where a value of one implies that the object would be easily visible under most conditions and for most viewers, while a lower value represents reduced visibility. The visibility viewshed calculation is performed using the modified fuzzy viewshed equations (Ogburn, D.E. 2006). Visibility viewsheds have been defined using a foreground distance (b1) of 1 km, a visual arc threshold value of 1 minute (the limit of 20/20 vision), which is used in the object width multiplier calculation, and an object width value of 10 meters.

  20. A dataset for robotic outdoor visual navigation with multiple passages...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jan 24, 2020
    Cite
    Marwen Belkaid; Marwen Belkaid; Nicolas Cuperlier; Nicolas Cuperlier; Philippe Gaussier; Philippe Gaussier (2020). A dataset for robotic outdoor visual navigation with multiple passages through trajectory segments [Dataset]. http://doi.org/10.5281/zenodo.1168075
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Marwen Belkaid; Marwen Belkaid; Nicolas Cuperlier; Nicolas Cuperlier; Philippe Gaussier; Philippe Gaussier
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The images were captured by a fisheye camera, and a magnetic compass was used to acquire the orientation data. The datasets are split into two folders:
    1) LEARN: In order to learn a new place, the robot camera captures 15 images over a 360-degree panorama. During this process, the robot stays still in order to avoid distortions in the representation of the place.
    2) EXPLO: When exploring the environment (i.e. the rest of the time), the robot only captures 7 images per panorama, for the purpose of faster place recognition. Images are captured while the robot is moving. Various exploration panoramas are recorded around the trajectory performed in the learning panoramas (see traj.pdf).

    The average distance between two learning panoramas is 0.93 +/- 0.03 meters.
    The average distance traveled during an exploration panorama is 0.71 +/- 0.01 meters.

    DATASET A
    ---------
    - 20 meters long
    - 22 learning panoramas (i.e. sets of 15 images captured while robot is stopped)
    - 5 exploration trajectories
    - A_on_learned: 29 exploration panoramas (i.e. sets of 7 images captured while robot is moving)
    - A_parallel: 29 exploration panoramas
    - A_diagonal1: 28 exploration panoramas
    - A_diagonal2: 30 exploration panoramas
    - A_diagonal3: 29 exploration panoramas

    DATASET B
    ---------
    - 20 meters long
    - 21 learning panoramas (i.e. sets of 15 images captured while robot is stopped)
    - 4 exploration trajectories
    - B_on_learned: 29 exploration panoramas (i.e. sets of 7 images captured while robot is moving)
    - B_parallel: 29 exploration panoramas
    - B_diagonal1: 29 exploration panoramas
    - B_diagonal2: 29 exploration panoramas

    DATASET C
    ---------
    - 23.1 meters long
    - 25 learning panoramas (i.e. sets of 15 images captured while robot is stopped)
    - 2 exploration trajectories
    - C_on_learned: 34 exploration panoramas (i.e. sets of 7 images captured while robot is moving)
    - C_parallel: 34 exploration panoramas



    PANO_INFO FILE STRUCTURE
    ------------------------
    Every folder containing images also contains an info file, named either learn_pano_info.SAVE or explo_pano_info.SAVE. Each line corresponds to an image (a small parsing sketch follows this list). The structure is the following:
    - column 1: id = image_id + 1
    - column 2: azimuth of the center of the image in degrees/360 (value in [0,1])
    - column 3: elevation of the center of the image; irrelevant in this database (equal to 0).
    - column 4: type of panorama: equal to 1 if learning and 0 if exploration.
    - column 5: end of panorama: equal to 1 if it corresponds to the last image of a panorama.
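
    A minimal parser for this column layout, assuming whitespace-separated numeric values per line (the separator and directory layout are assumptions about the on-disk format):

    from pathlib import Path

    def read_pano_info(path):
        rows = []
        for line in Path(path).read_text().splitlines():
            if not line.strip():
                continue
            cols = line.split()
            rows.append({
                "image_id": int(float(cols[0])) - 1,    # column 1 stores image_id + 1
                "azimuth_deg": float(cols[1]) * 360.0,  # column 2 stores azimuth / 360
                "elevation": float(cols[2]),            # unused in this database
                "is_learning": float(cols[3]) == 1.0,   # 1 = learning, 0 = exploration
                "end_of_panorama": float(cols[4]) == 1.0,
            })
        return rows

    rows = read_pano_info("DATASET_A/LEARN/learn_pano_info.SAVE")  # placeholder path
    print(rows[:2])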


    REFERENCES
    ----------
    The dataset was used in the paper: Belkaid, M., Cuperlier, N., and Gaussier, P. Combining local and global visual information in context-based neurorobotic navigation. In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), pages 4947-4954, doi: 10.1109/IJCNN.2016.7727851, 2016.
