BLM's Visual Resource Management system provides a way to identify and evaluate scenic values to determine the appropriate levels of management. It also provides a way to analyze potential visual impacts and apply visual design techniques to ensure that surface-disturbing activities are in harmony with their surroundings. This is a two-stage process: Inventory (Visual Resource Inventory, VRI) and Analysis (Visual Resource Contrast Rating).
Description:
The Urban Visual Pollution Dataset is designed for the detection and evaluation of various visual pollutants present in urban environments. This dataset comprises street-level imagery captured by cameras mounted on moving vehicles, offering a comprehensive view of visual pollution in a specific urban area. As visual pollution becomes an increasingly recognized issue, this dataset provides a foundation for pioneering research and development in environmental management and urban planning.
Objective
The primary goal of this dataset is to support the development of automated systems for visual pollution classification. By leveraging convolutional neural networks (CNNs), researchers and developers can simulate human-like image recognition capabilities to identify and classify different types of visual pollutants. This work is essential for creating a "visual pollution score/index," a new metric that could become integral to urban environmental management. The dataset not only fosters innovation in AI and computer vision but also contributes to the broader understanding and mitigation of urban visual pollution.
Visual Pollution Types
The dataset covers a wide range of visual pollution categories, reflecting the diverse challenges faced by urban environments. These include:
Graffiti: Unauthorized art or vandalism on public or private property.
Faded Signage: Deteriorating signs that contribute to a neglected appearance.
Potholes: Surface depressions in roadways that can cause vehicle damage and accidents.
Garbage: Litter and improperly disposed waste in public areas.
Construction Road: Temporary or abandoned construction sites that disrupt the urban landscape.
Broken Signage: Damaged signs that may pose safety hazards and detract from the urban environment.
Bad Streetlight: Faulty or insufficient street lighting that affects visibility and safety.
Bad Billboard: Deteriorated or poorly maintained billboards that contribute to visual clutter.
Sand on Road: Accumulations of sand or debris that can obscure road markings and pose driving hazards.
Cluttered Sidewalk: Overcrowded pedestrian pathways with obstacles such as street vendors, debris, or parked vehicles.
Unkept Facade: Building exteriors that are poorly maintained, contributing to a dilapidated urban appearance.
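As an illustration of the CNN-based classification objective described above, the following sketch fine-tunes a pretrained image classifier on the eleven categories listed here. It is a minimal example under stated assumptions, not part of the dataset release: the directory layout (one folder per category), image size, and training settings are all placeholders.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # Assumed layout: data/train/<category_name>/*.jpg, one folder per pollution category.
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    train_set = datasets.ImageFolder("data/train", transform=transform)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

    # Fine-tune a pretrained backbone; the output units correspond to the categories above.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):  # small number of epochs, for illustration only
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()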
Dataset Composition
This dataset is composed of raw sensor camera inputs collected by a fleet of vehicles operating within a restricted geographic area in the Kingdom of Saudi Arabia (KSA). The imagery captures a wide array of urban scenes under different lighting and weather conditions, providing a robust dataset for training and testing machine learning models.
Applications and Use Cases
Automated Visual Pollution Detection: Training AI models to automatically identify and categorize visual pollutants in urban environments.
Urban Environmental Management: Developing tools to assess and mitigate visual pollution, leading to better urban planning and policy-making.
Public Awareness and Engagement: Creating platforms to raise awareness about visual pollution and encourage community-driven efforts to improve urban aesthetics.
Safety and Maintenance: Enhancing urban safety by identifying and addressing hazards like potholes, broken signage, and bad street lighting.
Potential Impact
The Urban Visual Pollution Dataset is poised to play a crucial role in shaping the future of urban environmental management. By enabling the development of sophisticated tools for detecting and evaluating visual pollution, this dataset supports efforts to create cleaner, safer, and more aesthetically pleasing urban spaces. The introduction of a visual pollution index could become a standard metric in urban planning, guiding interventions and policies to improve the quality of life in cities worldwide.
Future Directions
Future research could expand this dataset to include more geographic areas, different urban environments, and additional types of visual pollutants. There is also potential for integrating this dataset with other environmental data, such as air and noise pollution, to develop comprehensive urban health indices.
Conclusion
The Urban Visual Pollution Dataset is a critical resource for advancing the field of urban environmental management. By providing high-quality, diverse data, it empowers researchers and practitioners to address the growing challenge of visual pollution in cities, ultimately contributing to the development of more livable urban environments.
This dataset is sourced from Kaggle.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Long-range Pedestrian Dataset is curated for the visual entertainment sector, featuring a collection of outdoor-collected images with a high resolution of 3840 x 2160 pixels. This dataset is focused on long-distance pedestrian imagery, with each target pedestrian precisely labeled with a bounding box that closely fits the boundary of the pedestrian target, providing detailed data for scene composition and character placement in visual content.
This dataset has Visibility data from the NOAA NOS Center for Operational Oceanographic Products and Services (CO-OPS).
WARNING: These preliminary data have not been subjected to the National Ocean Service (NOS) quality control procedures and do not necessarily meet the criteria and standards of official NOS data. They are released for limited public use with appropriate caution.
WARNING:
- Queries for data MUST include stationID= and time>=.
- Queries USUALLY include time<= (the default end time corresponds to 'now').
- Queries MUST be for less than 30 days' worth of data.
- The data source isn't completely reliable. If your request returns no data when you think it should:
  - Try revising the request (e.g., a different time range).
  - The list of stations offering this data may be incorrect.
  - Sometimes a station or the entire data service is unavailable. Wait a while and try again.
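The constraints above map directly onto a tabledap-style query URL. The sketch below is illustrative only: the ERDDAP base URL, dataset ID, station identifier, and variable names are placeholders and assumptions, not taken from this record.

    from datetime import datetime, timedelta, timezone

    # Placeholder ERDDAP endpoint and dataset ID -- substitute the real CO-OPS values.
    BASE = "https://example-erddap-server/erddap/tabledap/example_visibility.csv"
    station_id = "1234567"  # hypothetical station identifier

    # Respect the constraints above: stationID= and time>= are required,
    # time<= is usual, and the window must stay under 30 days.
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=7)
    fmt = "%Y-%m-%dT%H:%M:%SZ"

    url = (
        f"{BASE}?time,stationID,visibility"   # requested columns (assumed names)
        f'&stationID="{station_id}"'
        f"&time>={start.strftime(fmt)}"
        f"&time<={end.strftime(fmt)}"
    )
    print(url)  # paste into a browser, or percent-encode the operators for use in scripts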
The National Hydrography Dataset Plus (NHDPlus) maps the lakes, ponds, streams, rivers and other surface waters of the United States. Created by the US EPA Office of Water and the US Geological Survey, the NHDPlus provides mean annual and monthly flow estimates for rivers and streams. Additional attributes provide connections between features, facilitating complicated analyses. For more information on the NHDPlus dataset, see the NHDPlus v2 User Guide.
Dataset Summary
- Phenomenon Mapped: Surface waters and related features of the United States and associated territories, not including Alaska.
- Geographic Extent: The United States, not including Alaska, Puerto Rico, Guam, US Virgin Islands, Marshall Islands, Northern Marianas Islands, Palau, Federated States of Micronesia, and American Samoa.
- Projection: Web Mercator Auxiliary Sphere.
- Visible Scale: Visible at all scales, but the layer draws best at scales larger than 1:1,000,000.
- Source: EPA and USGS.
- Update Frequency: There is no new data since this 2019 version, so no updates are planned.
- Publication Date: March 13, 2019.
Prior to publication, the NHDPlus network and non-network flowline feature classes were combined into a single flowline layer. Similarly, the NHDPlus Area and Waterbody feature classes were merged under a single schema. Attribute fields were added to the flowline and waterbody layers to simplify symbology and enhance the layer's pop-ups. Fields added include Pop-up Title, Pop-up Subtitle, On or Off Network (flowlines only), Esri Symbology (waterbodies only), and Feature Code Description. All other attributes are from the original NHDPlus dataset. No-data values of -9999 and -9998 were converted to Null values for many of the flowline fields.
What can you do with this layer?
Feature layers work throughout the ArcGIS system. Generally your workflow with feature layers will begin in ArcGIS Online or ArcGIS Pro. Below are just a few of the things you can do with a feature service in Online and Pro.
ArcGIS Online
- Add this layer to a map in the map viewer. The layer is limited to scales of approximately 1:1,000,000 or larger, but a vector tile layer created from the same data can be used at smaller scales to produce a web map that displays across the full range of scales. The layer, or a map containing it, can be used in an application.
- Change the layer's transparency and set its visibility range.
- Open the layer's attribute table and make selections. Selections made in the map or table are reflected in the other. Center on selection allows you to zoom to features selected in the map or table, and show selected records allows you to view the selected records in the table.
- Apply filters. For example, you can set a filter to show larger streams and rivers using the mean annual flow attribute or the stream order attribute.
- Change the layer's style and symbology.
- Add labels and set their properties.
- Customize the pop-up.
- Use as an input to the ArcGIS Online analysis tools. This layer works well as a reference layer with the trace downstream and watershed tools. The buffer tool can be used to draw protective boundaries around streams, and the extract data tool can be used to create copies of portions of the data.
ArcGIS Pro
- Add this layer to a 2D or 3D map.
- Use as an input to geoprocessing. For example, copy features allows you to select and then export portions of the data to a new feature class.
- Change the symbology and the attribute field used to symbolize the data.
- Open the table and make interactive selections with the map.
- Modify the pop-ups.
- Apply definition queries to create subsets of the layer.
This layer is part of the ArcGIS Living Atlas of the World, which provides an easy way to explore the landscape layers and many other beautiful and authoritative maps on hundreds of topics.
Questions? Please leave a comment below if you have a question about this layer, and we will get back to you as soon as possible.
The geoprocessing service Esri ArcGIS Server - Visibility_DMR 5G is a public service intended for running visibility analyses using the Digital Terrain Model of the Czech Republic of the 5th generation (DMR 5G). The service determines which area is visible from a chosen observer location up to a defined distance. When using the service, it is necessary to choose the observer location, specify the observer offset above the terrain, and define the distance for which the visibility analysis is required. The result of the analysis is a visibility field (area) represented by polygons that delimit the visible parts of the terrain.
The geoprocessing service is published as asynchronous. The result is passed to the client through the Result Map Service Visibility_DMR 5G (MapService). The result can also be downloaded from the server and saved to a local disk as a shapefile using a URL that is generated and sent by the geoprocessing service. The URL for downloading the result through a web client is published in the running service record that is sent from the server to the client.
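Because the service is asynchronous, a client typically submits a job, polls its status, and then retrieves the result. The sketch below follows the standard ArcGIS Server REST submitJob/jobs pattern; the service URL, task name, and parameter names are placeholders and assumptions, not taken from this record.

    import json
    import time
    import urllib.parse
    import urllib.request

    # Placeholder URL of the asynchronous geoprocessing task (assumption).
    TASK_URL = "https://example-arcgis-server/arcgis/rest/services/Visibility_DMR_5G/GPServer/Visibility"

    # Hypothetical parameter names for observer location, offset and analysis distance.
    params = {
        "observer_point": json.dumps({"x": 14.42, "y": 50.09,
                                      "spatialReference": {"wkid": 4326}}),
        "observer_offset": 1.8,   # metres above the terrain
        "distance": 5000,         # analysis radius in metres
        "f": "json",
    }

    def post(url, data):
        body = urllib.parse.urlencode(data).encode()
        with urllib.request.urlopen(url, body) as resp:
            return json.loads(resp.read())

    # 1) Submit the job.
    job = post(TASK_URL + "/submitJob", params)
    job_id = job["jobId"]

    # 2) Poll until the job finishes.
    while True:
        status = post(f"{TASK_URL}/jobs/{job_id}", {"f": "json"})
        if status["jobStatus"] in ("esriJobSucceeded", "esriJobFailed"):
            break
        time.sleep(5)

    # 3) Inspect the job record; it references the result map service / download URL.
    print(json.dumps(status, indent=2))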
The National Ocean Service (NOS) maintains a long-term database containing data from active and historic stations installed all over the United States and U.S. territories. Since the 1990s, NOAA's Center for Operational Oceanographic Products and Services (CO-OPS) has been collecting various meteorological data along the U.S. coastline, around the Great Lakes and connecting channels, as well as in various U.S. territories. Stations are configured for a variety of observation periods, depending upon the location. Some of these sensors are co-located with water level stations, while others are independent, dedicated meteorological stations. These data are used to support a variety of purposes, including but not limited to safe and efficient marine navigation and coastal hazards monitoring. Data are reported at standard 6-minute intervals.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Structured interviews were conducted with researchers from Comprehensive Open Distance eLearning institutions to examine current data curation practices. The study aimed to identify strategies that improve the discoverability and accessibility of research data submitted to the research data repository. The visibility of research output is crucial for academic recognition and the advancement of knowledge, as well as for complying with funder requirements to make provisions for data reuse and enable actionable and socially beneficial open science from publicly funded research projects.
The geoprocessing service Esri ArcGIS Server - Visibility_DMP 1G is a public service intended for running visibility analyses using the Digital Surface Model of the Czech Republic of the 1st generation (DMP 1G). The service determines which area is visible from a chosen observer location up to a defined distance. When using the service, it is necessary to choose the observer location, specify the observer offset above the terrain, and define the distance for which the visibility analysis is required. The result of the analysis is a visibility field (area) represented by polygons that delimit the visible parts of the terrain.
The geoprocessing service is published as asynchronous. The result is passed to the client through the Result Map Service Visibility_DMP 1G (MapService). The result can also be downloaded from the server and saved to a local disk as a shapefile using a URL that is generated and sent by the geoprocessing service. The URL for downloading the result through a web client is published in the running service record that is sent from the server to the client.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Intended for web map display in Portal web maps, web applications, and use in ArcGIS Pro. Source of feature class is yavgis.MISSDEADM.Townships from the production enterprise database. Published in Central AZ State Plane Coordinate System. No definition queries. Visibility range is 1:2,000,000.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This multi-subject and multi-session EEG dataset for modelling human visual object recognition (MSS) contains:
More details about the dataset are described as follows.
32 participants were recruited from college students in Beijing; 4 were female and 28 were male, with an age range of 21-33 years. A total of 100 sessions were conducted. Participants were paid and gave written informed consent. The study was conducted under the approval of the ethical committee of the Institute of Automation at the Chinese Academy of Sciences, approval number IA21-2410-020201.
After every 50 sequences, there was a break for the participants to rest. Each rapid serial sequence lasted approximately 7.5 seconds, starting with a 750ms blank screen with a white fixation cross, followed by 20 or 21 images presented at 5 Hz with a 50% duty cycle. The sequence ended with another 750ms blank screen.
After the rapid serial sequence, there was a 2-second interval during which participants were instructed to blink and then report whether a special image appeared in the sequence using a keyboard. During each run, 20 sequences were randomly inserted with additional special images at random positions. The special images are logos for brain-computer interfaces.
Each image was displayed for 1 second and was followed by 11 choice boxes (1 correct class box, 9 random class boxes, and 1 reject box). Participants were required to select the correct class of the displayed image using a mouse to increase their engagement. After the selection, a white fixation cross was displayed for 1 second in the centre of the screen to remind participants to pay attention to the upcoming task.
The stimuli are from two image databases, ImageNet and PASCAL. The final set consists of 10,000 images, with 500 images for each class.
The derivatives/annotations folder contains additional information about MSS:
The EEG signals were pre-processed using the MNE package, version 1.3.1, with Python 3.9.16. The data were sampled at a rate of 1,000 Hz, with a bandpass filter applied between 0.1 and 100 Hz. A notch filter was used to remove the 50 Hz power-line frequency. Epochs were created for each trial ranging from 0 to 500 ms relative to stimulus onset. No further preprocessing or artefact correction methods were applied in the technical validation. However, researchers may want to consider widely used preprocessing steps such as baseline correction or eye movement correction. After preprocessing, each session resulted in two matrices: an RSVP EEG data matrix of shape (8,000 image conditions × 122 EEG channels × 125 EEG time points) and a low-speed EEG data matrix of shape (400 image conditions × 122 EEG channels × 125 EEG time points).
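A minimal sketch of the preprocessing described above, using the MNE-Python API. The file name, raw format, and event extraction are assumptions, and the resampling step is only inferred from the reported 125 time points per 0-500 ms epoch.

    import mne

    # Placeholder file name; the actual raw format in the dataset may differ.
    raw = mne.io.read_raw("sub-01_ses-01_task-rsvp_eeg.vhdr", preload=True)

    # Band-pass 0.1-100 Hz and a 50 Hz notch, as described above.
    raw.filter(l_freq=0.1, h_freq=100.0)
    raw.notch_filter(freqs=50.0)

    # Epochs from 0 to 500 ms relative to stimulus onset, no baseline correction.
    events, event_id = mne.events_from_annotations(raw)
    epochs = mne.Epochs(raw, events, event_id=event_id,
                        tmin=0.0, tmax=0.5, baseline=None, preload=True)

    # The released matrices have 125 time points per epoch, which suggests the
    # 1,000 Hz recordings were downsampled (assumption: to roughly 250 Hz).
    epochs.resample(250.0)
    data = epochs.get_data()   # shape: (n_epochs, n_channels, n_times)
    print(data.shape)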
The MERL-RAV (MERL Reannotation of AFLW with Visibility) Dataset contains over 19,000 face images in a full range of head poses. Each face is manually labeled with the ground-truth locations of 68 landmarks, with the additional information of whether each landmark is unoccluded, self-occluded (due to extreme head poses), or externally occluded. The images were annotated by professional labelers, supervised by researchers at Mitsubishi Electric Research Laboratories (MERL).
The geoprocessing service Esri ArcGIS Server - Visibility_DMR 4G is a public service intended for running visibility analyses using the Digital Terrain Model of the Czech Republic of the 4th generation (DMR 4G). The service determines which area is visible from a chosen observer location up to a defined distance. When using the service, it is necessary to choose the observer location, specify the observer offset above the terrain, and define the distance for which the visibility analysis is required. The result of the analysis is a visibility field (area) represented by polygons that delimit the visible parts of the terrain.
The geoprocessing service is published as asynchronous. The result is passed to the client through the Result Map Service Visibility_DMR 4G (MapService). The result can also be downloaded from the server and saved to a local disk as a shapefile using a URL that is generated and sent by the geoprocessing service. The URL for downloading the result through a web client is published in the running service record that is sent from the server to the client.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Measurements of visibility, precipitation type and intensity, and the corresponding World Meteorological Organization (WMO) weather codes. The dataset provides rare high-quality meteorological observations from sea-ice regions of the Arctic Ocean. They enable analysis of meteorological conditions and provide context for other measurements and analyses associated with the expedition. The measurements are from the present weather sensor operating on Icebreaker Oden's 7th deck at 25 m above sea level during the Arctic Ocean 2018 (AO2018, also referred to as MOCCHA-ACAS-ICE) expedition to the central Arctic Ocean in August and September 2018. Visibility (up to a maximum range of 20 km), air temperature, and precipitation type and intensity were determined by the Vaisala PWD22 present weather sensor installed above Oden's bridge on the 7th deck, at a height of 27 m above sea level. The system was operated as part of the Stockholm University ACAS project. The system additionally reports instantaneous, 15-minute and 60-minute WMO present weather codes. Until 2018-08-13 20:10, the sensor was set to only report visibility. After this date, the full set of measurements was reported. Data from the system are combined into a cruise-length file. The data are time-averaged to both 1-minute and 30-minute intervals, to correspond with the micrometeorological averaging periods used for the mast sensors.
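The time-averaging described above can be reproduced with a straightforward resampling step. The sketch below assumes the cruise-length file has been loaded into a pandas DataFrame with a datetime index; the file name and column names are assumptions.

    import pandas as pd

    # Hypothetical file and column names; adjust to the actual cruise-length file.
    df = pd.read_csv("ao2018_pwd22.csv", parse_dates=["time"], index_col="time")

    # Average to the two intervals used for the micrometeorological mast sensors.
    vis_1min = df["visibility"].resample("1min").mean()
    vis_30min = df["visibility"].resample("30min").mean()

    print(vis_1min.head())
    print(vis_30min.head())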
https://www.nlm.nih.gov/databases/download/terms_and_conditions.html
This dataset corresponds to a collection of images and/or image-derived data available from National Cancer Institute Imaging Data Commons (IDC) [1]. This dataset was converted into DICOM representation and ingested by the IDC team. You can explore and visualize the corresponding images using IDC Portal here: NLM-Visible-Human-Project. You can use the manifests included in this Zenodo record to download the content of the collection following the Download instructions below.
The NLM Visible Human Project [2] has created publicly-available complete, anatomically detailed, three-dimensional representations of a human male body and a human female body. Specifically, the VHP provides a public-domain library of cross-sectional cryosection, CT, and MRI images obtained from one male cadaver and one female cadaver. The Visible Man data set was publicly released in 1994 and the Visible Woman in 1995.
The data sets were designed to serve as (1) a reference for the study of human anatomy, (2) public-domain data for testing medical imaging algorithms, and (3) a test bed and model for the construction of network-accessible image libraries. The VHP data sets have been applied to a wide range of educational, diagnostic, treatment planning, virtual reality, artistic, mathematical, and industrial uses. About 4,000 licensees from 66 countries were authorized to access the datasets. As of 2019, a license is no longer required to access the VHP datasets.
Courtesy of the U.S. National Library of Medicine. Release of this collection by IDC does not indicate or imply that NLM has endorsed its products/services/applications. Please see the Visible Human Project information page to learn more about the images and to obtain any supporting metadata for this collection. Note that this collection may not reflect the most current/accurate data available from NLM.
Citation guidelines can be found on the National Library of Medicine Terms and Conditions information page.
A manifest file's name indicates the IDC data release in which a version of the collection data was first introduced. For example, collection_id-idc_v8-aws.s5cmd corresponds to the contents of the collection_id collection introduced in IDC data release v8. If there is a subsequent version of this Zenodo page, it will indicate when a subsequent version of the corresponding collection was introduced.
- nlm_visible_human_project-idc_v15-aws.s5cmd: manifest of files available for download from public IDC Amazon Web Services buckets
- nlm_visible_human_project-idc_v15-gcs.s5cmd: manifest of files available for download from public IDC Google Cloud Storage buckets
- nlm_visible_human_project-idc_v15-dcf.dcf: Gen3 manifest (for details see https://learn.canceridc.dev/data/organization-of-data/guids-and-uuids)
Note that manifest files that end in -aws.s5cmd reference files stored in Amazon Web Services (AWS) buckets, while those ending in -gcs.s5cmd reference files in Google Cloud Storage. The actual files are identical and are mirrored between AWS and GCP.
Each of the manifests includes instructions in its header on how to download the included files.
To download the files using the .s5cmd manifests:
- install the idc-index package: pip install --upgrade idc-index
- download the files referenced in the .s5cmd manifest file: idc download manifest.s5cmd
To download the files using the .dcf manifest, see the manifest header.
Imaging Data Commons team has been funded in whole or in part with Federal funds from the National Cancer Institute, National Institutes of Health, under Task Order No. HHSN26110071 under Contract No. HHSN261201500003l.
[1] Fedorov, A., Longabaugh, W. J. R., Pot, D., Clunie, D. A., Pieper, S. D., Gibbs, D. L., Bridge, C., Herrmann, M. D., Homeyer, A., Lewis, R., Aerts, H. J. W., Krishnaswamy, D., Thiriveedhi, V. K., Ciausu, C., Schacherer, D. P., Bontempi, D., Pihl, T., Wagner, U., Farahani, K., Kim, E. & Kikinis, R. National Cancer Institute Imaging Data Commons: Toward Transparency, Reproducibility, and Scalability in Imaging Artificial Intelligence. RadioGraphics (2023). https://doi.org/10.1148/rg.230180
[2] Spitzer, V., Ackerman, M. J., Scherzinger, A. L. & Whitlock, D. The visible human male: a technical report. J. Am. Med. Inform. Assoc. 3, 118–130 (1996). https://doi.org/10.1136/jamia.1996.96236280
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Visibility viewsheds incorporate the influences of distance from the observer, object size, and the limits of human visual acuity to define the degree of visibility as a probability between 0 and 1. Average visibility viewsheds represent the average visibility value across all visibility viewsheds, thus representing a middle scenario relative to the maximum and minimum visibility viewsheds. Average visibility viewsheds can be used as a potential resource conflict screening tool as it relates to the Great Plains Wind Energy Programmatic Environmental Impact Statement. Data include binary and composite viewsheds, and average, maximum, minimum, and composite visibility viewsheds for the NPS unit. Viewsheds have been derived using a 30 m National Elevation Dataset (NED) digital elevation model.
Additional viewshed parameters: observer height (offset A) was set at 2 meters. A vertical development object height (offset B) was set at 110 meters, representing an average wind tower and associated blade height. A binary viewshed (1 visible, 0 not visible) was created for the defined NPS-unit-specific Key Observation Points (KOP). A composite viewshed is the visibility of multiple viewsheds combined into one. A visible value in a composite viewshed implies that, across all the combined binary viewsheds (one per key observation point across the NPS unit in this case), at least one of the sample points is visible. On a cell-by-cell basis throughout the study area of interest, the number of visible sample points is recorded in the composite viewshed. Composite viewsheds are a quick way to synthesize multiple viewsheds into one layer, thus giving an efficient and cursory overview of potential visual resource effects.
To summarize visibility across numerous viewsheds (e.g., multiple viewsheds per high-priority segment), three visibility scenario summary viewsheds have been derived:
1) A maximum visibility scenario is evaluated using a "Products" visibility viewshed, which represents the probability that all sample points are visible. Maximum visibility viewsheds are derived by multiplying the probability values of the individual visibility viewsheds.
2) A minimum visibility scenario is assessed using a "Fuzzy Sum" visibility viewshed. Minimum visibility viewsheds represent the probability that at least one sample point is visible, and are derived by calculating the fuzzy sum across the probability values of the individual visibility viewsheds.
3) Lastly, an average visibility scenario is created from an "Average" visibility calculation. Average visibility viewsheds represent the average visibility value across all visibility viewsheds, thus representing a middle scenario relative to the aforementioned maximum and minimum visibility viewsheds.
The equations for the maximum, average and minimum visibility viewsheds are:
- Maximum Visibility (products): p1 * p2 * ... * pn
- Average Visibility: (p1 + p2 + ... + pn) / n
- Minimum Visibility (fuzzy sum): 1 - ((1 - p1) * (1 - p2) * ... * (1 - pn))
Moving beyond a simplistic binary viewshed approach, visibility viewsheds define the degree of visibility as a probability between 0 and 1. Visibility viewsheds incorporate the influences of distance from the observer, object size (solar energy towers, troughs, panels, etc.), and the limits of human visual acuity to derive a fuzzy membership value.
A fuzzy membership value is a probability of visibility ranging between 0 and 1, where a value of one implies that the object would be easily visible under most conditions and for most viewers, while a lower value represents reduced visibility. The visibility viewshed calculation is performed using the modified fuzzy viewshed equations (Ogburn, D.E., 2006). Visibility viewsheds have been defined using: a foreground distance (b1) of 1 km, a visual arc threshold value of 1 minute (the limit of 20/20 vision), which is used in the object width multiplier calculation, and an object width value of 10 meters.
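As an illustration of how the three summary viewsheds combine the per-viewshed probabilities, the following sketch applies the product, average, and fuzzy-sum formulas to a stack of probability rasters. It is a minimal example using in-memory NumPy arrays, not the workflow used to produce this dataset.

    import numpy as np

    # Stack of visibility-probability rasters, one per key observation point
    # (values in [0, 1]); a small random example stands in for real data.
    rng = np.random.default_rng(0)
    p = rng.random((5, 100, 100))   # shape: (n_viewsheds, rows, cols)

    # Maximum visibility scenario ("Products"): probability that all points are visible.
    products = np.prod(p, axis=0)

    # Average visibility scenario: mean probability across all viewsheds.
    average = np.mean(p, axis=0)

    # Minimum visibility scenario ("Fuzzy Sum"): probability that at least one point is visible.
    fuzzy_sum = 1.0 - np.prod(1.0 - p, axis=0)

    print(products.shape, average.shape, fuzzy_sum.shape)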
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset contains raw visual images, visualized tactile images along the X- and Z-axes, and an Excel file that organizes every sample and their correspondence in order. The tactile images are interpolated from the raw haptic signal to align with the visual images. Both the visual and tactile images have an identical resolution of 620 x 410. The dataset consists of 743 records. Each record includes one visual image, two tactile images along the X and Z axes, and one defect segmentation image. Tactile image filenames ending with x and z denote the X and Z components, respectively. The samples in the dataset exhibit a wide range of colors and textures. Moreover, the dataset demonstrates the advantage of cross-modal data fusion. As a flexible material, leather may have defects on its surface and underside, which can be observed in the visual and tactile images, respectively. Combining visual and tactile images provides better information on the distribution of defects.
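The alignment step described above, interpolating a sparse haptic signal onto the visual image grid, can be sketched as follows; the sample coordinates, signal values, and the choice of linear interpolation are assumptions for illustration only.

    import numpy as np
    from scipy.interpolate import griddata

    # Hypothetical sparse haptic samples: (x, y) positions in image coordinates
    # and the corresponding tactile reading along one axis.
    rng = np.random.default_rng(0)
    points = rng.uniform([0, 0], [620, 410], size=(500, 2))
    values = rng.random(500)

    # Interpolate onto the 620 x 410 visual image grid so the two modalities align.
    grid_x, grid_y = np.meshgrid(np.arange(620), np.arange(410))
    tactile_image = griddata(points, values, (grid_x, grid_y), method="linear")

    print(tactile_image.shape)   # (410, 620): one tactile image aligned with the visual image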
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The images were captured by a fisheye camera, and a magnetic compass was used to acquire the orientation data. The datasets are split into two folders:
1) LEARN: In order to learn a new place, the robot camera captures 15 images over a 360-degree panorama. During this process, the robot stays still in order to avoid distortions in the representation of the place.
2) EXPLO: When exploring the environment (i.e. the rest of the time), the robot captures only 7 images per panorama, for the purpose of faster place recognition. Images are captured while the robot is moving. Various exploration panoramas are recorded around the trajectory performed in the learning panoramas (see traj.pdf).
The average distance between two learning panoramas is 0.93 +/- 0.03 meters
The average distance traveled during an exploration panorama is 0.71 +/- 0.01 meters
DATASET A
---------
- 20 meters long
- 22 learning panoramas (i.e. sets of 15 images captured while robot is stopped)
- 5 exploration trajectories
- A_on_learned: 29 exploration panoramas (i.e. sets of 7 images captured while robot is moving)
- A_parallel: 29 exploration panoramas
- A_diagonal1: 28 exploration panoramas
- A_diagonal2: 30 exploration panoramas
- A_diagonal3: 29 exploration panoramas
DATASET B
---------
- 20 meters long
- 21 learning panoramas (i.e. sets of 15 images captured while robot is stopped)
- 4 exploration trajectories
- B_on_learned: 29 exploration panoramas (i.e. sets of 7 images captured while robot is moving)
- B_parallel: 29 exploration panoramas
- B_diagonal1: 29 exploration panoramas
- B_diagonal2: 29 exploration panoramas
DATASET C
---------
- 23.1 meters long
- 25 learning panoramas (i.e. sets of 15 images captured while robot is stopped)
- 2 exploration trajectories
- C_on_learned: 34 exploration panoramas (i.e. sets of 7 images captured while robot is moving)
- C_parallel: 34 exploration panoramas
PANO_INFO FILE STRUCTURE
------------------------
Every folder containing images also contains an info file, named either learn_pano_info.SAVE or explo_pano_info.SAVE. Each line corresponds to an image. The structure is as follows (a short parsing sketch is given after the column list):
- column 1: id = image_id + 1
- column 2: azimuth of the center of the image in degrees/360 (value in [0,1])
- column 3: elevation of the center of the image; irrelevant in this database (equal to 0).
- column 4: type of panorama: equal to 1 for learning and 0 for exploration.
- column 5: end of panorama: equal to 1 if it corresponds to the last image of a panorama.
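A minimal parsing sketch for these info files, assuming they are plain whitespace-delimited text with one row per image (the .SAVE format is not documented here, so this is an assumption):

    import numpy as np

    # Load one info file; each row describes one image (see column list above).
    info = np.loadtxt("learn_pano_info.SAVE")

    image_ids = info[:, 0].astype(int) - 1           # column 1: id = image_id + 1
    azimuth_deg = info[:, 1] * 360.0                 # column 2: azimuth stored as degrees/360
    is_learning = info[:, 3] == 1                    # column 4: 1 = learning, 0 = exploration
    panorama_ends = np.flatnonzero(info[:, 4] == 1)  # column 5: last image of each panorama

    print(len(image_ids), "images,", len(panorama_ends), "panoramas")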
REFERENCES
----------
The dataset was used in the paper: Belkaid, M., Cuperlier, N., and Gaussier, P. Combining local and global visual information in context-based neurorobotic navigation. In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), pages 4947-4954, doi: 10.1109/IJCNN.2016.7727851, 2016.