100+ datasets found
  1. Converting analog interpretive data to digital formats for use in database...

    • datadiscoverystudio.org
    Updated Jun 6, 2008
    Cite
    (2008). Converting analog interpretive data to digital formats for use in database and GIS applications [Dataset]. http://datadiscoverystudio.org/geoportal/rest/metadata/item/ed9bb80881c64dc38dfc614d7d454022/html
    Dataset updated
    Jun 6, 2008
    Description

    Link to the ScienceBase Item Summary page for the item described by this metadata record. Service Protocol: Link to the ScienceBase Item Summary page for the item described by this metadata record. Application Profile: Web Browser. Link Function: information

  2. Geodatabase for the Baltimore Ecosystem Study Spatial Data

    • search.dataone.org
    • portal.edirepository.org
    Updated Apr 1, 2020
    Cite
    Spatial Analysis Lab; Jarlath O'Neal-Dunne; Morgan Grove (2020). Geodatabase for the Baltimore Ecosystem Study Spatial Data [Dataset]. https://search.dataone.org/view/https%3A%2F%2Fpasta.lternet.edu%2Fpackage%2Fmetadata%2Feml%2Fknb-lter-bes%2F3120%2F150
    Dataset updated
    Apr 1, 2020
    Dataset provided by
    Long Term Ecological Research Network (http://www.lternet.edu/)
    Authors
    Spatial Analysis Lab; Jarlath O'Neal-Dunne; Morgan Grove
    Time period covered
    Jan 1, 1999 - Jun 1, 2014
    Description

    The establishment of a BES Multi-User Geodatabase (BES-MUG) allows for the storage, management, and distribution of geospatial data associated with the Baltimore Ecosystem Study. At present, BES data is distributed over the internet via the BES website. While having geospatial data available for download is a vast improvement over having the data housed at individual research institutions, it still suffers from some limitations. BES-MUG overcomes these limitations, improving the quality of the geospatial data available to BES researchers and thereby leading to more informed decision-making.

    BES-MUG builds on Environmental Systems Research Institute's (ESRI) ArcGIS and ArcSDE technology. ESRI was selected because its geospatial software offers robust capabilities. ArcGIS is implemented agency-wide within the USDA and is the predominant geospatial software package used by collaborating institutions. Commercially available enterprise database packages (DB2, Oracle, SQL) provide an efficient means to store, manage, and share large datasets. However, standard database capabilities are limited with respect to geographic datasets because they lack the ability to deal with complex spatial relationships. By using ESRI's ArcSDE (Spatial Database Engine) in conjunction with database software, geospatial data can be handled much more effectively through the implementation of the Geodatabase model. Through ArcSDE and the Geodatabase model the database's capabilities are expanded, allowing for multi-user editing, intelligent feature types, and the establishment of rules and relationships. ArcSDE also allows users to connect to the database using ArcGIS software without being burdened by the intricacies of the database itself.

    For an example of how BES-MUG will help improve the quality and timeliness of BES geospatial data, consider a census block group layer that is in need of updating. Rather than the researcher downloading the dataset, editing it, and resubmitting it through ORS, access rules will allow the authorized user to edit the dataset over the network. Established rules will ensure that attribute and topological integrity is maintained, so that key fields are not left blank and block group boundaries stay within tract boundaries. Metadata will automatically be updated to show who edited the dataset and when, in the event any questions arise.

    Currently, a functioning prototype Multi-User Database has been developed for BES at the University of Vermont Spatial Analysis Lab, using ArcSDE and IBM's DB2 Enterprise Database as the back-end architecture. This database, which is currently only accessible to those on the UVM campus network, will shortly be migrated to a Linux server where it will be accessible for database connections over the Internet. Passwords can then be handed out to all interested researchers on the project, who will be able to make a database connection through the Geographic Information Systems software interface on their desktop computer.

    This database will include a very large number of thematic layers, currently divided into biophysical, socio-economic, and imagery categories. Biophysical includes data on topography, soils, forest cover, habitat areas, hydrology, and toxics. Socio-economic includes political and administrative boundaries, transportation and infrastructure networks, property data, census data, household survey data, parks, protected areas, land use/land cover, zoning, public health, and historic land use change. Imagery includes a variety of aerial and satellite imagery. See the readme: http://96.56.36.108/geodatabase_SAL/readme.txt See the file listing: http://96.56.36.108/geodatabase_SAL/diroutput.txt

  3. Land-Use Conflict Identification Strategy (LUCIS) Models

    • catalog.data.gov
    • cloud.csiss.gmu.edu
    • +3more
    Updated Nov 30, 2020
    Cite
    University of Idaho (2020). Land-Use Conflict Identification Strategy (LUCIS) Models [Dataset]. https://catalog.data.gov/dataset/land-use-conflict-identification-strategy-lucis-models
    Dataset updated
    Nov 30, 2020
    Dataset provided by
    University of Idaho
    Description

    The downloadable ZIP file contains model documentation and contact information for the model creator. For more information, or a copy of the project report which provides greater model detail, please contact Ryan Urie - traigo12@gmail.com.

    This model was created from February through April 2010 as a central component of the developer's master's project in Bioregional Planning and Community Design at the University of Idaho to provide a tool for identifying appropriate locations for various land uses based on a variety of user-defined social, economic, ecological, and other criteria. It was developed using the Land-Use Conflict Identification Strategy developed by Carr and Zwick (2007). The purpose of this model is to allow users to identify suitable locations within a user-defined extent for any land use based on any number of social, economic, ecological, or other criteria the user chooses. The model as it is currently composed was designed to identify highly suitable locations for new residential, commercial, and industrial development in Kootenai County, Idaho using criteria, evaluations, and weightings chosen by the model's developer. After criteria were chosen, one or more data layers were gathered for each criterion from public sources. These layers were processed to produce a 60 m resolution raster showing the suitability of each criterion across the county. These criteria were ultimately combined with a weighted sum to produce an overall development suitability raster.

    The model is intended to serve only as an example of how a GIS-based land-use suitability analysis can be conceptualized and implemented using ArcGIS ModelBuilder, and under no circumstances should the model's outputs be applied to real-world decisions or activities. The model was designed to be extremely flexible so that later users may determine their own land-use suitability, suitability criteria, evaluation rationale, and criteria weights. As this was the first project of its kind completed by the model developer, no guarantees are made as to the quality of the model or the absence of errors.

    This model has a hierarchical structure in which some forty individual land-use suitability criteria are combined by weighted summation into several land-use goals, which are again combined by weighted summation to yield a final land-use suitability layer. As such, any inconsistencies or errors anywhere in the model tend to reveal themselves in the final output, and the model is in a sense self-testing. For example, each individual criterion is presented as a raster with values from 1-9 in a defined spatial extent. Inconsistencies at any point in the model will reveal themselves in the final output in the form of an extent different from that desired, missing values, or values outside the 1-9 range.

    This model was created using the ArcGIS ModelBuilder function of ArcGIS 9.3. It was based heavily on the recommendations found in the text "Smart land-use analysis: the LUCIS model." The goal of the model is to determine the suitability of a chosen land use at each point across a chosen area using the raster data format. In this case, the suitability for development was evaluated across the area of Kootenai County, Idaho, though this is primarily for illustrative purposes. The basic process captured by the model is as follows:

    1. Choose a land use suitability goal.
    2. Select the goals and criteria that define this goal and get spatial data for each.
    3. Use the gathered data to evaluate the quality of each criterion across the landscape, resulting in a raster with values from 1-9.
    4. Apply weights to each criterion to indicate its relative contribution to the suitability goal.
    5. Combine the weighted criteria to calculate and display the suitability of this land use at each point across the landscape.

    An individual model was first built for each of some forty individual criteria. Once these functioned successfully, individual criteria were combined with a weighted summation to yield one of three land-use goals (in this case, Residential, Commercial, or Industrial). A final model was then constructed to combine these three goals into a final suitability output. In addition, two conditional elements were placed on this final output: one to give already-developed areas a very high suitability score for development (a "9") and a second to give permanently conserved areas and other undevelopable lands a very low suitability score for development (a "1"). Because this model was meant to serve primarily as an illustration of how to do land-use suitability analysis, the criteria, evaluation rationales, and weightings were chosen by the modeler for expediency; however, a land-use analysis meant to guide real-world actions and decisions would need to rely far more heavily on a variety of scientific and stakeholder input.
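
    To illustrate the weighted-summation step described above, the following minimal Python/NumPy sketch combines three hypothetical 1-9 suitability rasters into a single goal layer. The criterion names, weights, and array sizes are made up for illustration and are not taken from the model itself.

        import numpy as np

        # Three hypothetical criterion rasters, each already evaluated on the
        # 1-9 suitability scale described above (random placeholder values).
        rng = np.random.default_rng(0)
        slope_suit  = rng.integers(1, 10, size=(100, 100))
        access_suit = rng.integers(1, 10, size=(100, 100))
        zoning_suit = rng.integers(1, 10, size=(100, 100))

        # Hypothetical weights expressing each criterion's relative contribution
        # to the goal; chosen to sum to 1 so the result stays on the 1-9 scale.
        weights = {"slope": 0.5, "access": 0.3, "zoning": 0.2}

        goal_suitability = (weights["slope"] * slope_suit
                            + weights["access"] * access_suit
                            + weights["zoning"] * zoning_suit)

        # Values outside 1-9, an unexpected extent, or missing cells at this
        # stage would signal an upstream inconsistency, as the description notes.
        assert 1 <= goal_suitability.min() and goal_suitability.max() <= 9

    In the full model, the same weighted-sum pattern is applied twice: once to roll criteria up into goals, and once to roll goals up into the final suitability layer.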

  4. Pattern-based GIS for understanding content of very large Earth Science...

    • data.amerigeoss.org
    • data.wu.ac.at
    html
    Updated Jan 29, 2020
    Cite
    United States (2020). Pattern-based GIS for understanding content of very large Earth Science datasets [Dataset]. https://data.amerigeoss.org/dataset/pattern-based-gis-for-understanding-content-of-very-large-earth-science-datasets1
    Dataset updated
    Jan 29, 2020
    Dataset provided by
    United States
    Area covered
    Earth
    Description

    The research focus in the field of remotely sensed imagery has shifted from collection and warehousing of data, tasks for which a mature technology already exists, to auto-extraction of information and knowledge discovery from this valuable resource, tasks for which technology is still under active development. In particular, intelligent algorithms for analysis of very large rasters, either high-resolution images or medium-resolution global datasets, which are becoming more and more prevalent, are lacking. We propose to develop the Geospatial Pattern Analysis Toolbox (GeoPAT), a computationally efficient, scalable, and robust suite of algorithms that supports GIS processes such as segmentation, unsupervised/supervised classification of segments, query and retrieval, and change detection in giga-pixel and larger rasters. At the core of the technology that underpins GeoPAT is the novel concept of pattern-based image analysis. Unlike pixel-based or object-based (OBIA) image analysis, GeoPAT partitions an image into overlapping square scenes containing 1,000-100,000 pixels and performs further processing on those scenes using pattern signatures and pattern similarity, concepts first developed in the field of Content-Based Image Retrieval. This fusion of methods from two different areas of research results in an orders-of-magnitude performance boost in application to very large images without sacrificing quality of the output.

    GeoPAT v.1.0 already exists as a GRASS GIS add-on that has been developed and tested on medium-resolution continental-scale datasets including the National Land Cover Dataset and the National Elevation Dataset. The proposed project will develop GeoPAT v.2.0, a much improved and extended version of the present software. We estimate an overall entry TRL for GeoPAT v.1.0 of 3-4 and a planned exit TRL for GeoPAT v.2.0 of 5-6. Moreover, several new important functionalities will be added. Proposed improvements include conversion of GeoPAT from a GRASS add-on to stand-alone software capable of being integrated with other systems, full implementation of a web-based interface, new modules to extend its applicability to high-resolution images/rasters and medium-resolution climate data, extension to the spatio-temporal domain, enabling hierarchical search and segmentation, development of improved pattern signatures and their similarity measures, parallelization of the code, and implementation of a divide-and-conquer strategy to speed up selected modules.

    The proposed technology will contribute to a wide range of Earth Science investigations and missions by enabling extraction of information from diverse types of very large datasets. Analyzing the entire dataset without the need to sub-divide it due to software limitations offers the important advantage of uniformity and consistency. We propose to demonstrate the utilization of GeoPAT technology on two specific applications. The first application is a web-based, real-time, visual search engine for local physiography utilizing query-by-example on the entire, global-extent SRTM 90 m resolution dataset. The user selects a region where a process of interest is known to occur, and the search engine identifies other areas around the world with similar physiographic character and thus potential for a similar process. The second application is monitoring urban areas in their entirety at high resolution, including mapping of impervious surfaces and identifying settlements for improved disaggregation of census data.

  5. Travel time to cities and ports in the year 2015

    • figshare.com
    tiff
    Updated May 30, 2023
    Cite
    Andy Nelson (2023). Travel time to cities and ports in the year 2015 [Dataset]. http://doi.org/10.6084/m9.figshare.7638134.v4
    Dataset updated
    May 30, 2023
    Dataset provided by
    figshare
    Authors
    Andy Nelson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset and the validation are fully described in a Nature Scientific Data Descriptor https://www.nature.com/articles/s41597-019-0265-5

    If you want to use this dataset in an interactive environment, then use this link https://mybinder.org/v2/gh/GeographerAtLarge/TravelTime/HEAD

    The following text is a summary of the information in the above Data Descriptor.

    The dataset is a suite of global travel-time accessibility indicators for the year 2015, at approximately one-kilometre spatial resolution for the entire globe. The indicators show an estimated (and validated) land-based travel time to the nearest city and nearest port for a range of city and port sizes.

    The datasets are in GeoTIFF format and are suitable for use in Geographic Information Systems and statistical packages for mapping access to cities and ports and for spatial and statistical analysis of the inequalities in access by different segments of the population.

    These maps represent a unique global representation of physical access to essential services offered by cities and ports.

    The datasets travel_time_to_cities_x.tif (where x has values from 1 to 12): the value of each pixel is the estimated travel time in minutes to the nearest urban area in 2015. There are 12 data layers based on different sets of urban areas, defined by their population in year 2015 (see the PDF report).

    travel_time_to_ports_x (x ranges from 1 to 5)

    The value of each pixel is the estimated travel time to the nearest port in 2015. There are 5 data layers based on different port sizes.

    Format: Raster Dataset, GeoTIFF, LZW compressed
    Unit: Minutes
    Data type: 16-bit unsigned integer
    No data value: 65535
    Flags: None
    Spatial resolution: 30 arc seconds
    Spatial extent: Upper left -180, 85; Lower left -180, -60; Upper right 180, 85; Lower right 180, -60
    Spatial Reference System (SRS): EPSG:4326 - WGS84 - Geographic Coordinate System (lat/long)
    Temporal resolution: 2015
    Temporal extent: 2015. Updates may follow for future years, but these are dependent on the availability of updated inputs on travel times and city locations and populations.

    Methodology: Travel time to the nearest city or port was estimated using an accumulated cost function (accCost) in the gdistance R package (van Etten, 2018). This function requires two input datasets: (i) a set of locations to estimate travel time to, and (ii) a transition matrix that represents the cost or time to travel across a surface.

    The set of locations were based on populated urban areas in the 2016 version of the Joint Research Centre’s Global Human Settlement Layers (GHSL) datasets (Pesaresi and Freire, 2016) that represent low density (LDC) urban clusters and high density (HDC) urban areas (https://ghsl.jrc.ec.europa.eu/datasets.php). These urban areas were represented by points, spaced at 1km distance around the perimeter of each urban area.

    Marine ports were extracted from the 26th edition of the World Port Index (NGA, 2017), which contains the location and physical characteristics of approximately 3,700 major ports and terminals. Ports are represented as single points.

    The transition matrix was based on the friction surface (https://map.ox.ac.uk/research-project/accessibility_to_cities) from the 2015 global accessibility map (Weiss et al, 2018).
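
    The published workflow uses accCost from the R gdistance package. Purely as an illustration of the same accumulated-cost idea in Python, the sketch below uses scikit-image's MCP_Geometric on a made-up friction surface and made-up target cells; it is a stand-in for the concept, not the authors' code, and it ignores geographic cell size.

        import numpy as np
        from skimage.graph import MCP_Geometric

        # Hypothetical friction surface: minutes required to cross each cell.
        friction = np.full((200, 200), 5.0)
        friction[90:110, :] = 1.0            # a faster corridor, e.g. a road

        # Hypothetical target locations (row, col), e.g. city or port cells.
        targets = [(10, 10), (150, 170)]

        # Accumulated cost from every cell to its nearest target, analogous to
        # evaluating an accumulated cost surface over a transition matrix.
        mcp = MCP_Geometric(friction)
        travel_time, _ = mcp.find_costs(starts=targets)

        print(travel_time[60, 60])           # estimated minutes for one cell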

    Code: The R code used to generate the 12 travel time maps is included in the zip file that can be downloaded with these data layers. The processing zones are also available.

    Validation: The underlying friction surface was validated by comparing travel times between 47,893 pairs of locations against journey times from a Google API. Our estimated journey times were generally shorter than those from the Google API. Across the tiles, the median journey time from our estimates was 88 minutes within an interquartile range of 48 to 143 minutes, while the median journey time estimated by the Google API was 106 minutes within an interquartile range of 61 to 167 minutes. Across all tiles, the differences were skewed to the left and our travel time estimates were shorter than those reported by the Google API in 72% of the tiles. The median difference was −13.7 minutes within an interquartile range of −35.5 to 2.0 minutes, while the absolute difference was 30 minutes or less for 60% of the tiles and 60 minutes or less for 80% of the tiles. The median percentage difference was −16.9% within an interquartile range of −30.6% to 2.7%, while the absolute percentage difference was 20% or less in 43% of the tiles and 40% or less in 80% of the tiles.

    This process and results are included in the validation zip file.

    Usage Notes: The accessibility layers can be visualised and analysed in many Geographic Information Systems or remote sensing software such as QGIS, GRASS, ENVI, ERDAS or ArcMap, and also by statistical and modelling packages such as R or MATLAB. They can also be used in cloud-based tools for geospatial analysis such as Google Earth Engine.

    The nine layers represent travel times to human settlements of different population ranges. Two or more layers can be combined into one layer by recording the minimum pixel value across the layers. For example, a map of travel time to the nearest settlement of 5,000 to 50,000 people could be generated by taking the minimum of the three layers that represent the travel time to settlements with populations between 5,000 and 10,000, 10,000 and 20,000, and 20,000 and 50,000 people.
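
    A concrete sketch of that minimum-across-layers rule is shown below, reading three of the GeoTIFFs with rasterio and writing their per-pixel minimum. Which layer numbers correspond to which population bands is assumed here and should be checked against the PDF report.

        import numpy as np
        import rasterio

        # Hypothetical mapping of layer numbers to the 5,000-10,000,
        # 10,000-20,000 and 20,000-50,000 population bands.
        paths = ["travel_time_to_cities_10.tif",
                 "travel_time_to_cities_11.tif",
                 "travel_time_to_cities_12.tif"]

        layers = []
        for path in paths:
            with rasterio.open(path) as src:
                layers.append(src.read(1))
                profile = src.profile

        # Per-pixel minimum travel time across the three layers. The 65535
        # no-data value is larger than any valid travel time, so it survives
        # only where every input layer is no-data.
        combined = np.minimum.reduce(layers)

        with rasterio.open("travel_time_5k_to_50k.tif", "w", **profile) as dst:
            dst.write(combined, 1)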

    The accessibility layers also permit user-defined hierarchies that go beyond computing the minimum pixel value across layers. A user-defined complete hierarchy can be generated when the union of all categories adds up to the global population, and the intersection of any two categories is empty. Everything else is up to the user in terms of logical consistency with the problem at hand.

    The accessibility layers are relative measures of the ease of access from a given location to the nearest target. While the validation demonstrates that they do correspond to typical journey times, they cannot be taken to represent actual travel times. Errors in the friction surface will be accumulated as part of the accumulative cost function, and it is likely that locations that are further away from targets will have a greater divergence from a plausible travel time than those that are closer to the targets. Care should be taken when referring to travel time to the larger cities when the locations of interest are extremely remote, although they will still be plausible representations of relative accessibility. Furthermore, a key assumption of the model is that all journeys will use the fastest mode of transport and take the shortest path.

  6. National Hydrography Dataset Plus High Resolution

    • hub.arcgis.com
    Updated Mar 16, 2023
    + more versions
    Cite
    Esri (2023). National Hydrography Dataset Plus High Resolution [Dataset]. https://hub.arcgis.com/maps/f1f45a3ba37a4f03a5f48d7454e4b654
    Dataset updated
    Mar 16, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    The National Hydrography Dataset Plus High Resolution (NHDPlus High Resolution) maps the lakes, ponds, streams, rivers and other surface waters of the United States. Created by the US Geological Survey, NHDPlus High Resolution provides mean annual flow and velocity estimates for rivers and streams. Additional attributes provide connections between features, facilitating complicated analyses. For more information on the NHDPlus High Resolution dataset, see the User's Guide for the National Hydrography Dataset Plus (NHDPlus) High Resolution.

    Dataset Summary
    Phenomenon Mapped: Surface waters and related features of the United States and associated territories
    Geographic Extent: The contiguous United States, Hawaii, portions of Alaska, Puerto Rico, Guam, US Virgin Islands, Northern Marianas Islands, and American Samoa
    Projection: Web Mercator Auxiliary Sphere
    Visible Scale: Visible at all scales, but the layer draws best at scales larger than 1:1,000,000
    Source: USGS
    Update Frequency: Annual
    Publication Date: July 2022

    This layer was symbolized in the ArcGIS Map Viewer; while the features will draw in the Classic Map Viewer, the advanced symbology will not. Prior to publication, the network and non-network flowline feature classes were combined into a single flowline layer. Similarly, the Area and Waterbody feature classes were merged under a single schema. Attribute fields were added to the flowline and waterbody layers to simplify symbology and enhance the layer's pop-ups. Fields added include Pop-up Title, Pop-up Subtitle, Esri Symbology (waterbodies only), and Feature Code Description. All other attributes are from the original dataset. No data values -9999 and -9998 were converted to Null values.

    What can you do with this layer?
    Feature layers work throughout the ArcGIS system. Generally your workflow with feature layers will begin in ArcGIS Online or ArcGIS Pro. Below are just a few of the things you can do with a feature service in Online and Pro.

    ArcGIS Online
    • Add this layer to a map in the map viewer. The layer or a map containing it can be used in an application.
    • Change the layer's transparency and set its visibility range.
    • Open the layer's attribute table and make selections. Selections made in the map or table are reflected in the other. Center on selection allows you to zoom to features selected in the map or table, and show selected records allows you to view the selected records in the table.
    • Apply filters. For example, you can set a filter to show larger streams and rivers using the mean annual flow attribute or the stream order attribute.
    • Change the layer's style and symbology.
    • Add labels and set their properties.
    • Customize the pop-up.
    • Use as an input to the ArcGIS Online analysis tools. This layer works well as a reference layer with the trace downstream and watershed tools. The buffer tool can be used to draw protective boundaries around streams, and the extract data tool can be used to create copies of portions of the data.

    ArcGIS Pro
    • Add this layer to a 2D or 3D map.
    • Use as an input to geoprocessing. For example, copy features allows you to select then export portions of the data to a new feature class.
    • Change the symbology and the attribute field used to symbolize the data.
    • Open the table and make interactive selections with the map.
    • Modify the pop-ups.
    • Apply definition queries to create sub-sets of the layer.

    This layer is part of the ArcGIS Living Atlas of the World, which provides an easy way to explore the landscape layers and many other beautiful and authoritative maps on hundreds of topics.

    Questions? Please leave a comment below if you have a question about this layer, and we will get back to you as soon as possible.

  7. Geospatial data for the Vegetation Mapping Inventory Project of Little River...

    • catalog.data.gov
    Updated Jun 5, 2024
    Cite
    National Park Service (2024). Geospatial data for the Vegetation Mapping Inventory Project of Little River Canyon National Preserve [Dataset]. https://catalog.data.gov/dataset/geospatial-data-for-the-vegetation-mapping-inventory-project-of-little-river-canyon-nation
    Dataset updated
    Jun 5, 2024
    Dataset provided by
    National Park Service (http://www.nps.gov/)
    Area covered
    Little River Canyon
    Description

    The files linked to this reference are the geospatial data created as part of the completion of the baseline vegetation inventory project for the NPS park unit. Current format is ArcGIS file geodatabase but older formats may exist as shapefiles. Using the National Vegetation Classification System (NVCS) developed by Natureserve, with additional classes and modifiers, overstory vegetation communities for each park were interpreted from stereo color infrared aerial photographs using manual interpretation methods. Using a minimum mapping unit of 0.5 hectares (MMU = 0.5 ha), polygons representing areas of relatively uniform vegetation were delineated and annotated on clear plastic overlays registered to the aerial photographs. Polygons were labeled according to the dominant vegetation community. Where the polygons were not uniform, second and third vegetation classes were added. Further, a number of modifier codes were employed to indicate important aspects of the polygon that could be interpreted from the photograph (for example, burn condition). The polygons on the plastic overlays were then corrected using photogrammetric procedures and converted to vector format for use in creating a geographic information system (GIS) database for each park. In addition, high resolution color orthophotographs were created from the original aerial photographs for use in the GIS. Upon completion of the GIS database (including vegetation, orthophotos and updated roads and hydrology layers), both hardcopy and softcopy maps were produced for delivery. Metadata for each database includes a description of the vegetation classification system used for each park, summary statistics and documentation of the sources, procedures and spatial accuracies of the data. At the time of this writing, an accuracy assessment of the vegetation mapping has not been performed for most of these parks.

  8. Residential Schools Locations Dataset (Geodatabase)

    • borealisdata.ca
    • search.dataone.org
    Updated May 31, 2019
    Cite
    Rosa Orlandini (2019). Residential Schools Locations Dataset (Geodatabase) [Dataset]. http://doi.org/10.5683/SP2/JFQ1SZ
    Dataset updated
    May 31, 2019
    Dataset provided by
    Borealis
    Authors
    Rosa Orlandini
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1863 - Jun 30, 1998
    Area covered
    Canada
    Description

    The Residential Schools Locations Dataset in Geodatabase format (IRS_Locations.gbd) contains a feature layer "IRS_Locations" that contains the locations (latitude and longitude) of Residential Schools and student hostels operated by the federal government in Canada. All the residential schools and hostels that are listed in the Residential Schools Settlement Agreement are included in this dataset, as well as several Industrial schools and residential schools that were not part of the IRSSA. This version of the dataset doesn't include the five schools under the Newfoundland and Labrador Residential Schools Settlement Agreement.

    The original school location data was created by the Truth and Reconciliation Commission and was provided to the researcher (Rosa Orlandini) by the National Centre for Truth and Reconciliation in April 2017. The dataset was created by Rosa Orlandini, and builds upon and enhances the previous work of the Truth and Reconciliation Commission, Morgan Hite (creator of the Atlas of Indian Residential Schools in Canada that was produced for the Tk'emlups First Nation and Justice for Day Scholar's Initiative), and Stephanie Pyne (project lead for the Residential Schools Interactive Map). Each individual school location in this dataset is attributed either to RSIM, Morgan Hite, NCTR or Rosa Orlandini.

    Many schools/hostels had several locations throughout the history of the institution. If the school/hostel moved from its original location to another property, then the school is considered to have two unique locations in this dataset: the original location and the new location. For example, Lejac Indian Residential School had two locations while it was operating, Stuart Lake and Fraser Lake. If a new school building was constructed on the same property as the original school building, it isn't considered to be a new location, as is the case of Girouard Indian Residential School. When the precise location is known, the coordinates of the main building are provided, and when the precise location of the building isn't known, an approximate location is provided.

    For each residential school institution location, the following information is provided: official names, alternative name, dates of operation, religious affiliation, latitude and longitude coordinates, community location, Indigenous community name, contributor (of the location coordinates), school/institution photo (when available), location point precision, type of school (hostel or residential school), and the list of references used to determine the location of the main buildings or sites.

    Access Instructions: there are 47 files in this data package. Please download the entire data package by selecting all 47 files and clicking download. Two files will be downloaded, IRS_Locations.gbd.zip and IRS_LocFields.csv. Uncompress IRS_Locations.gbd.zip. Use QGIS, ArcGIS Pro, or ArcMap to open the feature layer IRS_Locations that is contained within the IRS_Locations.gbd data package. The feature layer is in the WGS 1984 coordinate system. There is also detailed file-level metadata included in this feature layer file. The IRS_locations.csv provides the full description of the fields and codes used in this dataset.
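
    For users working outside a desktop GIS, the feature layer can also be read programmatically. A minimal geopandas sketch is shown below; it assumes the data package has been unzipped, and it keeps the geodatabase name exactly as written in the access instructions.

        import geopandas as gpd

        # Path and layer name as given in the access instructions above
        # (the ".gbd" extension is kept as written in the description).
        irs = gpd.read_file("IRS_Locations.gbd", layer="IRS_Locations")

        print(irs.crs)     # WGS 1984 per the description
        print(irs.head())  # one row per school/hostel location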

  9. Selkie GIS Techno-Economic Tool input datasets

    • data.niaid.nih.gov
    Updated Nov 8, 2023
    Cite
    Cullinane, Margaret (2023). Selkie GIS Techno-Economic Tool input datasets [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10083960
    Dataset updated
    Nov 8, 2023
    Dataset authored and provided by
    Cullinane, Margaret
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data was prepared as input for the Selkie GIS-TE tool. This GIS tool aids site selection, logistics optimization and financial analysis of wave or tidal farms in the Irish and Welsh maritime areas. Read more here: https://www.selkie-project.eu/selkie-tools-gis-technoeconomic-model/

    This research was funded by the Science Foundation Ireland (SFI) through MaREI, the SFI Research Centre for Energy, Climate and the Marine and by the Sustainable Energy Authority of Ireland (SEAI). Support was also received from the European Union's European Regional Development Fund through the Ireland Wales Cooperation Programme as part of the Selkie project.

    File Formats

    Results are presented in three file formats:

    • tif - can be imported into GIS software (such as ArcGIS)
    • csv - human-readable text format, which can also be opened in Excel
    • png - image files that can be viewed in standard desktop software and give a spatial view of results

    Input Data

    All calculations use open-source data from the Copernicus store and the open-source software Python. The Python xarray library is used to read the data.

    Hourly Data from 2000 to 2019

    • Wind - Copernicus ERA5 dataset, 17 by 27.5 km grid, 10 m wind speed

    • Wave - Copernicus Atlantic -Iberian Biscay Irish - Ocean Wave Reanalysis dataset, 3 by 5 km grid

    Accessibility

    The maximum limits for Hs and wind speed are applied when mapping the accessibility of a site.
    The Accessibility layer shows the percentage of time the Hs (Atlantic -Iberian Biscay Irish - Ocean Wave Reanalysis) and wind speed (ERA5) are below these limits for the month.

    Input data is 20 years of hourly wave and wind data from 2000 to 2019, partitioned by month. At each timestep, the accessibility of the site was determined by checking if
    the Hs and wind speed were below their respective limits. The percentage accessibility is the number of hours within limits divided by the total number of hours for the month.
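
    A condensed Python sketch of that accessibility calculation is given below, using xarray as noted under Input Data. The file names, variable names, dimension name and limits are illustrative assumptions, and the wave and wind grids are assumed to be already aligned.

        import xarray as xr

        # Illustrative hourly inputs for one month (names are assumptions).
        waves = xr.open_dataset("wave_reanalysis_month.nc")   # significant wave height, Hs
        wind = xr.open_dataset("era5_wind_month.nc")          # 10 m wind speed

        hs_limit = 2.0      # example Hs limit in metres
        wind_limit = 15.0   # example wind speed limit in m/s

        # Boolean accessibility at each timestep, then the percentage of hours
        # within limits for the month.
        accessible = (waves["swh"] < hs_limit) & (wind["wind_speed"] < wind_limit)
        accessibility_pct = 100.0 * accessible.mean(dim="time")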

    Environmental data is from the Copernicus data store (https://cds.climate.copernicus.eu/). Wave hourly data is from the 'Atlantic -Iberian Biscay Irish - Ocean Wave Reanalysis' dataset.
    Wind hourly data is from the ERA 5 dataset.

    Availability

    A device's availability to produce electricity depends on the device's reliability and the time to repair any failures. The repair time depends on weather
    windows and other logistical factors (for example, the availability of repair vessels and personnel). A 2013 study by O'Connor et al. determined the
    relationship between the accessibility and availability of a wave energy device. The resulting graph (see Fig. 1 of their paper) shows the correlation between accessibility (at an Hs limit of 2 m and a wind speed limit of 15.0 m/s) and availability. This graph is used to calculate the availability layer from the accessibility layer.

    The input value, accessibility, measures how accessible a site is for installation or operation and maintenance activities. It is the percentage time the
    environmental conditions, i.e. the Hs (Atlantic -Iberian Biscay Irish - Ocean Wave Reanalysis) and wind speed (ERA5), are below operational limits.
    Input data is 20 years of hourly wave and wind data from 2000 to 2019, partitioned by month. At each timestep, the accessibility of the site was determined
    by checking if the Hs and wind speed were below their respective limits. The percentage accessibility is the number of hours within limits divided by the total
    number of hours for the month. Once the accessibility was known, the percentage availability was calculated using the O'Connor et al. graph of the relationship between the two. A mature technology reliability was assumed.

    Weather Window

    The weather window availability is the percentage of possible x-duration windows where weather conditions (Hs, wind speed) are below maximum limits for the
    given duration for the month.

    The resolution of the wave dataset (0.05° × 0.05°) is higher than that of the wind dataset
    (0.25° x 0.25°), so the nearest wind value is used for each wave data point. The weather window layer is at the resolution of the wave layer.

    The first step in calculating the weather window for a particular set of inputs (Hs, wind speed and duration) is to calculate the accessibility at each timestep.
    The accessibility is based on a simple boolean evaluation: are the wave and wind conditions within the required limits at the given timestep?

    Once the time series of accessibility is calculated, the next step is to look for periods of sustained favourable environmental conditions, i.e. the weather
    windows. Here all possible operating periods with a duration matching the required weather-window value are assessed to see if the weather conditions remain
    suitable for the entire period. The percentage availability of the weather window is calculated based on the percentage of x-duration windows with suitable
    weather conditions for their entire duration. The weather window availability can be considered as the probability of having the required weather window available
    at any given point in the month.
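
    The weather-window test can be expressed as a rolling check that every hour in an x-hour window is within limits. The sketch below is a self-contained illustration under the same assumptions as the accessibility sketch above; the duration is an example value, and the handling of the first few hours of the month is simplified.

        import xarray as xr

        # Hourly boolean accessibility, as in the accessibility sketch above:
        # True where Hs and wind speed are both below their limits.
        waves = xr.open_dataset("wave_reanalysis_month.nc")
        wind = xr.open_dataset("era5_wind_month.nc")
        accessible = (waves["swh"] < 2.0) & (wind["wind_speed"] < 15.0)

        duration_hours = 12   # illustrative weather-window duration

        # A window is workable only if every hour in it is within limits, i.e.
        # the rolling minimum of the boolean series over the window equals 1.
        window_ok = accessible.astype("int8").rolling(time=duration_hours).min() == 1

        # Percentage of x-duration windows with suitable weather throughout
        # (windows overlapping the start of the data count as unsuitable here,
        # a simplification of the real calculation).
        weather_window_pct = 100.0 * window_ok.mean(dim="time")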

    Extreme Wind and Wave

    The Extreme wave layers show the highest significant wave height expected to occur during the given return period. The Extreme wind layers show the highest wind speed expected to occur during the given return period.

    To predict extreme values, we use Extreme Value Analysis (EVA). EVA focuses on the extreme part of the data and seeks to determine a model to fit this reduced
    portion accurately. EVA consists of three main stages. The first stage is the selection of extreme values from a time series. The next step is to fit a model
    that best approximates the selected extremes by determining the shape parameters for a suitable probability distribution. The model then predicts extreme values
    for the selected return period. All calculations use the python pyextremes library. Two methods are used - Block Maxima and Peaks over threshold.

    The Block Maxima method selects the annual maxima and fits a GEVD probability distribution.

    The peaks_over_threshold method has two variable calculation parameters. The first is the percentile above which values must fall to be selected as extreme (0.9 or 0.998). The second is the time difference between extreme values for them to be considered independent (3 days). A Generalised Pareto Distribution is fitted to the selected
    extremes and used to calculate the extreme value for the selected return period.
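
    Since the description names the pyextremes library, a minimal sketch of the two approaches is shown below. The input series, file name, threshold percentile, independence window and return period are illustrative choices, not the tool's actual settings.

        import pandas as pd
        from pyextremes import EVA

        # Hourly significant wave height at one grid point, as a pandas Series
        # with a DatetimeIndex (the file name is an illustrative assumption).
        hs = pd.read_csv("hs_point.csv", index_col=0, parse_dates=True).squeeze("columns")

        # Block Maxima: annual maxima fitted with a GEVD.
        bm = EVA(hs)
        bm.get_extremes(method="BM", block_size="365.2425D")
        bm.fit_model()

        # Peaks over threshold: values above the 99.8th percentile, with a
        # 3-day separation for independence, fitted with a Generalised Pareto
        # Distribution.
        pot = EVA(hs)
        pot.get_extremes(method="POT", threshold=hs.quantile(0.998), r="3d")
        pot.fit_model()

        # Extreme Hs for an illustrative 50-year return period.
        print(bm.get_return_value(return_period=50))
        print(pot.get_return_value(return_period=50))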

  10. InterAgencyFirePerimeterHistory All Years View - Dataset - CKAN

    • nationaldataplatform.org
    Updated Feb 28, 2024
    + more versions
    Cite
    (2024). InterAgencyFirePerimeterHistory All Years View - Dataset - CKAN [Dataset]. https://nationaldataplatform.org/catalog/dataset/interagencyfireperimeterhistory-all-years-view
    Dataset updated
    Feb 28, 2024
    Description

    Historical Fires. Last updated on 06/17/2022.

    Overview
    The national fire history perimeter data layer of conglomerated Agency Authoritative perimeters was developed in support of the WFDSS application and wildfire decision support for the 2021 fire season. The layer encompasses the final fire perimeter datasets of the USDA Forest Service, US Department of the Interior Bureau of Land Management, Bureau of Indian Affairs, Fish and Wildlife Service, and National Park Service, the Alaska Interagency Fire Center, CalFire, and WFIGS History. Perimeters are included through the 2021 fire season. Requirements for fire perimeter inclusion, such as minimum acreage requirements, are set by the contributing agencies. WFIGS, NPS and CALFIRE data now include Prescribed Burns.

    Data Input
    Several data sources were used in the development of this layer:
    • Alaska fire history
    • USDA FS Regional Fire History Data
    • BLM Fire Planning and Fuels
    • National Park Service - includes Prescribed Burns
    • Fish and Wildlife Service
    • Bureau of Indian Affairs
    • CalFire FRAS - includes Prescribed Burns
    • WFIGS - BLM & BIA and other S&L

    Data Limitations
    Fire perimeter data are often collected at the local level, and fire management agencies have differing guidelines for submitting fire perimeter data. Often data are collected by agencies only once annually. If you do not see your fire perimeters in this layer, they were not present in the sources used to create the layer at the time the data were submitted. A companion service for perimeters entered into the WFDSS application is also available. If a perimeter is found in the WFDSS service that is missing in this Agency Authoritative service, or a perimeter is missing in both services, please contact the appropriate agency Fire GIS Contact listed in the table below.

    Attributes
    This dataset implements the NWCG Wildland Fire Perimeters (polygon) data standard: https://www.nwcg.gov/sites/default/files/stds/WildlandFirePerimeters_definition.pdf
    • IRWINID - Primary key for linking to the IRWIN Incident dataset. The origin of this GUID is the wildland fire locations point data layer. (This unique identifier may NOT replace the GeometryID core attribute.)
    • INCIDENT - The name assigned to an incident; assigned by the responsible land management unit. (IRWIN required.) Officially recorded name.
    • FIRE_YEAR (alias) - Calendar year in which the fire started. Example: 2013. Value is of type integer (FIRE_YEAR_INT).
    • AGENCY - Agency assigned for this fire; should be based on jurisdiction at origin.
    • SOURCE - System/agency source of record from which the perimeter came.
    • DATE_CUR - The last edit, update, or other valid date of this GIS record. Example: mm/dd/yyyy.
    • MAP_METHOD - Controlled vocabulary to define how the geospatial feature was derived. Map method may help define data quality. Values: GPS-Driven; GPS-Flight; GPS-Walked; GPS-Walked/Driven; GPS-Unknown Travel Method; Hand Sketch; Digitized-Image; Digitized-Topo; Digitized-Other; Image Interpretation; Infrared Image; Modeled; Mixed Methods; Remote Sensing Derived; Survey/GCDB/Cadastral; Vector; Other.
    • GIS_ACRES - GIS-calculated acres within the fire perimeter. Not adjusted for unburned areas within the fire perimeter. Total should include 1 decimal place. (ArcGIS: Precision=10; Scale=1.) Example: 23.9
    • UNQE_FIRE_ - Unique fire identifier in the form Year-Unit Identifier-Local Incident Identifier (yyyy-SSXXX-xxxxxx). SS = State Code or International Code, XXX or XXXX = a code assigned to an organizational unit, xxxxx = alphanumeric with hyphens or periods. The unit identifier portion corresponds to the POINT OF ORIGIN RESPONSIBLE AGENCY UNIT IDENTIFIER (POOResponsibleUnit) from the responsible unit's corresponding fire report. Example: 2013-CORMP-000001
    • LOCAL_NUM - Local incident identifier (dispatch number). A number or code that uniquely identifies an incident for a particular local fire management organization within a particular calendar year. Field is string to allow for leading zeros when the local incident identifier is less than 6 characters. (IRWIN required.) Example: 123456
    • UNIT_ID - NWCG Unit Identifier of the landowner/jurisdictional agency unit at the point of origin of a fire. (NFIRS ID should be used only when no NWCG Unit Identifier exists.) Example: CORMP
    • COMMENTS - Additional information describing the feature. Free text.
    • FEATURE_CA - Type of wildland fire polygon: Wildfire (represents the final fire perimeter or last daily fire perimeter available), Prescribed Fire, or Unknown.
    • GEO_ID - Primary key for linking geospatial objects with other database systems. Required for every feature. This field may be renamed for each standard to fit the feature. Globally Unique Identifier (GUID).

    Cross-walk from sources (GeoID) and other processing notes
    • AK: GEOID = OBJECTID of the provided file geodatabase (4580 records through 2021); other federal sources for AK data removed.
    • CA: GEOID = OBJECTID of the downloaded file geodatabase (12776 records, federal fires removed, includes RX).
    • FWS: GEOID = OBJECTID of the service download, combined history 2005-2021 (2052 records). A handful of WFIGS fires (11) were added that were not in the FWS record.
    • BIA: GEOID = "FireID" for the provided 2017/2018 data (416 records) or WFDSS PID (415 records). An additional 917 fires from WFIGS were added, GEOID = GLOBALID in source.
    • NPS: GEOID = EVENT ID (IRWINID or FRM_ID from FOD), 29,943 records, includes RX.
    • BLM: GEOID = GUID from BLM FPER and GLOBALID from WFIGS. Date Current = best available of modify_date, create_date, fire_cntrl_dt or fire_dscvr_dt to reduce the number of 9999 entries in FireYear. Source: FPER (25,389 features), WFIGS (5357 features).
    • USFS: GEOID = GLOBALID in source, 46,574 features. Also fixed Date Current to the best available date from perimeterdatetime, revdate, discoverydatetime, dbsourcedate to reduce the number of 1899 entries in FireYear.

    Relevant Websites and References
    • Alaska Fire Service: https://afs.ak.blm.gov/
    • CALFIRE: https://frap.fire.ca.gov/mapping/gis-data
    • BIA - data prior to 2017 from WFDSS, 2017-2018 agency provided, 2019 and after from WFIGS
    • BLM: https://gis.blm.gov/arcgis/rest/services/fire/BLM_Natl_FirePerimeter/MapServer
    • NPS - new dataset provided by NPS Fire & Aviation GIS, cross-checked against WFIGS for any missing perimeters in 2021: https://nifc.maps.arcgis.com/home/item.html?id=098ebc8e561143389ca3d42be3707caa
    • FWS: https://services.arcgis.com/QVENGdaPbd4LUkLV/arcgis/rest/services/USFWS_Wildfire_History_gdb/FeatureServer
    • USFS: https://apps.fs.usda.gov/arcx/rest/services/EDW/EDW_FireOccurrenceAndPerimeter_01/MapServer

    Agency Fire GIS Contacts
    • RD&A Data Manager - VACANT
    • Susan McClendon - WFM RD&A GIS Specialist - 208-258-4244
    • Jill Kuenzi - USFS-NIFC - 208.387.5283
    • Joseph Kafka - BIA-NIFC - 208.387.5572
    • Cameron Tongier - USFWS-NIFC - 208.387.5712
    • Skip Edel - NPS-NIFC - 303.969.2947
    • Julie Osterkamp - BLM-NIFC - 208.258.0083
    • Jennifer L. Jenkins - Alaska Fire Service - 907.356.5587

  11. Geospatial Deep Learning Seminar Online Course

    • ckan.americaview.org
    Updated Nov 2, 2021
    Cite
    ckan.americaview.org (2021). Geospatial Deep Learning Seminar Online Course [Dataset]. https://ckan.americaview.org/dataset/geospatial-deep-learning-seminar-online-course
    Dataset updated
    Nov 2, 2021
    Dataset provided by
    CKAN (https://ckan.org/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This seminar is an applied study of deep learning methods for extracting information from geospatial data, such as aerial imagery, multispectral imagery, digital terrain data, and other digital cartographic representations. We first provide an introduction and conceptualization of artificial neural networks (ANNs). Next, we explore appropriate loss and assessment metrics for different use cases, followed by the tensor data model, which is central to applying deep learning methods. Convolutional neural networks (CNNs) are then conceptualized with scene classification use cases. Lastly, we explore semantic segmentation, object detection, and instance segmentation. The primary focus of this course is semantic segmentation for pixel-level classification.

    The associated GitHub repo provides a series of applied examples. We hope to continue to add examples as methods and technologies further develop. These examples make use of a variety of datasets (e.g., SAT-6, topoDL, Inria, LandCover.ai, vfillDL, and wvlcDL). Please see the repo for links to the data and associated papers. All examples have associated videos that walk through the process, which are also linked to the repo. A variety of deep learning architectures are explored, including UNet, UNet++, DeepLabv3+, and Mask R-CNN. Currently, two examples use ArcGIS Pro and require no coding. The remaining five examples require coding and make use of PyTorch, Python, and R within the RStudio IDE. It is assumed that you have prior knowledge of coding in the Python and R environments. If you do not have experience coding, please take a look at our Open-Source GIScience and Open-Source Spatial Analytics (R) courses, which explore coding in Python and R, respectively.

    After completing this seminar you will be able to:
    • explain how ANNs work, including weights, bias, activation, and optimization.
    • describe and explain different loss and assessment metrics and determine appropriate use cases.
    • use the tensor data model to represent data as input for deep learning.
    • explain how CNNs work, including convolutional operations/layers, kernel size, stride, padding, max pooling, activation, and batch normalization.
    • use PyTorch, Python, and R to prepare data, produce and assess scene classification models, and infer to new data.
    • explain common semantic segmentation architectures, how these methods allow for pixel-level classification, and how they differ from traditional CNNs.
    • use PyTorch, Python, and R (or ArcGIS Pro) to prepare data, produce and assess semantic segmentation models, and infer to new data.

  12. Sentinel-2 10m Land Use/Land Cover Change from 2018 to 2021 (Mature Support)...

    • hub.arcgis.com
    • pacificgeoportal.com
    • +3more
    Updated Feb 10, 2022
    Cite
    Esri (2022). Sentinel-2 10m Land Use/Land Cover Change from 2018 to 2021 (Mature Support) [Dataset]. https://hub.arcgis.com/datasets/30c4287128cc446b888ca020240c456b
    Dataset updated
    Feb 10, 2022
    Dataset authored and provided by
    Esri (http://esri.com/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Important Note: This item is in mature support as of February 2023 and will be retired in December 2025. A new version of this item is available for your use. Esri recommends updating your maps and apps to use the new version. This layer displays change in pixels of the Sentinel-2 10m Land Use/Land Cover product developed by Esri, Impact Observatory, and Microsoft. Available years to compare with 2021 are 2018, 2019 and 2020. By default, the layer shows all comparisons together, in effect showing what changed 2018-2021. But the layer may be changed to show one of three specific pairs of years, 2018-2021, 2019-2021, or 2020-2021.Showing just one pair of years in ArcGIS Online Map ViewerTo show just one pair of years in ArcGIS Online Map viewer, create a filter. 1. Click the filter button. 2. Next, click add expression. 3. In the expression dialogue, specify a pair of years with the ProductName attribute. Use the following example in your expression dialogue to show only places that changed between 2020 and 2021:ProductNameis2020-2021By default, places that do not change appear as a transparent symbol in ArcGIS Pro. But in ArcGIS Online Map Viewer, a transparent symbol may need to be set for these places after a filter is chosen. To do this:4. Click the styles button. 5. Under unique values click style options. 6. Click the symbol next to No Change at the bottom of the legend. 7. Click the slider next to "enable fill" to turn the symbol off.Showing just one pair of years in ArcGIS ProTo show just one pair of years in ArcGIS Pro, choose one of the layer's processing templates to single out a particular pair of years. The processing template applies a definition query that works in ArcGIS Pro. 1. To choose a processing template, right click the layer in the table of contents for ArcGIS Pro and choose properties. 2. In the dialogue that comes up, choose the tab that says processing templates. 3. On the right where it says processing template, choose the pair of years you would like to display. The processing template will stay applied for any analysis you may want to perform as well.How the change layer was created, combining LULC classes from two yearsImpact Observatory, Esri, and Microsoft used artificial intelligence to classify the world in 10 Land Use/Land Cover (LULC) classes for the years 2017-2021. Mosaics serve the following sets of change rasters in a single global layer: Change between 2018 and 2021Change between 2019 and 2021Change between 2020 and 2021To make this change layer, Esri used an arithmetic operation combining the cells from a source year and 2021 to make a change index value. ((from year * 16) + to year) In the example of the change between 2020 and 2021, the from year (2020) was multiplied by 16, then added to the to year (2021). Then the combined number is served as an index in an 8 bit unsigned mosaic with an attribute table which describes what changed or did not change in that timeframe. Variable mapped: Change in land cover between 2018, 2019, or 2020 and 2021 Data Projection: Universal Transverse Mercator (UTM)Mosaic Projection: WGS84Extent: GlobalSource imagery: Sentinel-2Cell Size: 10m (0.00008983152098239751 degrees)Type: ThematicSource: Esri Inc.Publication date: January 2022What can you do with this layer?Global LULC maps provide information on conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land cover anywhere on Earth. 
    This layer can also be used in analyses that require land cover input. For example, the Zonal Statistics tools allow a user to understand the composition of a specified area by reporting the total estimates for each of the classes.

    Land Cover processing
    This map was produced by a deep learning model trained using over 5 billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map.

    Processing platform
    Sentinel-2 L2A/B data was accessed via Microsoft’s Planetary Computer and scaled using Microsoft Azure Batch.

    Class definitions
    1. Water: Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, no rock outcrop nor built up features like docks; examples: rivers, ponds, lakes, oceans, flooded salt plains.
    2. Trees: Any significant clustering of tall (~15 m or higher) dense vegetation, typically with a closed or dense canopy; examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamp or mangroves (dense/tall vegetation with ephemeral water or canopy too thick to detect water underneath).
    4. Flooded vegetation: Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded area that is a mix of grass/shrub/trees/bare ground; examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture.
    5. Crops: Human planted/plotted cereals, grasses, and crops not at tree height; examples: corn, wheat, soy, fallow plots of structured land.
    7. Built Area: Human made structures; major road and rail networks; large homogenous impervious surfaces including parking structures, office buildings and residential housing; examples: houses, dense villages / towns / cities, paved roads, asphalt.
    8. Bare ground: Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with no to little vegetation; examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines.
    9. Snow/Ice: Large homogenous areas of permanent snow or ice, typically only in mountain areas or highest latitudes; examples: glaciers, permanent snowpack, snow fields.
    10. Clouds: No land cover information due to persistent cloud cover.
    11. Rangeland: Open areas covered in homogenous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field); examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures. Mix of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock; scrub-filled clearings within dense forests that are clearly not taller than trees; examples: moderate to sparse cover of bushes, shrubs and tufts of grass, savannas with very sparse grasses, trees or other plants.

    Citation
    Karra, Kontgis, et al. “Global land use/land cover with Sentinel-2 and deep learning.” IGARSS 2021-2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.

    Acknowledgements
    Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.

    For questions please email environment@esri.com
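    As a minimal sketch of the change-index arithmetic described above (not Esri's production code), the following shows how a from-year class raster and the 2021 class raster combine into the 8-bit index, and how the class pair can be recovered from it. The class values are those listed in the class definitions.

    ```python
    import numpy as np

    def change_index(from_year_classes: np.ndarray, to_year_classes: np.ndarray) -> np.ndarray:
        """Combine two single-year LULC class rasters into one 8-bit change index:
        (from-year class * 16) + to-year class, as described for this layer."""
        return from_year_classes.astype(np.uint8) * 16 + to_year_classes.astype(np.uint8)

    def decode_change(index: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        """Recover the (from-year class, to-year class) pair from a change-index value."""
        return index // 16, index % 16

    # Example: a pixel that was Crops (5) in 2020 and Built Area (7) in 2021
    # encodes to 5*16 + 7 = 87; an unchanged Water pixel (1 -> 1) encodes to 17.
    idx = change_index(np.array([5, 1], dtype=np.uint8), np.array([7, 1], dtype=np.uint8))
    print(idx)                 # the encoded indices, e.g. 87 and 17
    print(decode_change(idx))  # the recovered class pairs, 5 -> 7 and 1 -> 1
    ```

    Because the largest class value is 11, the largest possible index is 11*16 + 11 = 187, which is why the mosaic fits in an 8-bit unsigned raster.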

  13. California Important Farmland: Most Recent

    • catalog.data.gov
    • data.cnra.ca.gov
    • +8more
    Updated Nov 27, 2024
    + more versions
    Cite
    California Department of Conservation (2024). California Important Farmland: Most Recent [Dataset]. https://catalog.data.gov/dataset/california-important-farmland-most-recent-3057b
    Explore at:
    Dataset updated
    Nov 27, 2024
    Dataset provided by
    California Department of Conservationhttp://www.conservation.ca.gov/
    Area covered
    California
    Description

    This dataset may be a mix of two years and is updated as the data is released for each county. For example, one county may have data from 2014 while a neighboring county may have had a more recent release of 2016 data. For specific years, please check the service that specifies the year, e.g., California Important Farmland: 2016.

    Government Code Section 65570, established in 1982, mandates FMMP to biennially report on the conversion of farmland and grazing land, and to provide maps and data to local government and the public. The Farmland Mapping and Monitoring Program (FMMP) provides data to decision makers for use in planning for the present and future use of California's agricultural land resources. The data is a current inventory of agricultural resources. This data is for general planning purposes and has a minimum mapping unit of ten acres.

  14. d

    GIS data and scripts for Colorado Legacy Mine Lands Watershed Delineation...

    • datasets.ai
    • data.usgs.gov
    • +1more
    55
    Updated Aug 8, 2024
    + more versions
    Cite
    Department of the Interior (2024). GIS data and scripts for Colorado Legacy Mine Lands Watershed Delineation and Scoring tool (WaDeS) [Dataset]. https://datasets.ai/datasets/gis-data-and-scripts-for-colorado-legacy-mine-lands-watershed-delineation-and-scoring-tool
    Explore at:
    55
    Available download formats
    Dataset updated
    Aug 8, 2024
    Dataset authored and provided by
    Department of the Interior
    Area covered
    Colorado
    Description

    This data release includes GIS datasets supporting the Colorado Legacy Mine Lands Watershed Delineation and Scoring tool (WaDeS), a web mapping application available at https://geonarrative.usgs.gov/colmlwades/. Water chemistry data were compiled from the U.S. Geological Survey (USGS) National Water Information System (NWIS), U.S. Environmental Protection Agency (EPA) STORET database, and the USGS Central Colorado Assessment Project (CCAP) (Church and others, 2009). The CCAP study area was used for this application. Samples were summarized at each monitoring station, and hardness-dependent chronic and acute toxicity thresholds for aquatic life protections under Colorado Regulation No. 31 (CDPHE, 5 CCR 1002-31) were calculated for cadmium, copper, lead, and/or zinc. Samples were scored according to how metal concentrations compared with acute and chronic toxicity thresholds. The results were used in combination with remote sensing derived hydrothermal alteration (Rockwell and Bonham, 2017) and mine-related features (Horton and San Juan, 2016) to identify potential mine remediation sites within the headwaters of the central Colorado mineral belt. Headwaters were defined by watersheds delineated from a 10-meter digital elevation model (DEM), ranging from 5 to 35 square kilometers in size. Python and R scripts used to derive these products are included with this data release as documentation of the processing steps and to enable users to adapt the methods for their own applications.

    References
    Church, S.E., San Juan, C.A., Fey, D.L., Schmidt, T.S., Klein, T.L., DeWitt, E.H., Wanty, R.B., Verplanck, P.L., Mitchell, K.A., Adams, M.G., Choate, L.M., Todorov, T.I., Rockwell, B.W., McEachron, Luke, and Anthony, M.W., 2012, Geospatial database for regional environmental assessment of central Colorado: U.S. Geological Survey Data Series 614, 76 p., https://doi.org/10.3133/ds614.
    Colorado Department of Public Health and Environment (CDPHE), Water Quality Control Commission 5 CCR 1002-31. Regulation No. 31 The Basic Standards and Methodologies for Surface Water. Effective 12/31/2021, accessed on July 28, 2023 at https://cdphe.colorado.gov/water-quality-control-commission-regulations.
    Horton, J.D., and San Juan, C.A., 2022, Prospect- and mine-related features from U.S. Geological Survey 7.5- and 15-minute topographic quadrangle maps of the United States (ver. 8.0, September 2022): U.S. Geological Survey data release, https://doi.org/10.5066/F78W3CHG.
    Rockwell, B.W. and Bonham, L.C., 2017, Digital maps of hydrothermal alteration type, key mineral groups, and green vegetation of the western United States derived from automated analysis of ASTER satellite data: U.S. Geological Survey data release, https://doi.org/10.5066/F7CR5RK7.
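    The Python and R scripts included with the data release are the authoritative record of the processing. Purely as a hedged illustration of the general scoring idea (not the USGS code), the sketch below compares a dissolved-metal sample against hardness-dependent thresholds of the common form exp(m * ln(hardness) + b). The coefficients and the 0/1/2 scoring scheme here are placeholders, not the Regulation No. 31 values; consult the regulation tables for the actual slopes and intercepts.

    ```python
    import math

    # Placeholder coefficients (NOT the Regulation No. 31 values). The real regulation
    # tabulates metal-specific slope/intercept pairs for acute and chronic
    # hardness-dependent equations of the form exp(m * ln(hardness) + b).
    COEFFS = {
        "zinc": {"acute": (0.9, 1.0), "chronic": (0.9, 0.9)},  # hypothetical values
    }

    def toxicity_threshold(metal: str, hardness_mg_l: float, kind: str = "chronic") -> float:
        """Hardness-dependent threshold (ug/L) of the general form exp(m * ln(H) + b)."""
        m, b = COEFFS[metal][kind]
        return math.exp(m * math.log(hardness_mg_l) + b)

    def score_sample(concentration_ug_l: float, metal: str, hardness_mg_l: float) -> int:
        """One plausible scoring scheme: 2 if above the acute threshold,
        1 if above the chronic threshold, 0 otherwise."""
        if concentration_ug_l > toxicity_threshold(metal, hardness_mg_l, "acute"):
            return 2
        if concentration_ug_l > toxicity_threshold(metal, hardness_mg_l, "chronic"):
            return 1
        return 0

    # A 120 ug/L zinc sample at 100 mg/L hardness, scored against the placeholder thresholds.
    print(score_sample(concentration_ug_l=120.0, metal="zinc", hardness_mg_l=100.0))
    ```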

  15. d

    Tutorial: How to use Google Data Studio and ArcGIS Online to create an...

    • search.dataone.org
    • hydroshare.org
    • +1more
    Updated Apr 15, 2022
    Cite
    Sarah Beganskas (2022). Tutorial: How to use Google Data Studio and ArcGIS Online to create an interactive data portal [Dataset]. http://doi.org/10.4211/hs.9edae0ef99224e0b85303c6d45797d56
    Explore at:
    Dataset updated
    Apr 15, 2022
    Dataset provided by
    Hydroshare
    Authors
    Sarah Beganskas
    Description

    This tutorial will teach you how to take time-series data from many field sites and create a shareable online map, where clicking on a field location brings you to a page with interactive graph(s).

    The tutorial can be completed with a sample dataset (provided via a Google Drive link within the document) or with your own time-series data from multiple field sites.

    Part 1 covers how to make interactive graphs in Google Data Studio and Part 2 covers how to link data pages to an interactive map with ArcGIS Online. The tutorial will take 1-2 hours to complete.

    An example interactive map and data portal can be found at: https://temple.maps.arcgis.com/apps/View/index.html?appid=a259e4ec88c94ddfbf3528dc8a5d77e8

  16. InteragencyFirePerimeterHistory 2010s Grayscale

    • nifc.hub.arcgis.com
    Updated Jul 2, 2022
    + more versions
    Cite
    National Interagency Fire Center (2022). InteragencyFirePerimeterHistory 2010s Grayscale [Dataset]. https://nifc.hub.arcgis.com/maps/interagencyfireperimeterhistory-2010s-grayscale
    Explore at:
    Dataset updated
    Jul 2, 2022
    Dataset authored and provided by
    National Interagency Fire Centerhttps://www.nifc.gov/
    Area covered
    Description

    Interagency Wildland Fire Perimeter History (IFPH)

    Overview
    This national fire history perimeter data layer of conglomerated agency perimeters was developed in support of the WFDSS application and wildfire decision support. The layer encompasses the fire perimeter datasets of the USDA Forest Service, US Department of Interior Bureau of Land Management, Bureau of Indian Affairs, Fish and Wildlife Service, and National Park Service, the Alaska Interagency Fire Center, CalFire, and WFIGS History. Perimeters are included through the 2023 fire season. Requirements for fire perimeter inclusion, such as minimum acreage requirements, are set by the contributing agencies. WFIGS, NPS and CALFIRE data now include prescribed burns.

    Data Input
    Several data sources were used in the development of this layer; links are provided where possible below. In addition, many agencies are now using WFIGS as their authoritative source, beginning in mid-2020.
    Alaska fire history
    USDA FS Regional Fire History Data
    BLM Fire Planning and Fuels
    National Park Service - includes prescribed burns
    Fish and Wildlife Service
    Bureau of Indian Affairs
    CalFire FRAS - includes prescribed burns
    WFIGS - BLM & BIA and other S&L

    Data Limitations
    Fire perimeter data are often collected at the local level, and fire management agencies have differing guidelines for submitting fire perimeter data. Often data are collected by agencies only once annually. If you do not see your fire perimeters in this layer, they were not present in the sources used to create the layer at the time the data were submitted. A companion service for perimeters entered into the WFDSS application is also available; if a perimeter is found in the WFDSS service that is missing in this Agency Authoritative service, or a perimeter is missing in both services, please contact the appropriate agency Fire GIS Contact listed in the table below.

    Attributes
    This dataset implements the NWCG Wildland Fire Perimeters (polygon) data standard: https://www.nwcg.gov/sites/default/files/stds/WildlandFirePerimeters_definition.pdf
    IRWINID - Primary key for linking to the IRWIN Incident dataset. The origin of this GUID is the wildland fire locations point data layer. (This unique identifier may NOT replace the GeometryID core attribute.)
    INCIDENT - The name assigned to an incident; assigned by responsible land management unit. (IRWIN required). Officially recorded name.
    FIRE_YEAR (Alias) - Calendar year in which the fire started. Example: 2013. Value is of type integer (FIRE_YEAR_INT).
    AGENCY - Agency assigned for this fire; should be based on jurisdiction at origin.
    SOURCE - System/agency source of record from which the perimeter came.
    DATE_CUR - The last edit, update, or other valid date of this GIS record. Example: mm/dd/yyyy.
    MAP_METHOD - Controlled vocabulary to define how the geospatial feature was derived. Map method may help define data quality. Values: GPS-Driven; GPS-Flight; GPS-Walked; GPS-Walked/Driven; GPS-Unknown Travel Method; Hand Sketch; Digitized-Image; Digitized-Topo; Digitized-Other; Image Interpretation; Infrared Image; Modeled; Mixed Methods; Remote Sensing Derived; Survey/GCDB/Cadastral; Vector; Other
    GIS_ACRES - GIS calculated acres within the fire perimeter. Not adjusted for unburned areas within the fire perimeter. Total should include 1 decimal place. (ArcGIS: Precision=10; Scale=1). Example: 23.9
    UNQE_FIRE_ - Unique fire identifier in the form Year-Unit Identifier-Local Incident Identifier (yyyy-SSXXX-xxxxxx). SS = State Code or International Code; XXX or XXXX = a code assigned to an organizational unit; xxxxxx = alphanumeric with hyphens or periods. The unit identifier portion corresponds to the POINT OF ORIGIN RESPONSIBLE AGENCY UNIT IDENTIFIER (POOResponsibleUnit) from the responsible unit's corresponding fire report. Example: 2013-CORMP-000001
    LOCAL_NUM - Local incident identifier (dispatch number). A number or code that uniquely identifies an incident for a particular local fire management organization within a particular calendar year. Field is string to allow for leading zeros when the local incident identifier is less than 6 characters. (IRWIN required). Example: 123456.
    UNIT_ID - NWCG Unit Identifier of landowner/jurisdictional agency unit at the point of origin of a fire. (NFIRS ID should be used only when no NWCG Unit Identifier exists.) Example: CORMP
    COMMENTS - Additional information describing the feature. Free text.
    FEATURE_CA - Type of wildland fire polygon: Wildfire (represents final fire perimeter or last daily fire perimeter available), Prescribed Fire, or Unknown.
    GEO_ID - Primary key for linking geospatial objects with other database systems. Required for every feature. This field may be renamed for each standard to fit the feature. Globally Unique Identifier (GUID).

    Cross-walk from sources (GeoID) and other processing notes
    AK: GEOID = OBJECTID of provided file geodatabase (4,580 records through 2021); other federal sources for AK data removed.
    CA: GEOID = OBJECTID of downloaded file geodatabase (12,776 records, federal fires removed, includes RX).
    FWS: GEOID = OBJECTID of service download, combined history 2005-2021 (2,052 records). A handful of WFIGS fires (11) were added that were not in the FWS record.
    BIA: GEOID = "FireID" for provided 2017/2018 data (416 records) or WFDSS PID (415 records). An additional 917 fires from WFIGS were added; GEOID = GLOBALID in source.
    NPS: GEOID = EVENT ID (IRWINID or FRM_ID from FOD), 29,943 records, includes RX.
    BLM: GEOID = GUID from BLM FPER and GLOBALID from WFIGS. Date Current = best available modify_date, create_date, fire_cntrl_dt or fire_dscvr_dt to reduce the number of 9999 entries in FireYear. Source: FPER (25,389 features), WFIGS (5,357 features).
    USFS: GEOID = GLOBALID in source, 46,574 features. Also fixed Date Current to the best available date from perimeterdatetime, revdate, discoverydatetime, dbsourcedate to reduce the number of 1899 entries in FireYear.

    Relevant Websites and References
    Alaska Fire Service: https://afs.ak.blm.gov/
    CALFIRE: https://frap.fire.ca.gov/mapping/gis-data
    BIA: data prior to 2017 from WFDSS, 2017-2018 agency provided, 2019 and after from WFIGS
    BLM: https://gis.blm.gov/arcgis/rest/services/fire/BLM_Natl_FirePerimeter/MapServer
    NPS: new dataset provided by NPS Fire & Aviation GIS, cross-checked against WFIGS for any missing perimeters
    FWS: https://services.arcgis.com/QVENGdaPbd4LUkLV/arcgis/rest/services/USFWS_Wildfire_History_gdb/FeatureServer
    USFS: https://apps.fs.usda.gov/arcx/rest/services/EDW/EDW_FireOccurrenceAndPerimeter_01/MapServer
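    As a hedged sketch for working with the UNQE_FIRE_ attribute described above (the yyyy-SSXXX-xxxxxx pattern), the helper below splits an identifier into its year, unit, and local-incident parts; the example value comes from the attribute definition.

    ```python
    from dataclasses import dataclass

    @dataclass
    class UniqueFireId:
        year: int        # FIRE_YEAR
        unit_id: str     # NWCG unit identifier at the point of origin, e.g. CORMP
        local_num: str   # local incident identifier (dispatch number)

    def parse_unqe_fire(value: str) -> UniqueFireId:
        """Split a UNQE_FIRE_ value of the form yyyy-SSXXX-xxxxxx into its parts.
        The local identifier itself may contain hyphens, so split only twice."""
        year, unit_id, local_num = value.split("-", 2)
        return UniqueFireId(int(year), unit_id, local_num)

    print(parse_unqe_fire("2013-CORMP-000001"))
    # -> UniqueFireId(year=2013, unit_id='CORMP', local_num='000001')
    ```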

  17. C

    DOMI Street Closures For GIS Mapping

    • data.wprdc.org
    csv, html
    Updated Jul 14, 2025
    Cite
    City of Pittsburgh (2025). DOMI Street Closures For GIS Mapping [Dataset]. https://data.wprdc.org/dataset/street-closures
    Explore at:
    csv, html
    Available download formats
    Dataset updated
    Jul 14, 2025
    Dataset provided by
    City of Pittsburgh
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview

    This dataset contains all DOMI Street Closure Permit data in the Computronix (CX) system from the date of its adoption (in May 2020) until the present. The data in each record can be used to determine when street closures are occurring, who is requesting these closures, why the closure is being requested, and for mapping the closures themselves. It is updated hourly (as of March 2024).

    Preprocessing/Formatting

    It is important to distinguish between a permit, a permit's street closure(s), and the roadway segments that are referenced to that closure(s).

    • The CX system identifies a street in segments of roadway. (As an example, the CX system could divide Maple Street into multiple segments.)

    • A single street closure may span multiple segments of a street.

    • The street closure permit refers to all the component line segments.

    • A permit may have multiple streets which are closed. Street closure permits often reference many segments of roadway.

    The roadway_id field is a unique GIS line segment representing the aforementioned segments of road. The roadway_id values are assigned internally by the CX system and are unlikely to be known by the permit applicant. A section of roadway may have multiple permits issued over its lifespan. Therefore, a given roadway_id value may appear in multiple permits.

    The field closure_id represents a unique ID for each closure, and permit_id uniquely identifies each permit. This is in contrast to the aforementioned roadway_id field which, again, is a unique ID only for the roadway segments.

    City teams that use this data requested that each segment of each street closure permit be represented as a unique row in the dataset. Thus, a street closure permit that refers to three segments of roadway would be represented as three rows in the table. Aside from the roadway_id field, most other data from that permit pertains equally to those three rows. Thus, the values in most fields of the three records are identical.

    Each row has the fields segment_num and total_segments, which detail the relationship of each record to its corresponding permit by street segment. The above example produced three records for a single permit; in this case, total_segments would equal 3 for each record, and each record would have a unique segment_num value between 1 and 3.

    The geometry field consists of string values of lat/long coordinates, which can be used to map the street segments.

    All string text (most fields) was converted to UPPERCASE. Most of the data are manually entered and often contain non-uniform formatting. While several solutions for cleaning the data exist, the text was transformed to UPPERCASE to provide some degree of regularization. Beyond that, it is recommended that the user carefully think through cleaning any unstructured data, as there are many nuances to consider. Future improvements to this ETL pipeline may approach this problem with a more sophisticated technique.
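    As a hedged sketch of working with the segment fields described above (the field names follow this description; the file path is a placeholder for a local export of the dataset), the following groups the per-segment rows back into permits and checks that every expected segment is present:

    ```python
    import pandas as pd

    # Hypothetical local export of the street-closure table.
    closures = pd.read_csv("domi_street_closures.csv")

    # One row per (permit, roadway segment); collapse to one row per permit and
    # verify that the number of distinct segment_num values matches total_segments.
    per_permit = (
        closures.groupby("permit_id")
        .agg(
            segments_present=("segment_num", "nunique"),
            total_segments=("total_segments", "first"),
            roadway_ids=("roadway_id", list),
        )
        .reset_index()
    )
    incomplete = per_permit[per_permit["segments_present"] != per_permit["total_segments"]]
    print(f"{len(incomplete)} permits are missing one or more roadway segments")
    ```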

    Known Uses

    These data are used by DOMI to track the status of street closures (and associated permits).

    Further Documentation and Resources

    An archived dataset containing historical street closure records (from before May of 2020) for the City of Pittsburgh may be found here: https://data.wprdc.org/dataset/right-of-way-permits

  18. Sentinel-2 10m Land Use/Land Cover Time Series

    • colorado-river-portal.usgs.gov
    • pacificgeoportal.com
    • +9more
    Updated Oct 19, 2022
    Cite
    Esri (2022). Sentinel-2 10m Land Use/Land Cover Time Series [Dataset]. https://colorado-river-portal.usgs.gov/datasets/esri::sentinel-2-10m-land-use-land-cover-time-series-1
    Explore at:
    Dataset updated
    Oct 19, 2022
    Dataset authored and provided by
    Esrihttp://esri.com/
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    This layer displays a global map of land use/land cover (LULC) derived from ESA Sentinel-2 imagery at 10m resolution. Each year is generated with Impact Observatory’s deep learning AI land classification model, trained using billions of human-labeled image pixels from the National Geographic Society. The global maps are produced by applying this model to the Sentinel-2 Level-2A image collection on Microsoft’s Planetary Computer, processing over 400,000 Earth observations per year.

    The algorithm generates LULC predictions for nine classes, described in detail below. The year 2017 has a land cover class assigned for every pixel, but its class is based upon fewer images than the other years. The years 2018-2024 are based upon a more complete set of imagery. For this reason, the year 2017 may have less accurate land cover class assignments than the years 2018-2024.

    Key Properties
    Variable mapped: Land use/land cover in 2017, 2018, 2019, 2020, 2021, 2022, 2023, 2024
    Source Data Coordinate System: Universal Transverse Mercator (UTM) WGS84
    Service Coordinate System: Web Mercator Auxiliary Sphere WGS84 (EPSG:3857)
    Extent: Global
    Source imagery: Sentinel-2 L2A
    Cell Size: 10 meters
    Type: Thematic
    Attribution: Esri, Impact Observatory
    Analysis: Optimized for analysis

    Class Definitions
    1. Water: Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, no rock outcrop nor built up features like docks; examples: rivers, ponds, lakes, oceans, flooded salt plains.
    2. Trees: Any significant clustering of tall (~15 feet or higher) dense vegetation, typically with a closed or dense canopy; examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamp or mangroves (dense/tall vegetation with ephemeral water or canopy too thick to detect water underneath).
    4. Flooded vegetation: Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded area that is a mix of grass/shrub/trees/bare ground; examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture.
    5. Crops: Human planted/plotted cereals, grasses, and crops not at tree height; examples: corn, wheat, soy, fallow plots of structured land.
    7. Built Area: Human made structures; major road and rail networks; large homogenous impervious surfaces including parking structures, office buildings and residential housing; examples: houses, dense villages / towns / cities, paved roads, asphalt.
    8. Bare ground: Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with no to little vegetation; examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines.
    9. Snow/Ice: Large homogenous areas of permanent snow or ice, typically only in mountain areas or highest latitudes; examples: glaciers, permanent snowpack, snow fields.
    10. Clouds: No land cover information due to persistent cloud cover.
    11. Rangeland: Open areas covered in homogenous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field); examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures. Mix of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock; scrub-filled clearings within dense forests that are clearly not taller than trees; examples: moderate to sparse cover of bushes, shrubs and tufts of grass, savannas with very sparse grasses, trees or other plants.

    NOTE: Land use focus does not provide the spatial detail of a land cover map. As such, for the built area classification, yards, parks, and groves will appear as built area rather than trees or rangeland classes.

    Usage Information and Best Practices

    Processing Templates
    This layer includes a number of preconfigured processing templates (raster function templates) to provide on-the-fly data rendering and class isolation for visualization and analysis. Each processing template includes labels and descriptions to characterize the intended usage, which may be visualization, analysis, or both.

    Visualization
    The default rendering on this layer displays all classes. There are a number of on-the-fly renderings/processing templates designed specifically for data visualization. By default, the most recent year is displayed. To discover and isolate specific years for visualization in Map Viewer, try using the Image Collection Explorer.

    Analysis
    In order to leverage the optimization for analysis, the capability must be enabled by your ArcGIS organization administrator. More information on enabling this feature can be found in the ‘Regional data hosting’ section of this help doc. Optimized for analysis means this layer does not have size constraints for analysis, and it is recommended for multisource analysis with other layers optimized for analysis. See this group for a complete list of imagery layers optimized for analysis. Prior to running analysis, users should always provide some form of data selection with either a layer filter (e.g. for a specific date range, cloud cover percent, mission, etc.) or by selecting specific images. To discover and isolate specific images for analysis in Map Viewer, try using the Image Collection Explorer. Zonal Statistics is a common tool used for understanding the composition of a specified area by reporting the total estimates for each of the classes.

    General
    If you are new to Sentinel-2 LULC, the Sentinel-2 Land Cover Explorer provides a good introductory user experience for working with this imagery layer. For more information, see this Quick Start Guide. Global land use/land cover maps provide information on conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land use/land cover anywhere on Earth.

    Classification Process
    These maps include Version 003 of the global Sentinel-2 land use/land cover data product. It is produced by a deep learning model trained using over five billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 L2A surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map for each year. The input Sentinel-2 L2A data was accessed via Microsoft’s Planetary Computer and scaled using Microsoft Azure Batch.

    Citation
    Karra, Kontgis, et al. “Global land use/land cover with Sentinel-2 and deep learning.” IGARSS 2021-2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.

    Acknowledgements
    Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.
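    As a small illustration only (a sketch, not part of the product documentation), the class values listed above can be used to summarize an exported tile of this layer, for example as a per-class area tally. The tile below is synthetic and stands in for a real export.

    ```python
    import numpy as np

    # Class values and names as listed in the layer's class definitions
    # (values 3 and 6 are not used in this product).
    LULC_CLASSES = {
        1: "Water", 2: "Trees", 4: "Flooded vegetation", 5: "Crops",
        7: "Built Area", 8: "Bare ground", 9: "Snow/Ice",
        10: "Clouds", 11: "Rangeland",
    }

    def class_area_summary(lulc: np.ndarray, cell_size_m: float = 10.0) -> dict:
        """Tally pixel counts per class and convert to square kilometers."""
        values, counts = np.unique(lulc, return_counts=True)
        cell_area_km2 = (cell_size_m ** 2) / 1e6
        return {
            LULC_CLASSES.get(int(v), f"Unknown ({int(v)})"): int(c) * cell_area_km2
            for v, c in zip(values, counts)
        }

    # Tiny synthetic 3x3 tile standing in for a real export of this layer.
    tile = np.array([[1, 2, 2], [11, 7, 7], [7, 7, 5]], dtype=np.uint8)
    print(class_area_summary(tile))
    ```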

  19. c

    Protected Areas Exclusion (Solar)

    • gis.data.cnra.ca.gov
    • data.ca.gov
    • +4more
    Updated Mar 3, 2023
    + more versions
    Cite
    California Energy Commission (2023). Protected Areas Exclusion (Solar) [Dataset]. https://gis.data.cnra.ca.gov/datasets/CAEnergy::protected-areas-exclusion-solar-1
    Explore at:
    Dataset updated
    Mar 3, 2023
    Dataset authored and provided by
    California Energy Commission
    Area covered
    Description

    The geospatial data reflected in the protected area layer mostly pertain to natural and wilderness areas where development of utility-scale renewable energy is prohibited, and were heavily based on RETI 1.0 blackout areas.[1] The protected area layer is distinguished for solar PV technology by the BLM greater sage grouse habitat management area, which provides separate exclusion areas for the different technology types. Tables 1 and 2 below list the data sources and, if applicable, the precise selection query for each dataset that makes up the protected area layer.

    Table 1: Datasets used in the Protected Area Layer

    Dataset

    Example Designations

    Citation or hyperlink

    PAD-US (CBI Edition)

    National Parks, GAP Status 1 and 2, State Parks, Open Spaces, Natural Areas

    “PAD-US (CBI Edition) Version 2.1b, California”. Conservation Biology Institute. 2016. https://databasin.org/datasets/64538491f43e42ba83e26b849f2cad28.

    Conservation Easements

    California Conservation Easement Database (CCED), 2022a. 2022. www.CALands.org. Accessed December 2022.

    Inventoried Roadless Areas

    “Inventoried Roadless Areas.” US Forest Service. Dec 12, 2022. https://www.fs.usda.gov/detail/roadless/2001roadlessrule/maps/?cid=stelprdb5382437

    BLM National Landscape Conservation System

    Wilderness Areas, Wilderness Study Areas, National Monuments, National Conservation Lands, Conservation Lands of the California Desert, Scenic Rivers

    https://gbp-blm-egis.hub.arcgis.com/datasets/BLM-EGIS::blm-ca-wilderness-areas

    https://gbp-blm-egis.hub.arcgis.com/datasets/BLM-EGIS::blm-ca-wilderness-study-areas

    https://gbp-blm-egis.hub.arcgis.com/datasets/BLM-EGIS::blm-ca-national-monuments-nca-forest-reserves-other-poly/

    Greater Sage Grouse Habitat Conservation Areas (BLM)

    For solar technology: BLM_Managm IN (‘PHMA’, ‘GHMA’, ‘OHMA’) For wind technology: BLM_Managm = ‘PHMA’

    “Nevada and Northeastern California Greater Sage-Grouse Approved Resource Management Plan Amendment.” US Department of the Interior Bureau of Land Management Nevada State Office. 2015. https://eplanning.blm.gov/public_projects/lup/103343/143707/176908/NVCA_Approved_RMP_Amendment.pdf

    Other BLM Protected Areas

    Areas of Critical Environmental Concern (ACECs), Recreation Areas (SRMA, ERMA, OHV Designated Areas), including Vinagre Wash Special Recreation Management Area, National Scenic Areas, including Alabama Hills National Scenic Area

    https://gbp-blm-egis.hub.arcgis.com/datasets/BLM-EGIS::blm-ca-off-highway-vehicle-designations

    https://gbp-blm-egis.hub.arcgis.com/datasets/BLM-EGIS::blm-ca-areas-of-critical-environmental-concern

    BLM, personal communication, November 2, 2022.

    Mono Basin NFSA

    https://pcta.maps.arcgis.com/home/item.html?id=cf1495f8e09940989995c06f9e290f6b#overview

    Terrestrial 30x30 Conserved Areas

    Gap Status 1 and 2

    CA Nature. 30x30 Conserved Areas, Terrestrial. 2021. https://www.californianature.ca.gov/datasets/CAnature::30x30-conserved-areas-terrestrial/ Accessed September 2022.

    CPAD

    Open Spaces and Parks under city or county level

    California Protected Areas Database (CPAD), 2022b. 2022. https://www.calands.org/cpad/. Accessed February 22, 2023.

    USFS Special Interest Management Areas

    Research Natural Areas, Recreation Areas, National Recreational Trail, Experimental Forest, Scenic Area

    https://data-usfs.hub.arcgis.com/datasets/usfs::special-interest-management-areas-feature-layer/about

    Proposed Protected Area

    Molok Luyuk Extension (Berryessa Mtn NM Expansion)

    CalWild, personal communication, January 19, 2023.

    Table 2: Query Definitions for Components of the Protected Areas Dataset

    PAD-US (CBI Edition):
    p_des_tp IN ('Wild, Scenic and Recreation River', 'Area of Critical Environmental Concern', 'Ecological Reserve', 'National Conservation Area', 'National Historic Site', 'National Historical Park', 'National Monument', 'National Park General Public Land', 'National Preserve', 'National Recreation Area', 'National Scenic Area', 'National Seashore', 'Wilderness Study Area', 'Wilderness Area', 'Wildlife Management Area', 'State Wildlife Management Area', 'State Park', 'State Recreation Area', 'State Nature Preserve/Reserve', 'State Natural Area', 'State Ecological Reserve', 'State Cultural/Historic Area', 'State Beach', 'Special Management Area', 'National Wildlife Refuge', 'Natural Area', 'Nature Preserve', 'Research Natural Area')
    Or s_des_tp IN ('Natioanal Monument', 'National Monument', 'National Park General Public Land', 'National Preserve', 'National Recreation Area', 'National Scenic Area', 'National Seashore', 'National Conservation Area', 'Area of Critical Environmental Concern', 'National Wildlife Refuge', 'State Park', 'State Wildlife Area', 'State Wildlife Management Area', 'State Wildlife Refuge', 'State Ecological Reserve', 'Wild, Scenic and Recreation River', 'Wilderness Area', 'Wildlife Management Area')
    Or t_des_tp IN ('National Monument', 'National Park General Public Land', 'National Recreation Area', 'Area of Critical Environmental Concern', 'National Conservation Area', 'State Wildlife Management Area', 'Wild, Scenic and Recreation River', 'Wildlife Management Area')
    Or p_loc_ds IN ('Ecological Reserve', 'Research and Educational Land')
    Or gap_sts IN ('1', '2')
    Or own_type = 'Private Conservation Land'
    Or (own_type = 'Local Land' And (p_des_tp LIKE '%"Open Space"%' Or p_des_tp LIKE '%Park%' Or p_des_tp LIKE '%Recreation Area%' Or p_des_tp LIKE '%Natural Area%'))
    Or (p_des_tp = 'Other State Land' And (p_loc_ds IN ('State Vehicular Recreation Area', 'BLM Resource Management Area', 'Resource Management Area') And gap_sts <> '2'))

    CPAD:
    AGNCY_LEV IN ('City', 'County') And ACCESS_TYP = 'Open Access' And (UNIT_NAME LIKE '%Park%' OR UNIT_NAME LIKE '%Open Space%' OR UNIT_NAME LIKE '%park%' OR UNIT_NAME LIKE '%Recreation Area%' OR UNIT_NAME LIKE '%Natural Area%' OR GAP2_acres > 0 OR GAP1_acres > 0)

    Greater Sage-Grouse Habitat Conservation Areas (BLM):
    For Solar Technology: BLM_Managm IN ('PHMA', 'GHMA', 'OHMA')
    For Wind Technology: BLM_Managm = 'PHMA'

    This layer is featured in the CEC 2023 Land-Use Screens for Electric System Planning data viewer. For a complete description of the creation of this layer and its use in electric system planning, please refer to the Land Use Screens Staff Report in the CEC Energy Planning Library.

    [1] Final RETI Phase 2A report, available at https://ww2.energy.ca.gov/2009publications/RETI-1000-2009-001/RETI-1000-2009-001-F-REV2.PDF.

    Change Log: Version 1.1 (January 22, 2024 10:29 AM) Layer revised to allow for gaps to remain when combining all components of the protected area layer.
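    As a hedged illustration only (not the CEC workflow), the CPAD selection from Table 2 above can be reproduced on a local attribute export with pandas. The file name is a placeholder, and the field names are taken directly from the query; uppercasing the unit name covers both the '%Park%' and '%park%' patterns.

    ```python
    import pandas as pd

    # Hypothetical local export of CPAD unit attributes.
    cpad = pd.read_csv("cpad_units.csv")

    name = cpad["UNIT_NAME"].str.upper()
    mask = (
        cpad["AGNCY_LEV"].isin(["City", "County"])
        & (cpad["ACCESS_TYP"] == "Open Access")
        & (
            name.str.contains("PARK")
            | name.str.contains("OPEN SPACE")
            | name.str.contains("RECREATION AREA")
            | name.str.contains("NATURAL AREA")
            | (cpad["GAP2_acres"] > 0)
            | (cpad["GAP1_acres"] > 0)
        )
    )
    protected_cpad = cpad[mask]
    print(len(protected_cpad), "CPAD units meet the protected-area criteria")
    ```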

  20. W

    USA Flood Hazard Areas

    • wifire-data.sdsc.edu
    • gis-calema.opendata.arcgis.com
    • +1more
    csv, esri rest +4
    Updated Jul 14, 2020
    + more versions
    Cite
    CA Governor's Office of Emergency Services (2020). USA Flood Hazard Areas [Dataset]. https://wifire-data.sdsc.edu/dataset/usa-flood-hazard-areas
    Explore at:
    geojson, csv, kml, esri rest, html, zip
    Available download formats
    Dataset updated
    Jul 14, 2020
    Dataset provided by
    CA Governor's Office of Emergency Services
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    United States
    Description
    The Federal Emergency Management Agency (FEMA) produces Flood Insurance Rate maps and identifies Special Flood Hazard Areas as part of the National Flood Insurance Program's floodplain management. Special Flood Hazard Areas have regulations that include the mandatory purchase of flood insurance.

    Dataset Summary

    Phenomenon Mapped: Flood Hazard Areas
    Coordinate System: Web Mercator Auxiliary Sphere
    Extent: 50 United States plus Puerto Rico, the US Virgin Islands, Guam, the Northern Mariana Islands and American Samoa
    Visible Scale: The layer is limited to scales of 1:1,000,000 and larger. Use the USA Flood Hazard Areas imagery layer for smaller scales.
    Publication Date: April 1, 2019

    This layer is derived from the April 1, 2019 version of the National Flood Hazard Layer feature class S_Fld_Haz_Ar. The data were aggregated into eight classes to produce the Esri Symbology field based on symbology provided by FEMA. All other layer attributes are derived from the National Flood Hazard Layer. The layer was projected to Web Mercator Auxiliary Sphere and the resolution set to 1 meter.

    To improve performance Flood Zone values "Area Not Included", "Open Water", "D", "NP", and No Data were removed from the layer. Areas with Flood Zone value "X" subtype "Area of Minimal Flood Hazard" were also removed. An imagery layer created from this dataset provides access to the full set of records in the National Flood Hazard Layer.

    A web map featuring this layer is available for you to use.

    What can you do with this Feature Layer?

    Feature layers work throughout the ArcGIS system. Generally your work flow with feature layers will begin in ArcGIS Online or ArcGIS Pro. Below are just a few of the things you can do with a feature service in Online and Pro.

    ArcGIS Online
    • Add this layer to a map in the map viewer. The layer is limited to scales of approximately 1:1,000,000 or larger but an imagery layer created from the same data can be used at smaller scales to produce a webmap that displays across the full range of scales. The layer or a map containing it can be used in an application.
    • Change the layer’s transparency and set its visibility range
    • Open the layer’s attribute table and make selections and apply filters. Selections made in the map or table are reflected in the other. Center on selection allows you to zoom to features selected in the map or table and show selected records allows you to view the selected records in the table.
    • Change the layer’s style and filter the data. For example, you could change the symbology field to Special Flood Hazard Area and set a filter for = “T” to create a map of only the special flood hazard areas.
    • Add labels and set their properties
    • Customize the pop-up
    ArcGIS Pro
    • Add this layer to a 2d or 3d map. The same scale limit as Online applies in Pro
    • Use as an input to geoprocessing. For example, copy features allows you to select then export portions of the data to a new feature class. Areas up to 1,000-2,000 features can be exported successfully.
    • Change the symbology and the attribute field used to symbolize the data
    • Open table and make interactive selections with the map
    • Modify the pop-ups
    • Apply Definition Queries to create sub-sets of the layer
    This layer is part of the Living Atlas of the World that provides an easy way to explore the landscape layers and many other beautiful and authoritative maps on hundreds of topics.
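    As a hedged sketch (not part of the layer's documentation), the query below pulls only Special Flood Hazard Area polygons from the layer's REST endpoint, mirroring the "= 'T'" filter described above. The service URL is a placeholder, and the SFHA_TF, FLD_ZONE, and ZONE_SUBTY field names are assumed from the source National Flood Hazard Layer schema; check the layer's own field list before relying on them.

    ```python
    import requests

    # Placeholder endpoint; substitute the query URL of the USA Flood Hazard Areas
    # feature layer available to your ArcGIS organization.
    QUERY_URL = (
        "https://services.arcgis.com/<org>/arcgis/rest/services/"
        "USA_Flood_Hazard_Areas/FeatureServer/0/query"
    )

    params = {
        "where": "SFHA_TF = 'T'",          # only Special Flood Hazard Areas
        "outFields": "FLD_ZONE,ZONE_SUBTY",  # flood zone and subtype attributes
        "returnGeometry": "false",
        "resultRecordCount": 10,
        "f": "json",
    }

    response = requests.get(QUERY_URL, params=params, timeout=30)
    for feature in response.json().get("features", []):
        print(feature["attributes"])
    ```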