This data release contains the analytical results and evaluated source data files of geospatial analyses for identifying areas in Alaska that may be prospective for different types of lode gold deposits, including orogenic, reduced-intrusion-related, epithermal, and gold-bearing porphyry. The spatial analysis is based on queries of statewide source datasets of aeromagnetic surveys, Alaska Geochemical Database (AGDB3), Alaska Resource Data File (ARDF), and Alaska Geologic Map (SIM3340) within areas defined by 12-digit HUCs (subwatersheds) from the National Watershed Boundary dataset. The packages of files available for download are: 1. LodeGold_Results_gdb.zip - The analytical results in geodatabase polygon feature classes which contain the scores for each source dataset layer query, the accumulative score, and a designation for high, medium, or low potential and high, medium, or low certainty for a deposit type within the HUC. The data is described by FGDC metadata. An mxd file, and cartographic feature classes are provided for display of the results in ArcMap. An included README file describes the complete contents of the zip file. 2. LodeGold_Results_shape.zip - Copies of the results from the geodatabase are also provided in shapefile and CSV formats. The included README file describes the complete contents of the zip file. 3. LodeGold_SourceData_gdb.zip - The source datasets in geodatabase and geotiff format. Data layers include aeromagnetic surveys, AGDB3, ARDF, lithology from SIM3340, and HUC subwatersheds. The data is described by FGDC metadata. An mxd file and cartographic feature classes are provided for display of the source data in ArcMap. Also included are the python scripts used to perform the analyses. Users may modify the scripts to design their own analyses. The included README files describe the complete contents of the zip file and explain the usage of the scripts. 4. LodeGold_SourceData_shape.zip - Copies of the geodatabase source dataset derivatives from ARDF and lithology from SIM3340 created for this analysis are also provided in shapefile and CSV formats. The included README file describes the complete contents of the zip file.
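The scoring pattern described above (per-layer query scores summed into a cumulative score, then binned into potential classes per HUC) can be explored on the published feature classes. The Python sketch below is purely illustrative: the layer name, score field names, HUC identifier field, and thresholds are placeholders, not the release's actual schema.

```python
import geopandas as gpd

# Read one of the published results layers (layer name is hypothetical).
hucs = gpd.read_file("LodeGold_Results.gdb", layer="Orogenic_HUC12")

# Hypothetical per-query score fields; the release's actual field names differ.
score_fields = ["aeromag_score", "agdb3_score", "ardf_score", "lithology_score"]
hucs["cumulative_score"] = hucs[score_fields].sum(axis=1)

def potential_class(total, high=6, medium=3):
    # Illustrative cut-offs only; the published classification uses its own rules.
    if total >= high:
        return "high"
    return "medium" if total >= medium else "low"

hucs["potential"] = hucs["cumulative_score"].apply(potential_class)
print(hucs[["HUC12", "cumulative_score", "potential"]].head())  # HUC12 field name assumed
```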
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is about books. It has 1 row and is filtered where the book is Learning GIS using open source software : an applied guide for geo-spatial analysis. It features 7 columns including author, publication date, language, and book publisher.
U.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
The research focus in the field of remotely sensed imagery has shifted from collection and warehousing of data, tasks for which a mature technology already exists, to auto-extraction of information and knowledge discovery from this valuable resource, tasks for which technology is still under active development. In particular, intelligent algorithms for analysis of very large rasters, either high-resolution images or medium-resolution global datasets, which are becoming more and more prevalent, are lacking. We propose to develop the Geospatial Pattern Analysis Toolbox (GeoPAT), a computationally efficient, scalable, and robust suite of algorithms that supports GIS processes such as segmentation, unsupervised/supervised classification of segments, query and retrieval, and change detection in giga-pixel and larger rasters. At the core of the technology that underpins GeoPAT is the novel concept of pattern-based image analysis. Unlike pixel-based or object-based (OBIA) image analysis, GeoPAT partitions an image into overlapping square scenes containing 1,000 to 100,000 pixels and performs further processing on those scenes using pattern signatures and pattern similarity, concepts first developed in the field of Content-Based Image Retrieval. This fusion of methods from two different areas of research results in an orders-of-magnitude performance boost in application to very large images without sacrificing quality of the output.
GeoPAT v.1.0 already exists as a GRASS GIS add-on that has been developed and tested on medium-resolution continental-scale datasets including the National Land Cover Dataset and the National Elevation Dataset. The proposed project will develop GeoPAT v.2.0, a much improved and extended version of the present software. We estimate an overall entry TRL for GeoPAT v.1.0 to be 3-4 and the planned exit TRL for GeoPAT v.2.0 to be 5-6. Moreover, several new important functionalities will be added. Proposed improvements include conversion of GeoPAT from a GRASS add-on into stand-alone software capable of being integrated with other systems, full implementation of a web-based interface, writing new modules to extend its applicability to high-resolution images/rasters and medium-resolution climate data, extension to the spatio-temporal domain, enabling hierarchical search and segmentation, development of improved pattern signatures and their similarity measures, parallelization of the code, and implementation of a divide-and-conquer strategy to speed up selected modules.
The proposed technology will contribute to a wide range of Earth Science investigations and missions by enabling extraction of information from diverse types of very large datasets. Analyzing the entire dataset without the need to sub-divide it due to software limitations offers the important advantage of uniformity and consistency. We propose to demonstrate the utilization of GeoPAT technology on two specific applications. The first application is a web-based, real-time, visual search engine for local physiography utilizing query-by-example on the entire, global-extent SRTM 90 m resolution dataset. The user selects a region where a process of interest is known to occur, and the search engine identifies other areas around the world with similar physiographic character and thus potential for a similar process. The second application is monitoring urban areas in their entirety at high resolution, including mapping of impervious surfaces and identifying settlements for improved disaggregation of census data.
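As a deliberately simplified illustration of the pattern-signature idea, the Python sketch below tiles a categorical raster into square scenes, summarizes each scene with a normalized class histogram, and ranks scenes by histogram-intersection similarity to a query scene. It is a toy sketch of the concept only, not GeoPAT's implementation (GeoPAT uses overlapping scenes and richer signatures).

```python
import numpy as np

def scene_signature(tile, n_classes):
    """Normalized class histogram used as the scene's pattern signature."""
    hist = np.bincount(tile.ravel(), minlength=n_classes).astype(float)
    return hist / hist.sum()

def similarity(sig_a, sig_b):
    """Histogram intersection: 1.0 means identical class composition."""
    return np.minimum(sig_a, sig_b).sum()

rng = np.random.default_rng(0)
raster = rng.integers(0, 5, size=(3000, 3000))   # stand-in for a land-cover raster
size = 100                                       # scene size in pixels (non-overlapping here)

signatures = {}
for i in range(0, raster.shape[0], size):
    for j in range(0, raster.shape[1], size):
        signatures[(i, j)] = scene_signature(raster[i:i + size, j:j + size], n_classes=5)

query = signatures[(0, 0)]
best = max((key for key in signatures if key != (0, 0)),
           key=lambda key: similarity(query, signatures[key]))
print("scene most similar to (0, 0):", best)
```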
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘Regularly scheduled tow-away zone GIS data’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://catalog.data.gov/dataset/97054e35-2ad3-4c9d-aec2-91a4368ef4fe on 26 January 2022.
--- Dataset description provided by original source is as follows ---
This dataset contains locations and schedules of regular tow-away zones which apply at the blockface-level in San Francisco. It does not include temporary street closures which could result in towing. The dataset contains:
Geospatial information for blockfaces with known tow schedules
Tow schedules with starting and ending hours and days applicable
Address ranges for the blockface segment
The centerline identifier of the street segment on which the blockface occurs
Notes, if known, to enhance the information about the regulation.
This dataset was compiled in October and November of 2011. It reflects legislated changes through November 1, 2011. It is at least 95% accurate and may not include all blockface-level tow-away zones with regular, weekly schedules. Please email corrections or discrepancies to info@sfpark.org. Always look for signage near your parking space and follow posted regulations to avoid parking citations and possible towage. See http://sfpark.org/resources/regularly-scheduled-tow-away-zone-gis-data/ for more.
--- Original source retains full ownership of the source dataset ---
These products were developed to provide scientific and correspondingly spatially explicit information regarding the distribution and abundance of conifers (namely, singleleaf pinyon (Pinus monophylla), Utah juniper (Juniperus osteosperma), and western juniper (Juniperus occidentalis)) in Nevada and portions of northeastern California. Encroachment of these trees into sagebrush ecosystems of the Great Basin can present a threat to populations of greater sage-grouse (Centrocercus urophasianus). These data provide land managers and other interested parties with a high-resolution representation of conifers across the range of sage-grouse habitat in Nevada and northeastern California that can be used for a variety of management and research applications. We mapped conifer trees at 1 x 1 meter resolution across the extent of all Nevada Department of Wildlife Sage-grouse Population Management Units plus a 10 km buffer. Using 2010 and 2013 National Agriculture Imagery Program digital orthophoto quads (DOQQs) as our reference imagery, we applied object-based image analysis with Feature Analyst software (Overwatch, 2013) to classify conifer features across our study extent. This method relies on machine learning algorithms that extract features from imagery based on their spectral and spatial signatures. Conifers in 6230 DOQQs were classified and outputs were then tested for errors of omission and commission using stratified random sampling. Results of the random sampling were used to populate a confusion matrix and calculate the overall map accuracy of 84.3 percent. We provide 5 sets of products for this mapping process across the entire mapping extent: (1) a shapefile representing accuracy results linked to our mapping subunits; (2) binary rasters representing conifer presence or absence at a 1 x 1 meter resolution; (3) a 30 x 30 meter resolution raster representing percentage of conifer canopy cover within each cell from 0 to 100; (4) 1 x 1 meter resolution canopy cover classification rasters derived from a 50 meter radius moving window analysis; and (5) a raster prioritizing pinyon-juniper management for sage-grouse habitat restoration efforts. The latter three products can be reclassified into user-specified bins to meet different management or study objectives, which include approximations for phases of encroachment. These products complement, and in some cases improve upon, existing conifer maps in the western United States, and will help facilitate sage-grouse habitat management and sagebrush ecosystem restoration. These data support the following publication: Coates, P.S., Gustafson, K.B., Roth, C.L., Chenaille, M.P., Ricca, M.A., Mauch, Kimberly, Sanchez-Chopitea, Erika, Kroger, T.J., Perry, W.M., and Casazza, M.L., 2017, Using object-based image analysis to conduct high-resolution conifer extraction at regional spatial scales: U.S. Geological Survey Open-File Report 2017-1093, 40 p., https://doi.org/10.3133/ofr20171093. References: ESRI, 2013, ArcGIS Desktop: Release 10.2: Environmental Systems Research Institute. Overwatch, 2013, Feature Analyst Version 5.1.2.0 for ArcGIS: Overwatch Systems Ltd.
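For example, product (3) above can be approximated from product (2) by block-averaging the 1 x 1 m presence/absence raster into 30 x 30 m percent-cover cells. The Python sketch below assumes a hypothetical input file name and a raster whose dimensions are handled by trimming partial blocks; it illustrates the derivation only and is not the workflow used to build the published rasters.

```python
import numpy as np
import rasterio

# Hypothetical 1 m binary conifer presence/absence raster (1 = conifer, 0 = none).
with rasterio.open("conifer_presence_1m.tif") as src:
    presence = src.read(1)

block = 30                                   # 30 m output cells from 1 m pixels
rows = (presence.shape[0] // block) * block
cols = (presence.shape[1] // block) * block
trimmed = presence[:rows, :cols]             # trim edges that do not fill a block

# Reshape to (blocks_y, 30, blocks_x, 30) and average each 30 x 30 block.
cover_fraction = trimmed.reshape(rows // block, block, cols // block, block).mean(axis=(1, 3))
percent_cover = np.rint(cover_fraction * 100).astype(np.uint8)   # 0-100 percent canopy cover
```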
The establishment of a BES Multi-User Geodatabase (BES-MUG) allows for the storage, management, and distribution of geospatial data associated with the Baltimore Ecosystem Study. At present, BES data is distributed over the internet via the BES website. While having geospatial data available for download is a vast improvement over having the data housed at individual research institutions, it still suffers from some limitations. BES-MUG overcomes these limitations, improving the quality of the geospatial data available to BES researchers and thereby leading to more informed decision-making. BES-MUG builds on Environmental Systems Research Institute's (ESRI) ArcGIS and ArcSDE technology. ESRI was selected because its geospatial software offers robust capabilities. ArcGIS is implemented agency-wide within the USDA and is the predominant geospatial software package used by collaborating institutions. Commercially available enterprise database packages (DB2, Oracle, SQL) provide an efficient means to store, manage, and share large datasets. However, standard database capabilities are limited with respect to geographic datasets because they lack the ability to deal with complex spatial relationships. By using ESRI's ArcSDE (Spatial Database Engine) in conjunction with database software, geospatial data can be handled much more effectively through the implementation of the Geodatabase model. Through ArcSDE and the Geodatabase model the database's capabilities are expanded, allowing for multiuser editing, intelligent feature types, and the establishment of rules and relationships. ArcSDE also allows users to connect to the database using ArcGIS software without being burdened by the intricacies of the database itself. For an example of how BES-MUG will help improve the quality and timeliness of BES geospatial data, consider a census block group layer that is in need of updating. Rather than the researcher downloading the dataset, editing it, and resubmitting it through ORS, access rules will allow the authorized user to edit the dataset over the network. Established rules will ensure that attribute and topological integrity is maintained, so that key fields are not left blank and block group boundaries stay within tract boundaries. Metadata will automatically be updated to show who edited the dataset and when, in the event any questions arise. Currently, a functioning prototype Multi-User Database has been developed for BES at the University of Vermont Spatial Analysis Lab, using ArcSDE and IBM's DB2 Enterprise Database as a back-end architecture. This database, which is currently only accessible to those on the UVM campus network, will shortly be migrated to a Linux server where it will be accessible for database connections over the Internet. Passwords can then be handed out to all interested researchers on the project, who will be able to make a database connection through the Geographic Information Systems software interface on their desktop computer. This database will include a very large number of thematic layers. Those layers are currently divided into biophysical, socio-economic and imagery categories. Biophysical includes data on topography, soils, forest cover, habitat areas, hydrology and toxics. Socio-economics includes political and administrative boundaries, transportation and infrastructure networks, property data, census data, household survey data, parks, protected areas, land use/land cover, zoning, public health and historic land use change.
Imagery includes a variety of aerial and satellite imagery. See the readme: http://96.56.36.108/geodatabase_SAL/readme.txt See the file listing: http://96.56.36.108/geodatabase_SAL/diroutput.txt
This dataset contains the combination of geology data (geologic units, faults, folds, and dikes) from six 1:100,000-scale digital coverages in eastern Washington (Chewelah, Colville, Omak, Oroville, Nespelem, Republic). The data was converted to an Arc grid in ArcView using the Spatial Analyst extension.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The present dataset provides necessary indicators of the climate change vulnerability of Bangladesh in raster form. Geospatial databases have been created in a Geographic Information System (GIS) environment mainly from two types of raw data: socioeconomic data from the Bangladesh Bureau of Statistics (BBS) and biophysical maps from various government and non-government agencies. Socioeconomic data have been transformed into a raster database through the Inverse Distance Weighted (IDW) interpolation method in GIS. On the other hand, biophysical maps have been directly recreated as GIS feature classes and, eventually, the biophysical raster database has been produced. Thirty socioeconomic indicators, obtained from the BBS, have been considered. All socioeconomic data were incorporated into the GIS database to generate maps; however, the units of some variables have been adopted directly from BBS, some have been normalized based on population, and some have been adopted as percentages. Twelve biophysical system indicators have also been classified based on the collected information from different sources and literature. Biophysical maps are mainly classified on relative scales according to intensity. These geospatial datasets have been analyzed to assess the spatial vulnerability of Bangladesh to climate change and extremes. The analysis has resulted in a climate change vulnerability map of Bangladesh with recognized hotspots, significant vulnerability factors, and adaptation measures to reduce the level of vulnerability.
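A minimal sketch of the IDW step named above, assuming scattered indicator values at district centroids are interpolated onto a regular grid; the point coordinates, values, grid, and power parameter below are illustrative, not those used for the Bangladesh dataset.

```python
import numpy as np

def idw(points, values, grid, power=2.0):
    """Inverse-distance-weighted interpolation of scattered values onto grid locations."""
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                 # avoid division by zero at sample points
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Toy example: an indicator value at five district centroids (lon, lat).
points = np.array([[90.4, 23.8], [91.8, 22.3], [89.5, 24.4], [88.6, 25.7], [92.0, 24.9]])
values = np.array([12.0, 30.0, 18.0, 25.0, 9.0])

lon, lat = np.meshgrid(np.linspace(88.0, 92.7, 100), np.linspace(20.7, 26.6, 120))
grid = np.column_stack([lon.ravel(), lat.ravel()])
surface = idw(points, values, grid).reshape(lat.shape)   # raster-like indicator surface
```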
Xverum’s Point of Interest (POI) Data is a comprehensive dataset containing 230M+ verified locations across 5000 business categories. Our dataset delivers structured geographic data, business attributes, location intelligence, and mapping insights, making it an essential tool for GIS applications, market research, urban planning, and competitive analysis.
With regular updates and continuous POI discovery, Xverum ensures accurate, up-to-date information on businesses, landmarks, retail stores, and more. Delivered in bulk to S3 Bucket and cloud storage, our dataset integrates seamlessly into mapping, geographic information systems, and analytics platforms.
🔥 Key Features:
Extensive POI Coverage: ✅ 230M+ Points of Interest worldwide, covering 5000 business categories. ✅ Includes retail stores, restaurants, corporate offices, landmarks, and service providers.
Geographic & Location Intelligence Data: ✅ Latitude & longitude coordinates for mapping and navigation applications. ✅ Geographic classification, including country, state, city, and postal code. ✅ Business status tracking – Open, temporarily closed, or permanently closed.
Continuous Discovery & Regular Updates: ✅ New POIs continuously added through discovery processes. ✅ Regular updates ensure data accuracy, reflecting new openings and closures.
Rich Business Insights: ✅ Detailed business attributes, including company name, category, and subcategories. ✅ Contact details, including phone number and website (if available). ✅ Consumer review insights, including rating distribution and total number of reviews (additional feature). ✅ Operating hours where available.
Ideal for Mapping & Location Analytics: ✅ Supports geospatial analysis & GIS applications. ✅ Enhances mapping & navigation solutions with structured POI data. ✅ Provides location intelligence for site selection & business expansion strategies.
Bulk Data Delivery (NO API): ✅ Delivered in bulk via S3 Bucket or cloud storage. ✅ Available in structured format (.json) for seamless integration.
🏆Primary Use Cases:
Mapping & Geographic Analysis: 🔹 Power GIS platforms & navigation systems with precise POI data. 🔹 Enhance digital maps with accurate business locations & categories.
Retail Expansion & Market Research: 🔹 Identify key business locations & competitors for market analysis. 🔹 Assess brand presence across different industries & geographies.
Business Intelligence & Competitive Analysis: 🔹 Benchmark competitor locations & regional business density. 🔹 Analyze market trends through POI growth & closure tracking.
Smart City & Urban Planning: 🔹 Support public infrastructure projects with accurate POI data. 🔹 Improve accessibility & zoning decisions for government & businesses.
💡 Why Choose Xverum’s POI Data?
Access Xverum’s 230M+ POI dataset for mapping, geographic analysis, and location intelligence. Request a free sample or contact us to customize your dataset today!
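Because the data is delivered as bulk structured JSON rather than through an API, a typical first step is to stream the file and filter records locally. The Python sketch below assumes newline-delimited JSON and field names such as name, category, latitude, longitude, and business_status; the actual Xverum schema and file layout may differ, so treat these names as placeholders.

```python
import json

# Collect open restaurants from a hypothetical newline-delimited JSON delivery file.
open_restaurants = []
with open("xverum_poi_sample.json", encoding="utf-8") as f:
    for line in f:
        poi = json.loads(line)                       # one POI record per line (assumed)
        if poi.get("category") == "restaurant" and poi.get("business_status") == "open":
            open_restaurants.append((poi.get("name"), poi.get("latitude"), poi.get("longitude")))

print(f"{len(open_restaurants)} open restaurants in the sample")
```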
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset and the validation are fully described in a Nature Scientific Data Descriptor https://www.nature.com/articles/s41597-019-0265-5
If you want to use this dataset in an interactive environment, then use this link https://mybinder.org/v2/gh/GeographerAtLarge/TravelTime/HEAD
The following text is a summary of the information in the above Data Descriptor.
The dataset is a suite of global travel-time accessibility indicators for the year 2015, at approximately one-kilometre spatial resolution for the entire globe. The indicators show an estimated (and validated) land-based travel time to the nearest city and nearest port for a range of city and port sizes.
The datasets are in GeoTIFF format and are suitable for use in Geographic Information Systems and statistical packages for mapping access to cities and ports and for spatial and statistical analysis of the inequalities in access by different segments of the population.
These maps represent a unique global representation of physical access to essential services offered by cities and ports.
The datasets:
travel_time_to_cities_x.tif (where x has values from 1 to 12): The value of each pixel is the estimated travel time in minutes to the nearest urban area in 2015. There are 12 data layers based on different sets of urban areas, defined by their population in year 2015 (see PDF report).
travel_time_to_ports_x (x ranges from 1 to 5): The value of each pixel is the estimated travel time to the nearest port in 2015. There are 5 data layers based on different port sizes.
Format: Raster Dataset, GeoTIFF, LZW compressed
Unit: Minutes
Data type: 16-bit unsigned integer
No data value: 65535
Flags: None
Spatial resolution: 30 arc seconds
Spatial extent: Upper left -180, 85; Lower left -180, -60; Upper right 180, 85; Lower right 180, -60
Spatial Reference System (SRS): EPSG:4326 - WGS84 - Geographic Coordinate System (lat/long)
Temporal resolution: 2015
Temporal extent: Updates may follow for future years, but these are dependent on the availability of updated inputs on travel times and city locations and populations.
Methodology Travel time to the nearest city or port was estimated using an accumulated cost function (accCost) in the gdistance R package (van Etten, 2018). This function requires two input datasets: (i) a set of locations to estimate travel time to and (ii) a transition matrix that represents the cost or time to travel across a surface.
The set of locations were based on populated urban areas in the 2016 version of the Joint Research Centre’s Global Human Settlement Layers (GHSL) datasets (Pesaresi and Freire, 2016) that represent low density (LDC) urban clusters and high density (HDC) urban areas (https://ghsl.jrc.ec.europa.eu/datasets.php). These urban areas were represented by points, spaced at 1km distance around the perimeter of each urban area.
Marine ports were extracted from the 26th edition of the World Port Index (NGA, 2017), which contains the location and physical characteristics of approximately 3,700 major ports and terminals. Ports are represented as single points.
The transition matrix was based on the friction surface (https://map.ox.ac.uk/research-project/accessibility_to_cities) from the 2015 global accessibility map (Weiss et al, 2018).
Code The R code used to generate the 12 travel time maps is included in the zip file that can be downloaded with these data layers. The processing zones are also available.
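The original workflow uses gdistance::accCost in R; as an illustration of the same accumulated-cost idea in Python, the sketch below applies scikit-image's minimum-cost-path machinery to a toy friction surface. It is an analogy for understanding the method, not the code that produced these layers.

```python
import numpy as np
from skimage.graph import MCP_Geometric

# Toy friction surface: minutes required to cross each cell.
friction = np.full((200, 200), 2.0)
friction[80:120, :] = 10.0                 # a slow band, e.g. rough terrain

targets = [(10, 10), (150, 180)]           # cells standing in for city/port locations

mcp = MCP_Geometric(friction)
travel_time, _ = mcp.find_costs(starts=targets)
# travel_time[r, c] holds the least accumulated cost (minutes) from cell (r, c)
# to its nearest target, which is the same quantity each accessibility layer stores.
```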
Validation The underlying friction surface was validated by comparing travel times between 47,893 pairs of locations against journey times from a Google API. Our estimated journey times were generally shorter than those from the Google API. Across the tiles, the median journey time from our estimates was 88 minutes within an interquartile range of 48 to 143 minutes while the median journey time estimated by the Google API was 106 minutes within an interquartile range of 61 to 167 minutes. Across all tiles, the differences were skewed to the left and our travel time estimates were shorter than those reported by the Google API in 72% of the tiles. The median difference was −13.7 minutes within an interquartile range of −35.5 to 2.0 minutes while the absolute difference was 30 minutes or less for 60% of the tiles and 60 minutes or less for 80% of the tiles. The median percentage difference was −16.9% within an interquartile range of −30.6% to 2.7% while the absolute percentage difference was 20% or less in 43% of the tiles and 40% or less in 80% of the tiles.
This process and results are included in the validation zip file.
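A short sketch of the kind of comparison summarized above, assuming a hypothetical per-tile table with one column of modelled journey times and one of Google API journey times; the file and column names are placeholders.

```python
import pandas as pd

df = pd.read_csv("validation_tiles.csv")            # hypothetical file of per-tile results
diff = df["model_minutes"] - df["google_minutes"]   # hypothetical column names

print("median difference (min):", diff.median())
print("interquartile range:", diff.quantile(0.25), "to", diff.quantile(0.75))
print("share of tiles within 30 min:", (diff.abs() <= 30).mean())
print("share of tiles within 60 min:", (diff.abs() <= 60).mean())
```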
Usage Notes The accessibility layers can be visualised and analysed in many Geographic Information Systems or remote sensing software such as QGIS, GRASS, ENVI, ERDAS or ArcMap, and also by statistical and modelling packages such as R or MATLAB. They can also be used in cloud-based tools for geospatial analysis such as Google Earth Engine.
The nine layers represent travel times to human settlements of different population ranges. Two or more layers can be combined into one layer by recording the minimum pixel value across the layers. For example, a map of travel time to the nearest settlement of 5,000 to 50,000 people could be generated by taking the minimum of the three layers that represent the travel time to settlements with populations between 5,000 and 10,000, 10,000 and 20,000, and 20,000 and 50,000 people.
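A direct illustration of that combination rule in Python: read the relevant layers and take the per-pixel minimum. The layer indices chosen here are assumed to correspond to the 5,000-10,000, 10,000-20,000 and 20,000-50,000 classes; check the PDF report for the actual mapping.

```python
import numpy as np
import rasterio

layers = []
for i in (4, 5, 6):   # assumed indices for the 5k-10k, 10k-20k and 20k-50k layers
    with rasterio.open(f"travel_time_to_cities_{i}.tif") as src:
        layers.append(src.read(1))

# Per-pixel minimum; the 65535 no-data value survives only where all layers are no-data.
combined = np.minimum.reduce(layers)
```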
The accessibility layers also permit user-defined hierarchies that go beyond computing the minimum pixel value across layers. A user-defined complete hierarchy can be generated when the union of all categories adds up to the global population, and the intersection of any two categories is empty. Everything else is up to the user in terms of logical consistency with the problem at hand.
The accessibility layers are relative measures of the ease of access from a given location to the nearest target. While the validation demonstrates that they do correspond to typical journey times, they cannot be taken to represent actual travel times. Errors in the friction surface will be accumulated as part of the accumulated cost function, and it is likely that locations that are further away from targets will have a greater divergence from a plausible travel time than those that are closer to the targets. Care should be taken when referring to travel time to the larger cities when the locations of interest are extremely remote, although they will still be plausible representations of relative accessibility. Furthermore, a key assumption of the model is that all journeys will use the fastest mode of transport and take the shortest path.
RTB Maps is a cloud-based electronic Atlas. We used ArcGIS 10 for Desktop with the Spatial Analyst extension, ArcGIS 10 for Server on-premise, the ArcGIS API for JavaScript, IIS web services based on .NET, and ArcGIS Online, combining data on the cloud with data and applications on our local server to develop an Atlas that brings together many of the map themes related to development of roots, tubers and banana crops. The Atlas is structured to allow our participating scientists to understand the distribution of the crops and observe the spatial distribution of many of the obstacles to production of these crops. The Atlas also includes an application to allow our partners to evaluate the importance of different factors when setting priorities for research and development. The application uses weighted overlay analysis within a multi-criteria decision analysis framework to rate the importance of factors when establishing geographic priorities for research and development. Datasets of crop distribution maps, agroecology maps, biotic and abiotic constraints to crop production, poverty maps and other demographic indicators are used as key inputs to the multi-objective criteria analysis.
Further metadata/references can be found here: http://gisweb.ciat.cgiar.org/RTBmaps/DataAvailability_RTBMaps.html
DISCLAIMER, ACKNOWLEDGMENTS AND PERMISSIONS:
This service is provided by the Roots, Tubers and Bananas CGIAR Research Program as a public service. Use of this service to retrieve information constitutes your awareness and agreement to the following conditions of use. This online resource displays GIS data and query tools subject to continuous updates and adjustments. The GIS data has been taken from various, mostly public, sources and is supplied in good faith.
RTBMaps GIS Data Disclaimer
• The data used to show the Base Maps is supplied by ESRI.
• The data used to show the photos over the map is supplied by Flickr.
• The data used to show the videos over the map is supplied by YouTube.
• The population map is supplied to us by CIESIN, Columbia University and CIAT.
• The Accessibility map is provided by the Global Environment Monitoring Unit - Joint Research Centre of the European Commission. Accessibility maps are made for a specific purpose and they cannot be used as a generic dataset to represent "the accessibility" for a given study area.
• Harvested area and yield for banana, cassava, potato, sweet potato and yam for the year 2000 is provided by EarthStat (University of Minnesota’s Institute on the Environment-Global Landscapes initiative and McGill University’s Land Use and the Global Environment lab). Dataset from Monfreda C., Ramankutty N., and Foley J.A. 2008.
• Agroecology dataset: global edapho-climatic zones for cassava based on mean growing season temperature, number of dry season months, daily temperature range and seasonality. Dataset from CIAT (Carter et al. 1992).
• Demography indicators: Total and Rural Population from the Center for International Earth Science Information Network (CIESIN) and CIAT 2004.
• The FGGD prevalence of stunting map is a global raster datalayer with a resolution of 5 arc-minutes. The percentage of stunted children under five years old is reported according to the lowest available sub-national administrative units: all pixels within the unit boundaries will have the same value. Data have been compiled by FAO from different sources: Demographic and Health Surveys (DHS), UNICEF MICS, WHO Global Database on Child Growth and Malnutrition, and national surveys. Data provided by FAO – GIS Unit 2007.
• Poverty dataset: Global poverty headcount and absolute number of poor. Number of people living on less than $1.25 or $2.00 per day. Dataset from IFPRI and CIAT.
THE RTBMAPS GROUP MAKES NO WARRANTIES OR GUARANTEES, EITHER EXPRESSED OR IMPLIED, AS TO THE COMPLETENESS, ACCURACY, OR CORRECTNESS OF THE DATA PORTRAYED IN THIS PRODUCT, NOR ACCEPTS ANY LIABILITY ARISING FROM ANY INCORRECT, INCOMPLETE OR MISLEADING INFORMATION CONTAINED THEREIN. ALL INFORMATION, DATA AND DATABASES ARE PROVIDED "AS IS" WITH NO WARRANTY, EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO, FITNESS FOR A PARTICULAR PURPOSE. By accessing this website and/or data contained within the databases, you hereby release the RTB group and CG Centers, its employees, agents, contractors, sponsors and suppliers from any and all responsibility and liability associated with its use. In no event shall the RTB Group or its officers or employees be liable for any damages arising in any way out of the use of the website, or use of the information contained in the databases herein including, but not limited to, the RTBMaps online Atlas product.
APPLICATION DEVELOPMENT:
• Desktop and web development - Ernesto Giron E. (GeoSpatial Consultant) e.giron.e@gmail.com
• GIS Analyst - Elizabeth Barona (Independent Consultant) barona.elizabeth@gmail.com
Collaborators: Glenn Hyman, Bernardo Creamer, Jesus David Hoyos, Diana Carolina Giraldo, Soroush Parsa, Jagath Shanthalal, Herlin Rodolfo Espinosa, Carlos Navarro, Jorge Cardona and Beatriz Vanessa Herrera at CIAT, Tunrayo Alabi and Joseph Rusike from IITA, Guy Hareau, Reinhard Simon, Henry Juarez, Ulrich Kleinwechter, Greg Forbes, Adam Sparks from CIP, and David Brown and Charles Staver from Bioversity International.
Please note these services may be unavailable at times due to maintenance work. Please feel free to contact us with any questions or problems you may be having with RTBMaps.
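To make the weighted-overlay idea concrete, here is a minimal Python sketch in which several normalized criterion rasters are combined with user-chosen weights into a single priority score; the criterion names, weights, and data are illustrative only, not those used in RTBMaps.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (300, 300)

# Criterion rasters normalized to 0-1 (illustrative random data).
criteria = {
    "crop_suitability": rng.random(shape),
    "poverty_headcount": rng.random(shape),
    "market_access": rng.random(shape),
}
weights = {"crop_suitability": 0.5, "poverty_headcount": 0.3, "market_access": 0.2}

# Weighted overlay: a single priority score per cell.
priority = sum(weights[name] * layer for name, layer in criteria.items())
# Cells with the highest scores are candidate priority areas for research and development.
```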
MIT Licensehttps://opensource.org/licenses/MIT
License information was derived automatically
ArcGIS Map Packages and GIS Data for Gillreath-Brown, Nagaoka, and Wolverton (2019)
When using the GIS data included in these map packages, please cite all of the following:
Gillreath-Brown, Andrew, Lisa Nagaoka, and Steve Wolverton. A Geospatial Method for Estimating Soil Moisture Variability in Prehistoric Agricultural Landscapes, 2019. PLoS ONE 14(8): e0220457. http://doi.org/10.1371/journal.pone.0220457
Gillreath-Brown, Andrew, Lisa Nagaoka, and Steve Wolverton. ArcGIS Map Packages for: A Geospatial Method for Estimating Soil Moisture Variability in Prehistoric Agricultural Landscapes, Gillreath-Brown et al., 2019. Version 1. Zenodo. https://doi.org/10.5281/zenodo.2572018
OVERVIEW OF CONTENTS
This repository contains map packages for Gillreath-Brown, Nagaoka, and Wolverton (2019), as well as the raw digital elevation model (DEM) and soils data on which the analyses were based. The map packages contain all GIS data associated with the analyses described and presented in the publication. The map packages were created in ArcGIS 10.2.2; however, the packages will work in recent versions of ArcGIS. (Note: I was able to open the packages in ArcGIS 10.6.1, when tested on February 17, 2019). The primary files contained in this repository are:
For additional information on the contents of the map packages, please see "Map Packages Descriptions" or open a map package in ArcGIS and go to "properties" or "map document properties."
LICENSES
Code: MIT year: 2019
Copyright holders: Andrew Gillreath-Brown, Lisa Nagaoka, and Steve Wolverton
CONTACT
Andrew Gillreath-Brown, PhD Candidate, RPA
Department of Anthropology, Washington State University
andrew.brown1234@gmail.com – Email
andrewgillreathbrown.wordpress.com – Web
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper performs, describes, and evaluates a comparison of seven software tools (ArcGIS Pro, GRASS GIS, SAGA GIS, CitySim, Ladybug, SimStadt and UMEP) to calculate solar irradiation. The analysis focuses on data requirements, software usability, and the accuracy of the simulation output. The use case for the comparison is solar irradiation on building surfaces, in particular on roofs. The research involves collecting and preparing spatial and weather data. Two test areas - the Santana district in São Paulo, Brazil, and the Heino rural area in Raalte, the Netherlands - were selected. In both cases, the study area encompasses the vicinity of a weather station; therefore, the meteorological data from these stations serve as ground truth for the validation of the simulation results. We create several models (raster and vector) to meet the diverse input requirements. We present our findings and discuss the output from the software tools from both quantitative and qualitative points of view. Vector-based simulation models offer better results than raster-based ones; however, they have more complex data requirements. Future research will focus on evaluating the quality of the simulation results on vertical and tilted surfaces, as well as the calculation of direct and diffuse solar irradiation values for vector-based methods.
The Digital Geomorphic-GIS Map of Gulf Islands National Seashore (5-meter accuracy and 1-foot resolution 2006-2007 mapping), Mississippi and Florida is composed of GIS data layers and GIS tables, and is available in the following GRI-supported GIS data formats: 1.) a 10.1 file geodatabase (guis_geomorphology.gdb), 2.) an Open Geospatial Consortium (OGC) geopackage, and 3.) a 2.2 KMZ/KML file for use in Google Earth; however, this format version of the map is limited in the data layers presented and in access to GRI ancillary table information. The file geodatabase format is supported with 1.) an ArcGIS Pro map file (.mapx) (guis_geomorphology.mapx) and individual Pro layer (.lyrx) files (for each GIS data layer), as well as with 2.) a 10.1 ArcMap (.mxd) map document (guis_geomorphology.mxd) and individual 10.1 layer (.lyr) files (for each GIS data layer). The OGC geopackage is supported with a QGIS project (.qgz) file. Upon request, the GIS data is also available in ESRI 10.1 shapefile format. Contact Stephanie O'Meara (see contact information below) to acquire the GIS data in these GIS data formats. In addition to the GIS data and supporting GIS files, three additional files comprise a GRI digital geologic-GIS dataset or map: 1.) a GIS readme file (guis_geology_gis_readme.pdf), 2.) the GRI ancillary map information document (.pdf) file (guis_geomorphology.pdf), which contains geologic unit descriptions, as well as other ancillary map information and graphics from the source map(s) used by the GRI in the production of the GRI digital geologic-GIS data for the park, and 3.) a user-friendly FAQ PDF version of the metadata (guis_geomorphology_metadata_faq.pdf). Please read the guis_geology_gis_readme.pdf for information pertaining to the proper extraction of the GIS data and other map files. Google Earth software is available for free at: https://www.google.com/earth/versions/. QGIS software is available for free at: https://www.qgis.org/en/site/. Users are encouraged to only use the Google Earth data for basic visualization, and to use the GIS data for any type of data analysis or investigation. The data were completed as a component of the Geologic Resources Inventory (GRI) program, a National Park Service (NPS) Inventory and Monitoring (I&M) Division funded program that is administered by the NPS Geologic Resources Division (GRD). For a complete listing of GRI products visit the GRI publications webpage: https://www.nps.gov/subjects/geology/geologic-resources-inventory-products.htm. For more information about the Geologic Resources Inventory Program visit the GRI webpage: https://www.nps.gov/subjects/geology/gri.htm. At the bottom of that webpage is a "Contact Us" link if you need additional information. You may also directly contact the program coordinator, Jason Kenworthy (jason_kenworthy@nps.gov). Source geologic maps and data used to complete this GRI digital dataset were provided by the following: U.S. Geological Survey. Detailed information concerning the sources used and their contribution to the GRI product are listed in the Source Citation section(s) of this metadata record (guis_geomorphology_metadata.txt or guis_geomorphology_metadata_faq.pdf). Users of this data are cautioned about the locational accuracy of features within this dataset.
Based on the source map scale of 1:26,000 and United States National Map Accuracy Standards, features are within (horizontally) 13.2 meters or 43.3 feet of their actual location as presented by this dataset. Users of this data should thus not assume the location of features is exactly where they are portrayed in Google Earth, ArcGIS, QGIS or other software used to display this dataset. All GIS and ancillary tables were produced as per the NPS GRI Geology-GIS Geodatabase Data Model v. 2.3 (available at: https://www.nps.gov/articles/gri-geodatabase-model.htm).
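The quoted tolerance follows directly from the map scale, assuming the National Map Accuracy Standards allowance of 1/50 inch at publication scale for maps at 1:20,000 and smaller scales; the short calculation below reproduces the 43.3 ft / 13.2 m figures.

```python
# National Map Accuracy Standards allowance of 1/50 inch at scales of 1:20,000 and smaller.
scale = 26_000
tolerance_inches = scale / 50              # ground distance covered by 1/50 inch on the map
tolerance_feet = tolerance_inches / 12
tolerance_meters = tolerance_feet * 0.3048
print(round(tolerance_feet, 1), "ft /", round(tolerance_meters, 1), "m")   # 43.3 ft / 13.2 m
```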
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset used for the research presented in the following paper: Takayuki Hiraoka, Takashi Kirimura, Naoya Fujiwara (2024) "Geospatial analysis of toponyms in geo-tagged social media posts".
We collected georeferenced Twitter posts tagged to coordinates inside the bounding box of Japan between 2012 and 2018. The present dataset represents the spatial distributions of all geotagged posts, as well as of posts whose text contains each of 24 domestic toponyms, 12 common nouns, and 6 foreign toponyms. The code used to analyze the data is available on GitHub.
preprocessed_mcntlt7_selected/ : Number of geotagged Twitter posts in each grid cell. Each csv file under this directory associates each grid cell (spanning 30 seconds of latitude and 45 seconds of longitude, which is approximately a 1 km x 1 km square, specified by an 8-digit code m3code) with the number of geotagged tweets tagged to the coordinates inside that cell (tweetcount).
file_names.json : relates each of the toponyms studied in this work to the corresponding data file (all denotes the full data). Note that these data files are modified from v1.0.0 to exclude posts that contain seven or more mentions.
population/population_center_2020.xlsx : Center of population of each municipality based on the 2020 census. Derived from data published by the Statistics Bureau of Japan on their website (Japanese).
population/census2015mesh3_totalpop_setai_area.csv : Resident population in each grid cell based on the 2015 census. Derived from data published by the Statistics Bureau of Japan on e-stat (Japanese).
population/economiccensus2016mesh3_jigyosyo_jugyosya_area.csv : Employed population in each grid cell based on the 2016 Economic Census. Derived from data published by the Statistics Bureau of Japan on e-stat (Japanese).
japan_MetropolitanEmploymentArea2015map/ : Shapefile for the boundaries of Metropolitan Employment Areas (MEA) in Japan. See this website for details of MEA.
ward_shapefiles/ : Shapefiles for the boundaries of wards in large cities, published by the Statistics Bureau of Japan on e-stat.
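As a usage sketch, the per-cell tweet counts can be joined to the per-cell census population on the shared mesh code. In the Python example below, the tweet-count column names (m3code, tweetcount) follow the description above, while the file name for the full data and the population file's column names are assumptions.

```python
import pandas as pd

tweets = pd.read_csv("preprocessed_mcntlt7_selected/all.csv")            # file name assumed
pop = pd.read_csv("population/census2015mesh3_totalpop_setai_area.csv")  # column names assumed below

merged = tweets.merge(pop, left_on="m3code", right_on="mesh_code", how="inner")
merged["tweets_per_1000_residents"] = 1000 * merged["tweetcount"] / merged["total_population"]
print(merged[["m3code", "tweetcount", "tweets_per_1000_residents"]].head())
```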
The Namoi Impact and Risk Analysis Database (Analysis Database) is a fit-for-purpose geospatial information system developed for the Impact and Risk Analysis (Component 3-4) products of the Bioregional Assessment Technical Programme (BATP). The Analysis Database brings together many of the data sets of the scientific disciplines of the Programme and includes modelling results from hydrogeology and hydrology, landscape classes and economic, sociocultural and ecological assets. These data sets are listed in the Data Register for each subregion and can be found on the Bioregional Assessments web site (http://www.bioregionalassessments.gov.au/).
An Analysis Database of common design and schema was implemented for each individual subregion where a full Impact and Risk Analysis was completed. To populate each database, input datasets were transformed, normalised and inserted into their respective Analysis Database in accord with the common design and schema. The approach enabled the universal treatment of data analysis across all bioregions despite data being of a different specification and origin.
The Analysis Database provided for this subregion is an exact replica of the original used for the assessment of the subregion with the exception that a few spatial data for individual Assets subject to restrictions have been removed before publication. The restrictions are typically for threatened species spatial data but occasionally, restrictive licencing conditions imposed by some custodians prevented publication of some data. The database is constructed using the Open Source platform PostgreSQL coupled with PostGIS. This technology was considered to better enable the provenance and transparency requirements of the Programme. The files provided here have been prepared using the PostgreSQL version 9.5 SQL Dump function - pg_dump.
A detailed description of the Analysis Database, its design, structure and application is provided in the supporting documentation: http://data.bioregionalassessments.gov.au/dataset/05e851cf-57a5-4127-948a-1b41732d538c
The Namoi Impact and Risk Analysis Database (Analysis Database) is the geospatial database for completing the Impact and Risk Analysis component of a Bioregional Assessment. This includes the creation of results, tables and maps that appear in the relevant Products of each assessment. The database also manages the data used by the BA Explorer.
An individual instance of the Analysis Database was developed for each subregion where a component 3-4 Impact and Risks Assessment was conducted. With the exception of the subregion-specific data contained within it and the removal of restricted data records, each analysis database is of identical design and structure.
This Analysis Database is an instance of PostgreSQL version 9.5 hosted on Red Hat Enterprise Linux version 4.8.5-4. PostgreSQL geospatial capabilities are provided by PostGIS version 2.2.
Data pre-processing and upload into each PostgreSQL database was completed using FME Desktop (Oracle Edition) version 2016.1.2.1. Analysis data and results are provided to users and systems via the geospatial services of Geoserver version 2.9.1. Scientific analysis and mapping was undertaken by connecting a range of data using a combination of Microsoft Excel, QGIS and ArcMap systems.
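Once the pg_dump files have been restored into a local PostgreSQL/PostGIS instance, the database can be queried from Python as in the sketch below; the connection parameters and the table and column names are placeholders, so consult the supporting documentation for the actual schema.

```python
import psycopg2

# Placeholder connection details for a locally restored copy of the database.
conn = psycopg2.connect(host="localhost", dbname="namoi_analysis", user="ba_user", password="...")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT asset_id, ST_AsText(ST_Centroid(geom))   -- hypothetical table and columns
        FROM assets
        LIMIT 5;
        """
    )
    for row in cur.fetchall():
        print(row)
```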
During the Programme and for its working life, the Analysis Database was hosted and managed on instances of Amazon Web Services managed by Geoscience Australia and the Bureau of Meteorology.
Bioregional Assessment Programme (2018) NAM Impact and Risk Analysis Database v01. Bioregional Assessment Derived Dataset. Viewed 11 December 2018, http://data.bioregionalassessments.gov.au/dataset/1549c88d-927b-4cb5-b531-1d584d59be58.
Derived From River Styles Spatial Layer for New South Wales
Derived From Geofabric Surface Network - V2.1
Derived From Surface Geology of Australia, 1:1 000 000 scale, 2012 edition
Derived From HUN SW footprint shapefiles v01
Derived From HUN Groundwater footprint polygons v01
Derived From Namoi Environmental Impact Statements - Mine footprints
Derived From Namoi CMA Groundwater Dependent Ecosystems
Derived From Landscape classification of the Namoi preliminary assessment extent
Derived From Environmental Asset Database - Commonwealth Environmental Water Office
Derived From Soil and Landscape Grid National Soil Attribute Maps - Clay 3 resolution - Release 1
Derived From GEODATA TOPO 250K Series 3, File Geodatabase format (.gdb)
Derived From Bioregional_Assessment_Programme_Catchment Scale Land Use of Australia - 2014
Derived From Interim Biogeographic Regionalisation for Australia (IBRA), Version 7 (Regions)
Derived From Key Environmental Assets - KEA - of the Murray Darling Basin
Derived From Bioregional Assessment areas v03
Derived From GIS analysis of HYDMEAS - Hydstra Groundwater Measurement Update: NSW Office of Water - Nov2013
Derived From BA ALL Assessment Units 1000m 'super set' 20160516_v01
Derived From Mean Annual Climate Data of Australia 1981 to 2012
Derived From Asset list for Namoi - CURRENT
Derived From Bioregional Assessment areas v01
Derived From Bioregional Assessment areas v02
Derived From Namoi bore locations, depth to water for June 2012
Derived From Victoria - Seamless Geology 2014
Derived From Murray-Darling Basin Aquatic Ecosystem Classification
Derived From HUN SW GW Mine Footprints for IMIA 20170303 v03
Derived From Climate model 0.05x0.05 cells and cell centroids
Derived From Namoi hydraulic conductivity measurements
Derived From Namoi groundwater uncertainty analysis
Derived From Historical Mining footprints DTIRIS HUN 20150707
Derived From Namoi NGIS Bore analysis for 2012
Derived From Australian 0.05º gridded chloride deposition v2
Derived From Communities of National Environmental Significance Database - RESTRICTED - Metadata only
Derived From Bioregional Assessment areas v06
Derived From NAM Analysis Boundaries 20160908 v01
Derived From Namoi groundwater drawdown grids
Derived From National Groundwater Dependent Ecosystems (GDE) Atlas (including WA)
Derived From NSW Catchment Management Authority Boundaries 20130917
Derived From BOM, Australian Average Rainfall Data from 1961 to 1990
Derived From Namoi Existing Mine Development Surface Water Footprints
Derived From Surface water Preliminary Assessment Extent (PAE) for the Namoi (NAM) subregion - v03
Derived From BILO Gridded Climate Data: Daily Climate Data for each year from 1900 to 2012
Derived From National Surface Water sites
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘Delta In-Channel Islands’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://catalog.data.gov/dataset/5cd67f4f-ca7d-44b0-881c-741b0d70381a on 26 January 2022.
--- Dataset description provided by original source is as follows ---
Data contains historical polygons of in-channel islands within the Sacramento San Joaquin Delta. Data consists of merged datasets from 1929, 1940, 1949, 1952, 1995, 2002, and 2017. The 2017 polygons are digitized from the 2017 Delta LiDAR imagery by the Division of Engineering, Geomatics Branch, Geospatial Data Support Section. The older pre-2017 polygons were all digitized by staff in the Delta Levees Program. Data can be queried for a single year or date range using the 'Year' field. Historical data was compiled and merged from datasets provided by the Delta Levees program. Data coverage differs between years. Absences or gaps in historical data may occur. Older acquisitions generally have a smaller footprint than recent imagery acquisitions. The 2017 in-channel islands cover the Legal Delta, and also include Chipps Island. The associated data are considered DWR enterprise GIS data, which meet all appropriate requirements of the DWR Spatial Data Standards, specifically the DWR Spatial Data Standard version 3.1, dated September 11, 2019. DWR makes no warranties or guarantees — either expressed or implied — as to the completeness, accuracy, or correctness of the data. DWR neither accepts nor assumes liability arising from or for any incorrect, incomplete, or misleading subject data. Comments, problems, improvements, updates, or suggestions should be forwarded to the official GIS steward as available and appropriate at gis@water.ca.gov.
--- Original source retains full ownership of the source dataset ---
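A small sketch of the year-based query mentioned in the description above, using geopandas on a local export of the layer; the file name is a placeholder.

```python
import geopandas as gpd

islands = gpd.read_file("Delta_In_Channel_Islands.shp")            # placeholder file name
islands_2017 = islands[islands["Year"] == 2017]                    # single year
islands_1929_1952 = islands[islands["Year"].between(1929, 1952)]   # date range
print(len(islands_2017), "polygons digitized from the 2017 LiDAR imagery")
```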
Meet Earth Engine
Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities and makes it available for scientists, researchers, and developers to detect changes, map trends, and quantify differences on the Earth's surface.
GLOBAL-SCALE INSIGHT: Explore our interactive timelapse viewer to travel back in time and see how the world has changed over the past twenty-nine years. Timelapse is one example of how Earth Engine can help gain insight into petabyte-scale datasets.
READY-TO-USE DATASETS: The public data archive includes more than thirty years of historical imagery and scientific datasets, updated and expanded daily. It contains over twenty petabytes of geospatial data instantly available for analysis.
SIMPLE, YET POWERFUL API: The Earth Engine API is available in Python and JavaScript, making it easy to harness the power of Google’s cloud for your own geospatial analysis.
CONVENIENT TOOLS: Use our web-based code editor for fast, interactive algorithm development with instant access to petabytes of data.
SCIENTIFIC AND HUMANITARIAN IMPACT: Scientists and non-profits use Earth Engine for remote sensing research, predicting disease outbreaks, natural resource management, and more.
"Google Earth Engine has made it possible for the first time in history to rapidly and accurately process vast amounts of satellite imagery, identifying where and when tree cover change has occurred at high resolution. Global Forest Watch would not exist without it. For those who care about the future of the planet Google Earth Engine is a great blessing!" - Dr. Andrew Steer, President and CEO of the World Resources Institute.
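A minimal Earth Engine Python API sketch of the kind of analysis described above; it assumes you have already signed up and authenticated, and it uses a public Landsat 8 collection purely as an example.

```python
import ee

ee.Initialize()   # assumes prior sign-up and authentication

collection = (
    ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
    .filterDate("2020-01-01", "2020-12-31")
    .filterBounds(ee.Geometry.Point(-122.4, 37.8))
)
print("scenes found:", collection.size().getInfo())
```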
Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
GRASS GIS database for geospatial mapping and analysis of physiologically based demographic modeling (PBDM) implemented by the Center for the Analysis of Sustainable Agricultural Systems (CASAS, www.casasglobal.org).
The casas_gis_grass8data.zip archive includes data updated for use with GRASS GIS version 8.
The Ontario government generates and maintains thousands of datasets. Since 2012, we have shared data with Ontarians via a data catalogue. Open data is data that is shared with the public. Click here to learn more about open data and why Ontario releases it. Ontario’s Open Data Directive states that all data must be open, unless there is good reason for it to remain confidential. Ontario’s Chief Digital and Data Officer also has the authority to make certain datasets available publicly. Datasets listed in the catalogue that are not open will have one of the following labels: If you want to use data you find in the catalogue, that data must have a licence – a set of rules that describes how you can use it. A licence: Most of the data available in the catalogue is released under Ontario’s Open Government Licence. However, each dataset may be shared with the public under other kinds of licences or no licence at all. If a dataset doesn’t have a licence, you don’t have the right to use the data. If you have questions about how you can use a specific dataset, please contact us. The Ontario Data Catalogue endeavors to publish open data in a machine readable format. For machine readable datasets, you can simply retrieve the file you need using the file URL. The Ontario Data Catalogue is built on CKAN, which means the catalogue has the following features you can use when building applications. APIs (Application programming interfaces) let software applications communicate directly with each other. If you are using the catalogue in a software application, you might want to extract data from the catalogue through the catalogue API. Note: All Datastore API requests to the Ontario Data Catalogue must be made server-side. The catalogue's collection of dataset metadata (and dataset files) is searchable through the CKAN API. The Ontario Data Catalogue has more than just CKAN's documented search fields. You can also search these custom fields. You can also use the CKAN API to retrieve metadata about a particular dataset and check for updated files. Read the complete documentation for CKAN's API. Some of the open data in the Ontario Data Catalogue is available through the Datastore API. You can also search and access the machine-readable open data that is available in the catalogue. How to use the API feature: Read the complete documentation for CKAN's Datastore API. The Ontario Data Catalogue contains a record for each dataset that the Government of Ontario possesses. Some of these datasets will be available to you as open data. Others will not be available to you. This is because the Government of Ontario is unable to share data that would break the law or put someone's safety at risk. You can search for a dataset with a word that might describe a dataset or topic. Use words like “taxes” or “hospital locations” to discover what datasets the catalogue contains. You can search for a dataset from 3 spots on the catalogue: the homepage, the dataset search page, or the menu bar available across the catalogue. On the dataset search page, you can also filter your search results. You can select filters on the left hand side of the page to limit your search for datasets with your favourite file format, datasets that are updated weekly, datasets released by a particular organization, or datasets that are released under a specific licence. Go to the dataset search page to see the filters that are available to make your search easier.
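As a hedged example of the CKAN search API mentioned above, the Python snippet below queries the catalogue's package_search endpoint with the requests library; the base URL is assumed to be Ontario's catalogue host, so adjust it if the catalogue is served from a different address.

```python
import requests

base = "https://data.ontario.ca/api/3/action"      # assumed catalogue host
resp = requests.get(f"{base}/package_search", params={"q": "hospital locations", "rows": 5})
resp.raise_for_status()
for dataset in resp.json()["result"]["results"]:
    print(dataset["title"])
```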
You can also do a quick search by selecting one of the catalogue’s categories on the homepage. These categories can help you see the types of data we have on key topic areas. When you find the dataset you are looking for, click on it to go to the dataset record. Each dataset record will tell you whether the data is available, and, if so, tell you about the data available. An open dataset might contain several data files. These files might represent different periods of time, different sub-sets of the dataset, different regions, language translations, or other breakdowns. You can select a file and either download it or preview it. Make sure to read the licence agreement to make sure you have permission to use it the way you want. Read more about previewing data. A non-open dataset may not be available for many reasons. Read more about non-open data. Read more about restricted data. Data that is non-open may still be subject to freedom of information requests. The catalogue has tools that enable all users to visualize the data in the catalogue without leaving the catalogue – no additional software needed. Have a look at our walk-through of how to make a chart in the catalogue. Get automatic notifications when datasets are updated. You can choose to get notifications for individual datasets, an organization’s datasets or the full catalogue. You don’t have to provide any personal information – just subscribe to our feeds using any feed reader you like, using the corresponding notification web addresses. Copy those addresses and paste them into your reader. Your feed reader will let you know when the catalogue has been updated. The catalogue provides open data in several file formats (e.g., spreadsheets, geospatial data, etc). Learn about each format and how you can access and use the data each file contains. A file that has a list of items and values separated by commas without formatting (e.g. colours, italics, etc.) or extra visual features. This format provides just the data that you would display in a table. XLSX (Excel) files may be converted to CSV so they can be opened in a text editor. How to access the data: Open with any spreadsheet software application (e.g., Open Office Calc, Microsoft Excel) or text editor. Note: This format is considered machine-readable; it can be easily processed and used by a computer. Files that have visual formatting (e.g. bolded headers and colour-coded rows) can be hard for machines to understand; these elements make a file more human-readable and less machine-readable. A file that provides information without formatted text or extra visual features that may not follow a pattern of separated values like a CSV. How to access the data: Open with any word processor or text editor available on your device (e.g., Microsoft Word, Notepad). A spreadsheet file that may also include charts, graphs, and formatting. How to access the data: Open with a spreadsheet software application that supports this format (e.g., Open Office Calc, Microsoft Excel). Data can be converted to a CSV for a non-proprietary format of the same data without formatted text or extra visual features. A shapefile provides geographic information that can be used to create a map or perform geospatial analysis based on location, points/lines and other data about the shape and features of the area. It includes required files (.shp, .shx, .dbf) and might include corresponding files (e.g., .prj). How to access the data: Open with a geographic information system (GIS) software program (e.g., QGIS).
ZIP: A package of files and folders that can contain any number of different file types. How to access the data: open with an unzipping application (e.g., WinZip, 7-Zip). Note: if a ZIP file contains .shp, .shx, and .dbf file types, it is an ArcGIS ZIP: a package of shapefiles which provides information to create maps or perform geospatial analysis and can be opened with ArcGIS (a geographic information system software program).

GeoJSON: A file that provides information related to a geographic area (e.g., phone number, address, average rainfall, number of owl sightings in 2011) and its geospatial location (i.e., points/lines). How to access the data: open using a GIS software application to create a map or do geospatial analysis; it can also be opened with a text editor to view the raw information (a short parsing sketch follows these format descriptions). Note: this format is machine-readable; it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand.

JSON: A text-based format for sharing data in a machine-readable way that can store data with more unconventional structures, such as complex lists. How to access the data: open with any text editor (e.g., Notepad) or access through a browser. Note: this format is machine-readable; it can be easily processed and used by a computer.

XML: A text-based format to store and organize data in a machine-readable way that can store data with more unconventional structures (not just data organized in tables). How to access the data: open with any text editor (e.g., Notepad). Note: this format is machine-readable; it can be easily processed and used by a computer.

KML: A file that provides information related to an area (e.g., phone number, address, average rainfall, number of owl sightings in 2011) and its geospatial location (i.e., points/lines). How to access the data: open with a geospatial software application that supports the KML format (e.g., Google Earth). Note: this format is machine-readable; it can be easily processed and used by a computer.

Beyond 20/20 table: This format contains files with data from tables used for statistical analysis and data visualization of Statistics Canada census data. How to access the data: open with the Beyond 20/20 application.

MDB (Access database): A database which links and combines data from different files or applications (including HTML, XML, Excel, etc.). The database file can be converted to CSV/TXT to make the data machine-readable, but human-readable formatting will be lost. How to access the data: open with Microsoft Office Access (a database management system used to develop application software).

PDF: A file that keeps the original layout and formatting of a document regardless of the device or software used to view it.
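For the text-based geospatial formats described above, a GeoJSON file is plain JSON text, so it can be inspected with nothing more than Python's standard library. The sketch below uses a placeholder file name; the same json module also handles ordinary JSON files from the catalogue.

```python
# Minimal sketch: inspect a GeoJSON file with Python's standard library.
# "example.geojson" is a placeholder file name.
import json

with open("example.geojson", encoding="utf-8") as fh:
    data = json.load(fh)

# A GeoJSON FeatureCollection stores one dict per feature; print each
# feature's geometry type and its attribute table ("properties").
for feature in data.get("features", []):
    print(feature["geometry"]["type"], feature["properties"])
```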