Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The GDAL/OGR libraries are open-source geospatial libraries that work with a wide range of raster and vector data sources. One of the many impressive features of the GDAL/OGR libraries is the VRT (Virtual) format. It is an XML description of how to transform raster or vector data sources on the fly into a new dataset. The transformations include mosaicking, re-projection, look-up tables (raster), changes of data type (raster), and SQL SELECT commands (vector). VRTs can be used by GDAL/OGR functions and utilities as if they were an original source, even allowing functionality to be chained: for example, a single VRT can mosaic hundreds of VRTs that each use look-up tables to transform original GeoTiff files. We used the VRT format for the presentation of hydrologic model results, allowing thousands of small VRT files representing all components of the monthly water balance to be transformations of a single land cover GeoTiff file.
Presentation at 2018 AWRA Spring Specialty Conference: Geographic Information Systems (GIS) and Water Resources X, Orlando, Florida, April 23-25, http://awra.org/meetings/Orlando2018/
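As an illustration of the on-the-fly nature of VRTs, here is a minimal Python sketch (the file names are hypothetical) that builds a VRT mosaic with the GDAL bindings and opens it like any other raster:

```python
# Minimal sketch: build a VRT mosaic from several GeoTiffs and read it back
# as if it were a single raster. Requires the GDAL Python bindings.
from osgeo import gdal

gdal.UseExceptions()

# Hypothetical input tiles; any set of adjacent or overlapping GeoTiffs works.
tiles = ["tile_a.tif", "tile_b.tif", "tile_c.tif"]

# The VRT is just a small XML description; no pixel data is copied.
vrt = gdal.BuildVRT("mosaic.vrt", tiles)
vrt.FlushCache()  # write mosaic.vrt to disk

# The VRT can now be used anywhere a normal raster is accepted.
ds = gdal.Open("mosaic.vrt")
print(ds.RasterXSize, ds.RasterYSize, ds.RasterCount)
```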
Learn Geographic Mapping with Altair, Vega-Lite and Vega using Curated Datasets
Complete geographic and geophysical data collection for mapping and visualization. This consolidation includes 18 complementary datasets used by 31+ Vega, Vega-Lite, and Altair examples 📊. Perfect for learning geographic visualization techniques including projections, choropleths, point maps, vector fields, and interactive displays.
Source data lives on GitHub and can also be accessed via CDN. The vega-datasets project serves as a common repository for example datasets used across these visualization libraries and related projects.
Geographic features appear as points (like airports.csv), lines (like londonTubeLines.json), and polygons (like us-10m.json), alongside geophysical fields (windvectors.csv, annual-precip.json). This pack includes 18 datasets covering base maps, reference points, statistical data for choropleths, and geophysical data.
| Dataset | File | Size | Format | License | Description | Key Fields / Join Info |
|---|---|---|---|---|---|---|
| US Map (1:10m) | us-10m.json | 627 KB | TopoJSON | CC-BY-4.0 | US state and county boundaries. Contains states and counties objects. Ideal for choropleths. | id (FIPS code) property on geometries |
| World Map (1:110m) | world-110m.json | 117 KB | TopoJSON | CC-BY-4.0 | World country boundaries. Contains countries object. Suitable for world-scale viz. | id property on geometries |
| London Boroughs | londonBoroughs.json | 14 KB | TopoJSON | CC-BY-4.0 | London borough boundaries. | properties.BOROUGHN (name) |
| London Centroids | londonCentroids.json | 2 KB | GeoJSON | CC-BY-4.0 | Center points for London boroughs. | properties.id, properties.name |
| London Tube Lines | londonTubeLines.json | 78 KB | GeoJSON | CC-BY-4.0 | London Underground network lines. | properties.name, properties.color |
| Dataset | File | Size | Format | License | Description | Key Fields / Join Info |
|---|---|---|---|---|---|---|
| US Airports | airports.csv | 205 KB | CSV | Public Domain | US airports with codes and coordinates. | iata, state, `l... |
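A minimal Altair sketch of the kind of map these datasets support, assuming the vega_datasets Python package is installed; it layers the airports.csv points over the us-10m.json state boundaries:

```python
import altair as alt
from vega_datasets import data

# TopoJSON base map (us-10m.json) and the airports point table (airports.csv)
states = alt.topo_feature(data.us_10m.url, feature="states")
airports = data.airports()

background = (
    alt.Chart(states)
    .mark_geoshape(fill="lightgray", stroke="white")
    .project("albersUsa")
    .properties(width=600, height=400)
)
points = (
    alt.Chart(airports)
    .mark_circle(size=10)
    .encode(longitude="longitude:Q", latitude="latitude:Q", tooltip="iata:N")
)

(background + points).save("us_airports.html")
```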
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This feature contains vector lines representing the shoreline and coastal habitats of California. Line segments are classified according to the Environmental Sensitivity Index (ESI) classification system and are a compilation of the ESI data from the most recent ESI atlas publications. The ESI data includes information for three main components: shoreline habitats, sensitive biological resources, and human-use resources. This California dataset contains only the ESI shoreline data layer and is a merged set of individual ESI data sets covering the entire California coast. For many parts of the California shoreline, the NOAA-ESI database lists several shoreline types present at a given location, described from landward to seaward. A simplified singular classification [Map_Class] was created to generalize the most dominant features of the multiple shore type attributes present in the raw data. More information can be found at the source citation at ESI Guidelines | response.restoration.noaa.gov
Attributes:
- Line: Type of geographic feature (H: Hydrography, P: Pier, S: Shoreline)
- Most_sensitive: If multiple shoreline types appear in the ESI classification, this field represents the highest value (most sensitive type); otherwise it is the same value as the ESI field (see the parsing sketch after this list).
- Shore_code: The ESI shoreline type. In many cases shorelines are ranked with multiple codes, such as "6B/3A" (listed landward to seaward).
- Source: Original year of ESI data.
- Esi_description: Concatenation of shore type descriptions (listed landward to seaward).
- Shoretype_1: Numeric classification for the first (most landward) ESI type.
- Shoretype_1_name: Physical description for the first ESI type.
- Shoretype_2: Numeric classification for the second ESI type.
- Shoretype_2_name: Physical description for the second ESI type.
- Shoretype_3: Numeric classification for the third (most seaward) ESI type.
- Shoretype_3_name: Physical description for the third ESI type.
- Map_class: Generalized ESI shoreline type for simplified sym
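A small illustrative sketch (not the agency's production code) of how a multi-part shore code such as "6B/3A" can be split into the Shoretype_N values and the Most_sensitive field described above:

```python
import re

def parse_shore_code(shore_code: str):
    """Split a landward-to-seaward ESI code like '6B/3A' into numeric shore types."""
    parts = [p.strip() for p in shore_code.split("/") if p.strip()]
    # Numeric portion of each ESI code, e.g. "6B" -> 6, "3A" -> 3
    numbers = [int(re.match(r"\d+", p).group()) for p in parts]
    return {
        "shoretypes": numbers,           # landward -> seaward (Shoretype_1, _2, _3)
        "most_sensitive": max(numbers),  # highest ESI value = most sensitive type
    }

print(parse_shore_code("6B/3A"))  # {'shoretypes': [6, 3], 'most_sensitive': 6}
```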
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In post-tonal theory, set classes are normally elements of Z12 and are characterized by their interval-class vector. Those that are not inversionally symmetrical can be split into two set types related by inversion, which can be characterized by their trichord-type vector. In this paper, I consider the general case of set classes and types in Zn and their m-class and m-type vectors, m ranging from 0 to n, which are properly grouped into matrices. In addition, three relevant cases are considered: Z6 (hexachords), Z7 (heptatonic scales), and Z12 (chromatic scale), where all those type and class matrices are computed and provided in supplementary files, and, in the first two cases, also in the form of tables. This completes the corresponding information given in previous publications on this subject and can directly be used by researchers and composers. Moreover, two computer programs, written in MATLAB, are provided for obtaining the above-mentioned and other related matrices in the general case of Zn. Additionally, several theorems on type and class matrices are provided, including a complete version of the hexachord theorem. These theorems allow us to obtain the type and class matrices by different procedures, thus providing a broader perspective and better understanding of the theory.
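For readers new to the terminology, here is a short sketch of the interval-class vector that characterizes a set class in Z12; this is standard textbook material, not the paper's MATLAB code:

```python
# Interval-class vector of a pitch-class set in Z_n: a tally of interval
# classes 1..n//2 over all unordered pairs of pitch classes.
from itertools import combinations

def interval_class_vector(pc_set, n=12):
    vec = [0] * (n // 2)
    for a, b in combinations(sorted(set(pc_set)), 2):
        d = (b - a) % n
        ic = min(d, n - d)      # interval class in Z_n
        vec[ic - 1] += 1
    return vec

# C major triad {0, 4, 7} -> [0, 0, 1, 1, 1, 0]
print(interval_class_vector([0, 4, 7]))
```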
CC0 1.0 Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
Microsoft released a U.S.-wide vector building dataset in 2018. Although the vector building layers provide relatively accurate geometries, their use in large-extent geospatial analysis comes at a high computational cost. We used High-Performance Computing (HPC) to develop an algorithm that calculates six summary values for each cell in a raster representation of each U.S. state, excluding Alaska and Hawaii: (1) total footprint coverage, (2) number of unique buildings intersecting each cell, (3) number of building centroids falling inside each cell, and area of the (4) average, (5) smallest, and (6) largest area of buildings that intersect each cell. These values are represented as raster layers with 30 m cell size covering the 48 conterminous states. We also identify errors in the original building dataset. We evaluate precision and recall in the data for three large U.S. urban areas. Precision is high and comparable to results reported by Microsoft while recall is high for buildings with footprints larger than 200 m2 but lower for progressively smaller buildings.
Building footprints are a critical environmental descriptor. Microsoft produced a U.S.-wide vector building dataset in 2018 [1] that was generated from aerial images available to Bing Maps using deep learning methods for object classification [2]. The main goal of this product has been to increase the coverage of building footprints available for OpenStreetMap. Microsoft identified building footprints in two phases: first, using semantic segmentation to identify building pixels from aerial imagery with Deep Neural Networks, and second, converting building pixel blobs into polygons. The final dataset includes 125,192,184 building footprint polygon geometries in GeoJSON vector format, covering all 50 U.S. states, with data for each state distributed separately. These data have 99.3% precision and 93.5% pixel recall accuracy [2]. The temporal resolution of the data (i.e., the years of the aerial imagery used to derive the data) is not provided by Microsoft in the metadata.
Using vector layers for large-extent (i.e., national or state-level) spatial analysis and modelling (e.g., mapping the Wildland-Urban Interface, flood and coastal hazards, or large-extent urban typology modelling) is challenging in practice. Although vector data provide accurate geometries, incorporating them in large-extent spatial analysis comes at a high computational cost. We used High Performance Computing (HPC) to develop an algorithm that calculates six summary statistics (described below) for buildings at 30-m cell size in the 48 conterminous U.S. states, to better support national-scale and multi-state modelling that requires building footprint data. To develop these six derived products from the Microsoft buildings dataset, we created an algorithm that takes each building, builds a small meshgrid (a 2D array) for the bounding box of the building, and calculates unique values for each cell of the meshgrid. This grid structure is aligned with National Land Cover Database (NLCD) products (projected using the Albers Equal Area Conic system), enabling researchers to combine or compare our products with standard national-scale datasets such as land cover, tree canopy cover, and urban imperviousness [3].
Locations, shapes, and distribution patterns of structures in urban and rural areas are the subject of many studies. Buildings represent the density of built-up areas as an indicator of urban morphology or of the spatial structure of cities and metropolitan areas [4,5]. In local studies, the use of vector data types is easier [6,7]. However, in regional and national studies a raster dataset is preferable. For example, in measuring the spatial structure of metropolitan areas a rasterized building layer is more useful than the original vector datasets [8].
Our output raster products are: (1) total building footprint coverage per cell (m2 of building footprint per 900 m2 cell); (2) number of buildings that intersect each cell; (3) number of building centroids falling within each cell; (4) area of the largest building intersecting each cell (m2); (5) area of the smallest building intersecting each cell (m2); and (6) average area of all buildings intersecting each cell (m2). The last three area metrics include building area that falls outside the cell but where part of the building intersects the cell (Fig. 1). These values can be used to describe the intensity and typology of the built environment.
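A simplified sketch of the per-cell bookkeeping behind these six summary values (illustrative only; the grid origin and example footprint below are assumptions, and the actual HPC implementation is distributed through the USGS code repository cited next):

```python
from shapely.geometry import Polygon, box

CELL = 30.0                     # cell size in metres
ORIGIN_X, ORIGIN_Y = 0.0, 0.0   # assumed grid origin (the real product aligns to the NLCD grid)

def update_cells(building, stats):
    """Accumulate the six per-cell metrics for one building footprint."""
    area = building.area
    minx, miny, maxx, maxy = building.bounds
    i0, i1 = int((minx - ORIGIN_X) // CELL), int((maxx - ORIGIN_X) // CELL)
    j0, j1 = int((miny - ORIGIN_Y) // CELL), int((maxy - ORIGIN_Y) // CELL)
    for i in range(i0, i1 + 1):
        for j in range(j0, j1 + 1):
            cell = box(ORIGIN_X + i * CELL, ORIGIN_Y + j * CELL,
                       ORIGIN_X + (i + 1) * CELL, ORIGIN_Y + (j + 1) * CELL)
            overlap = building.intersection(cell).area
            if overlap == 0:
                continue
            s = stats.setdefault((i, j), {"coverage": 0.0, "n_buildings": 0,
                                          "n_centroids": 0, "areas": []})
            s["coverage"] += overlap                                   # (1) footprint m2 inside the cell
            s["n_buildings"] += 1                                      # (2) buildings intersecting the cell
            s["n_centroids"] += int(cell.contains(building.centroid))  # (3) centroids inside the cell
            s["areas"].append(area)                                    # for (4)-(6) mean/min/max building area

stats = {}
update_cells(Polygon([(10, 10), (70, 10), (70, 55), (10, 55)]), stats)
for (i, j), s in sorted(stats.items()):
    print((i, j), round(s["coverage"], 1), s["n_buildings"], s["n_centroids"],
          sum(s["areas"]) / len(s["areas"]), min(s["areas"]), max(s["areas"]))
```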
Our software is available through U.S. Geological Survey code r...
https://dataintelo.com/privacy-and-policy
According to our latest research, the global vector indexing engine market size reached USD 1.82 billion in 2024, with a robust year-on-year growth trajectory. The market is poised to expand at a CAGR of 24.7% from 2025 to 2033, propelling the total market value to an estimated USD 14.13 billion by 2033. This extraordinary momentum is primarily attributed to the surging demand for high-performance data retrieval solutions across various industries, as enterprises increasingly rely on artificial intelligence (AI) and machine learning (ML) applications that require efficient handling of high-dimensional data vectors.
A key growth factor for the vector indexing engine market is the exponential rise in unstructured data generated by digital transformation initiatives worldwide. Organizations are rapidly adopting AI-driven solutions for tasks such as semantic search, personalized recommendations, and advanced analytics, all of which necessitate efficient vector indexing capabilities. The proliferation of IoT devices, social media platforms, and enterprise applications has resulted in massive repositories of complex data types, driving the need for scalable and high-speed vector search engines. These engines enable businesses to extract actionable insights from vast datasets, fueling innovation and competitive differentiation across sectors such as BFSI, healthcare, retail, and telecommunications.
Another significant driver is the technological advancements in vector indexing algorithms and hardware acceleration. The integration of GPU and FPGA-based architectures has dramatically improved the performance and scalability of vector indexing engines, allowing for real-time processing of billions of vectors. Innovations such as approximate nearest neighbor (ANN) search, hierarchical navigable small world (HNSW) graphs, and product quantization are enabling faster and more accurate data retrieval. These advancements are crucial for powering next-generation applications like generative AI, autonomous systems, and fraud detection, thereby expanding the addressable market and enhancing the value proposition for end-users.
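For context, the core operation such engines accelerate is nearest-neighbour search over embedding vectors. The brute-force baseline below (an illustrative numpy sketch, not any vendor's implementation) is exactly what ANN indexes such as HNSW or product quantization approximate at a fraction of the cost:

```python
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.normal(size=(100_000, 128)).astype(np.float32)   # stored embedding vectors
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)       # normalise for cosine similarity

query = rng.normal(size=128).astype(np.float32)
query /= np.linalg.norm(query)

scores = corpus @ query              # exact cosine similarity against every vector: O(N * d)
top5 = np.argsort(-scores)[:5]       # ANN indexes return (approximately) these without a full scan
print(top5, scores[top5])
```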
The increasing adoption of cloud-based deployment models also acts as a pivotal growth catalyst for the vector indexing engine market. Cloud platforms offer unparalleled scalability, flexibility, and cost-efficiency, enabling organizations to deploy vector search solutions without the need for significant upfront investments in hardware infrastructure. Major cloud service providers are integrating vector indexing engines into their AI and analytics offerings, making it easier for enterprises to leverage these capabilities as part of broader digital transformation strategies. This shift towards cloud-native architectures is expected to accelerate further as businesses prioritize agility and remote accessibility in a post-pandemic world.
From a regional perspective, North America continues to dominate the vector indexing engine market, accounting for the largest revenue share in 2024. This leadership is underpinned by the region’s advanced technology ecosystem, high R&D investments, and early adoption of AI/ML solutions across key industries. However, Asia Pacific is emerging as the fastest-growing region, driven by rapid digitalization, expanding e-commerce sectors, and increasing government initiatives to foster AI innovation. Europe also demonstrates strong growth potential, particularly in sectors such as healthcare and manufacturing, where data-driven decision-making is becoming increasingly critical. As global enterprises seek to harness the power of vector indexing for competitive advantage, the market is set for sustained expansion across all major regions.
The vector indexing engine market is segmented by component into software, hardware, and services, each playing a distinct role in the overall ecosystem. Software solutions represent the core of the market, providing the algorithms and frameworks necessary for efficient vector indexing, search, and retrieval. These solutions are continually evolving, with vendors focusing on enhancing scalability, reducing latency, and supporting integration with popular AI/ML toolchains. The software segment is highly competitive, with both open-source and commercial offerings catering to a diverse range of use cases, from enterprise search to recommendation engines. The g
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Wadi Hasa Sample Dataset — GRASS GIS Location
Version 1.0 (2025-09-19)
Overview
--------
This archive contains a complete GRASS GIS *Location* for the Wadi Hasa region (Jordan), including base data and exemplar analyses used in the Geomorphometry chapter. It is intended for teaching and reproducible research in archaeological GIS.
How to use
----------
1) Unzip the archive into your GRASSDATA directory (or a working folder) and add the Location to your GRASS session.
2) Start GRASS and open the included workspace (Workspace.gxw) or choose a Mapset to work in.
3) Set the computational region to the default extent/resolution for reproducibility:
g.region n=3444220 s=3405490 e=796210 w=733450 nsres=30 ewres=30 -p
4) Inspect layers as needed:
g.list type=rast,vector
r.info
Citation & License
------------------
Please cite this dataset as:
Isaac I. Ullah. 2025. *Wadi Hasa Sample Dataset (GRASS GIS Location)*. Zenodo. https://doi.org/10.5281/zenodo.17162040
All contents are released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. The original Wadi Hasa survey dataset is available at: https://figshare.com/articles/dataset/Wadi_Hasa_Ancient_Pastoralism_Project/1404216
Coordinate Reference System
---------------------------
- Projection: UTM, Zone 36N
- Datum/Ellipsoid: WGS84
- Units: meter
- Coordinate system and units are defined in the GRASS Location (PROJ_INFO/UNITS).
Default Region (computational extent & resolution)
--------------------------------------------------
- North: 3444220
- South: 3405490
- East: 796210
- West: 733450
- Resolution: 30 (NS), 30 (EW)
- Rows x Cols: 1291 x 2092 (cells: 2700772)
Directory / Mapset Structure
----------------------------
This Location contains the following Mapsets (data subprojects), each with its own raster/vector layers and attribute tables (SQLite):
- Boolean_Predictive_Modeling: 8 raster(s), 4 vector(s)
- ISRIC_soilgrid: 31 raster(s), 0 vector(s)
- Landsat_Imagery: 3 raster(s), 0 vector(s)
- Landscape_Evolution_Modeling: 41 raster(s), 0 vector(s)
- Least_Cost_Analysis: 13 raster(s), 4 vector(s)
- Machine_Learning_Predictive_Modeling: 70 raster(s), 11 vector(s)
- PERMANENT: 4 raster(s), 2 vector(s)
- Sentinel2_Imagery: 4 raster(s), 0 vector(s)
- Site_Buffer_Analysis: 0 raster(s), 2 vector(s)
- Terrain_Analysis: 27 raster(s), 2 vector(s)
- Territory_Modeling: 14 raster(s), 2 vector(s)
- Trace21k_Paleoclimate_Downscale_Example: 4 raster(s), 2 vector(s)
- Visibility_Analysis: 11 raster(s), 5 vector(s)
Data Content (summary)
----------------------
- Total raster maps: 230
- Total vector maps: 34
Raster resolutions present:
- 10 m: 13 raster(s)
- 30 m: 183 raster(s)
- 208.01 m: 2 raster(s)
- 232.42 m: 30 raster(s)
- 1000 m: 2 raster(s)
Major content themes include:
- Base elevation surfaces and terrain derivatives (e.g., DEMs, slope, aspect, curvature, flow accumulation, prominence).
- Hydrology, watershed, and stream-related layers.
- Visibility analyses (viewsheds; cumulative viewshed analyses for Nabataean and Roman towers).
- Movement and cost-surface analyses (isotropic/anisotropic costs, least-cost paths, time-to-travel surfaces).
- Predictive modeling outputs (boolean/inductive/deductive; regression/classification surfaces; training/test rasters).
- Satellite imagery products (Landsat NIR/RED/NDVI; Sentinel‑2 bands and RGB composite).
- Soil and surficial properties (ISRIC SoilGrids 250 m products).
- Paleoclimate downscaling examples (CHELSA TraCE21k MAT/AP).
Vectors include:
- Archaeological point datasets (e.g., WHS_sites, WHNBS_sites, Nabatean_Towers, Roman_Towers).
- Derived training/testing samples and buffer polygons for modeling.
- Stream network and paths from least-cost analyses.
Important notes & caveats
-------------------------
- Mixed resolutions: Analyses span 10 m (e.g., Sentinel‑2 composites, some derived surfaces), 30 m (majority of terrain and modeling rasters), ~232 m (SoilGrids products), and 1 km (CHELSA paleoclimate). Set the computational region appropriately (g.region) before processing or visualization.
- NoData handling: The raw SRTM import (Hasa_30m_SRTM) reports extreme min/max values caused by nodata placeholders. Use the clipped/processed DEMs (e.g., Hasa_30m_clipped_wshed*) and/or set nodata with r.null as needed.
- Masks: MASK rasters are provided for analysis subdomains where relevant.
- Attribute tables: Vector attribute data are stored in per‑Mapset SQLite databases (sqlite/sqlite.db) and connected via layer=1.
Provenance (brief)
------------------
- Primary survey points and site datasets derive from the Wadi Hasa projects (see Figshare record above).
- Base elevation and terrain derivatives are built from SRTM and subsequently processed/clipped for the watershed.
- Soil variables originate from ISRIC SoilGrids (~250 m).
- Paleoclimate examples use CHELSA TraCE21k surfaces (1 km) that are interpolated to higher resolutions for demonstration.
- Satellite imagery layers are derived from Landsat and Sentinel‑2 scenes.
Reproducibility & quick commands
--------------------------------
- Restore default region: g.region n=3444220 s=3405490 e=796210 w=733450 nsres=30 ewres=30 -p
- Set region to match a raster: g.region raster=<map name> (see the Python sketch below)
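A minimal Python sketch of the same steps using the GRASS scripting API (assumes it is run inside an active GRASS session with this Location):

```python
import grass.script as gs

# Restore the default computational region and print it (-p)
gs.run_command("g.region", n=3444220, s=3405490, e=796210, w=733450,
               nsres=30, ewres=30, flags="p")

# Count the raster maps available in each Mapset of the Location
for mapset, rasters in gs.list_grouped("raster").items():
    print(mapset, len(rasters), "raster(s)")
```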
Change log
----------
- v1.0: Initial public release of the teaching Location on Zenodo (CC BY 4.0).
Contact
-------
For questions, corrections, or suggestions, please contact Isaac I. Ullah
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This geomorphic reference data is intended for use as a calibration and/or validation data for global scale remote sensing image based classification of coral reef areas. The data is in a vector shapefile format consisting of polygons labelled into one of thirteen coral reef geomorphic classes that include Patch Reef, Plateau, No Reef, Outer Reef Flat, Deep Water, Inner Reef Flat, Back Reef Slope, Deep Lagoon, Terrestrial Reef Flat, Shallow Lagoon, Sheltered Reef Slope, Reef Crest and Reef Slope.
Please cite data as:
Borrego-Acevedo, R., Wolff, J., Canto, R., Harris, D., Kennedy, E., Kovacs, E., Lyons, M., Markey, K., Murray, N., Ordonez Alvarez, A., Phinn, S., Roe, M., Roelfsema, C., Say, C., Tudman, P., Yuwono, D., Bambric, B., Fox, H., Lieb, Z., Asner, G., Knapp, D., Li, J., Harris, B., Larsen, K. & Rice, K. 2020. South Asia Geomorphic Type Reference Sample. University of Queensland.
DOI: https://doi.org/10.6084/m9.figshare.13619489
Links to supplementary information:
Mapping methods overview (https://allencoralatlas.org/methods/)
UQ mapping team web site (https://www.rsrc.org.au/globalreef)
Detailed mapping methods description by Lyons et al. (https://doi.org/10.1002/rse2.157)
Detailed classification scheme description by Kennedy et al. (https://doi.org/10.1038/s41597-021-00958-z)
Detailed reference creation description workflow by Roelfsema et al. (https://doi.org/10.3389/fmars.2021.643381)
Funding:
Vulcan Inc., (https://vulcan.com)
Great Barrier Reef Foundation (https://www.barrierreef.org/)
Acknowledgement:
This work was initiated and funded primarily through Paul Allen Philanthropies and Vulcan Inc. as part of the Allen Coral Atlas. We acknowledge the late Paul Allen and Ruth Gates for their fundamental vision and drive to enable us to work together on this critical reef mapping problem. Project partners providing financial, service and personnel include: Planet Inc., National Geographic, University of Queensland, Arizona State University, and University of Hawai'i. Significant support has also been provided by Google Inc., Great Barrier Reef Foundation, and Trimble (Ecognition). Contributors to establishing and running the project include: Vulcan Inc. [James Deutsch, Lauren Kickam, Paulina Gerstner, Charlie Whiton, Kirk Larsen, Sarah Frias Torres, Kyle Rice, Eldan Goldenberg, Janet Greenlee]; Planet Inc. [Andrew Zolli, Trevor McDonald, Joe Mascaro, Joe Kington]; University of Queensland [Chris Roelfsema, Stuart Phinn, Emma Kennedy, Mitch Lyons, Nicholas Murray, Doddy Yuwono, Dan Harris, Eva Kovacs, Rodney Borrego, Meredith Roe, Jeremy Wolff, Kathryn Markey, Alexandra Ordonez, Chantal Say, Paul Tudman]; Arizona State University [Greg Asner, Dave Knapp, Jiwei Li, Yaping Xu, Nick Fabina, Heather D'Angelo]; and National Geographic [Helen Fox, Brianna Bambic, Brian Free, Zoe Lieb] and Great Barrier Reef Foundation [Petra Lundgren, Kirsty Bevan, Sarah Castine].
Attributions of contributed field data or maps for the reference sample creation are available at: https://allencoralatlas.org/attribution/
Land use data is critically important to the work of the Department of Water Resources (DWR) and other California agencies. Understanding the impacts of land use, crop location, acreage, and management practices on environmental attributes and resource management is an integral step in the ability of Groundwater Sustainability Agencies (GSAs) to produce Groundwater Sustainability Plans (GSPs) and implement projects to attain sustainability. Land IQ was contracted by DWR to develop a comprehensive and accurate spatial land use database for the 2021 water year (WY 2021), covering over 10.7 million acres of agriculture on a field scale and additional areas of urban extent. The primary objective of this effort was to produce a spatial land use database with an accuracy exceeding 95% using remote sensing, statistical, and temporal analysis methods. This project is an extension of the land use mapping that began in the 2014 crop year, which classified over 15 million acres of land into agricultural and urban areas. Unlike the 2014 and 2016 datasets, the annual WY datasets from and including 2018, 2019, 2020, and 2021 include multi-cropping. Land IQ integrated crop production knowledge with detailed ground truth information and multiple satellite and aerial image resources to conduct remote sensing land use analysis at the field scale. Individual fields (boundaries of homogeneous crop types representing true cropped area, rather than legal parcel boundaries) were classified using a crop category legend and a more specific crop type legend. A supervised classification process using a random forest approach was used to classify delineated fields and was carried out county by county where training samples were available. Random forest approaches are currently some of the highest performing methods for data classification and regression. To determine frequency and seasonality of multicropped fields, peak growth dates were determined for each field of annual crops. Fields were attributed with DWR crop categories, which included citrus/subtropical, deciduous fruits and nuts, field crops, grain and hay, idle, pasture, rice, truck crops, urban, vineyards, and young perennials. These categories represent aggregated groups of specific crop types in the Land IQ dataset. Accuracy was calculated for the crop mapping using both DWR and Land IQ crop legends. The overall accuracy result for the crop mapping statewide was 97% using the Land IQ legend (Land IQ Subclass) and 98% using the DWR legend (DWR Class). Accuracy and error results varied among crop types. Some less extensive crops that have very few validation samples may have a skewed accuracy result depending on the number and nature of validation sample points. Crops and conditions that DWR revised from the Land IQ classification were encoded using standard DWR land use codes added to the feature attributes, and each modified classification is indicated by the value 'r' in the 'DWR_REVISE' data field. Polygons drawn by DWR, not included in the Land IQ dataset, receive the 'n' code for new. Boundary changes (i.e., DWR changed the boundary that Land IQ delivered, possibly a split boundary) are indicated by 'b'. Each polygon classification is consistent with DWR attribute standards; however, some of DWR's traditional attribute definitions are modified and extended to accommodate unavoidable constraints within remote-sensing classifications, or to make data more specific for DWR's water balance computation needs.
The original Land IQ classifications reported for each polygon are preserved for comparison and are also expressed as DWR standard attributes. Comments, problems, improvements, updates, or suggestions about local conditions or revisions in the final data set should be forwarded to the appropriate Regional Office Senior Land Use Supervisor. Revisions were made if:
- DWR corrected the original crop classification based on local knowledge and analysis;
- crops irrigated for only part of their normal irrigation season were given the special condition 'X' (partially irrigated crops);
- in certain areas, DWR changed the irrigation status to non-irrigated; among those areas the special condition may have been changed to 'Partially Irrigated' based on image analysis and local knowledge;
- young versus mature stages of perennial orchards and vineyards were identified (DWR added 'Young' to the Special Condition attributes);
- DWR determined that a field originally classified 'Idle' or 'Unclassified' was actually cropped one or more times during the year;
- the percent of cropped area was changed from the original acres reported by Land IQ (values indicated in the DWR 'Percent' column);
- DWR determined that the field boundary should have been changed to better reflect the cropped area of the polygon (identified by a 'b' in the DWR_REVISED column);
- DWR determined that the field boundary should have been split to better reflect separate crops within the same polygon (also identified by a 'b' in the DWR_REVISED column);
- 'Mixed' was added to the MULTIUSE column, which refers to no boundary change but a changed percent of field where more than one crop is found;
- DWR identified a distinct early or late crop on the field before the main season crop ('Double' was added to the MULTIUSE column); if the 1st and 2nd sequential crops occupied different portions of the total field acreage, the area percentages were indicated for each crop.
This dataset includes multicropped fields. If the field was determined to have more than one crop during the course of the WY (the Water Year begins October 1 and ends September 30 of the following year), the order of the crops is sequential, beginning with Class 1. All single-cropped fields are placed in Class 2, so every polygon will have a crop in the Class 2 and CropType2 columns. In the case that a permanent crop was removed during the WY, the Class 2 crop will be the permanent crop followed by 'X' (Unclassified fallow) in the Class 3 column. In the case of intercropping, the main crop is placed in the Class 2 column with the partial crop in the Class 3 column. A new column for the 2019, 2020, and 2021 datasets is called 'MAIN_CROP'. This column indicates the crop Land IQ identified as the main season crop for the WY, representing the crop grown during the dominant growing season for each county. The column 'MAIN_CROP_DATE', another addition to the 2019, 2020, and 2021 datasets, indicates the Normalized Difference Vegetation Index (NDVI) peak date for this main season crop. The column 'EMRG_CROP' for 2019, 2020, and 2021 indicates an emerging crop at the end of the WY. Crops listed indicate that at the end of the WY, September 2021, crop activity was detected from a crop that reached peak NDVI in the following WY (2022 WY). This attribute is included to account for water use of crops that span multiple WYs and are not exclusive to a single WY.
It is indicative of early crop growth and initial water use in the current WY, but a majority of crop development and water use in the following WY. Crops listed in the 'EMRG_CROP' attribute will also be captured as the first crop (not necessarily Crop 1) in the following WY (2022 WY). These crops are not included in the 2021 UCF_ATT code as their peak date occurred in the following WY. For the 2021 dataset, the new columns added are 'YR_PLANTED', which represents the year the orchard or grove was planted, and 'SEN_CROP', which indicates a senescing crop at the beginning of the WY. Crops listed indicate that at the beginning of the WY, October 2020, crop activity was detected from a crop that reached peak NDVI in the previous WY (2020 WY) and was thus a senescing crop. This is included to account for water use of crop growth periods that span multiple WYs and are not exclusive to a single WY. Crops listed in the 'SEN_CROP' attribute are also captured in the CROPTYP 1 through 4 sequence of the previous WY (2020 WY). These crops are not included in the 2021 UCF_ATT code as their peak NDVI occurred in the previous WY. CTYP#_NOTE indicates a more specific land use subclassification from the DWR Standard Land Use Legend that is not included in the primary DWR Remote Sensing Land Use Legend. DWR reviewed and revised the data in some cases. The associated data are considered DWR enterprise GIS data, which meet all appropriate requirements of the DWR Spatial Data Standards, specifically the DWR Spatial Data Standard version 3.6, dated September 27, 2023. This data set was not produced by DWR. Data were originally developed and supplied by Land IQ, LLC, under contract to the California Department of Water Resources. DWR makes no warranties or guarantees, either expressed or implied, as to the completeness, accuracy, or correctness of the data. DWR neither accepts nor assumes liability arising from or for any incorrect, incomplete, or misleading subject data. Detailed compilation and reviews of Statewide Crop Mapping and metadata development were performed by DWR Land Use Unit staff; therefore you may forward your questions to Landuse@water.ca.gov. This dataset is current as of 2021.
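A toy sketch (illustrative values only, not Land IQ's pipeline) of the peak-NDVI logic behind the MAIN_CROP_DATE attribute:

```python
import pandas as pd

# Monthly NDVI observations for one field over WY 2021 (Oct 2020 onward)
ndvi = pd.Series(
    [0.21, 0.28, 0.45, 0.63, 0.71, 0.58, 0.40, 0.25],
    index=pd.date_range("2020-10-01", periods=8, freq="MS"),
)

peak_date = ndvi.idxmax()  # date of maximum NDVI -> candidate MAIN_CROP_DATE
print("peak NDVI date:", peak_date.date(), "NDVI:", ndvi.max())
```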
description: The Geospatial Fabric provides a consistent, documented, and topologically connected set of spatial features that create an abstracted stream/basin network of features useful for hydrologic modeling. The GIS vector features contained in this Geospatial Fabric (GF) data set cover the lower 48 U.S. states, Hawaii, and Puerto Rico. Four GIS feature classes are provided for each Region: 1) the Region outline ("one"), 2) Points of Interest ("POIs"), 3) a routing network ("nsegment"), and 4) Hydrologic Response Units ("nhru"). A graphic showing the boundaries for all Regions is provided at http://dx.doi.org/doi:10.5066/F7542KMD. These Regions are identical to those used to organize the NHDPlus v.1 dataset (US EPA and US Geological Survey, 2005). Although the GF Feature data set has been derived from NHDPlus v.1, it is an entirely new data set that has been designed to generically support regional and national scale applications of hydrologic models. Definition of each type of feature class and its derivation is provided within the
Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
Wireframe (line) vector data describing the ground footprint of the public right-of-way. Example(s) of data reuse: mapping on http://demo.3liz.fr/opendataparis
This is a vector tile service with labels for the fine scale vegetation and habitat map, to be used in web maps and GIS software packages. Labels appear at scales greater than 1:10,000 and characterize stand height, stand canopy cover, stand map class, and stand impervious cover. This service is meant to be used in conjunction with the vector tile services of the polygons themselves (either the solid symbology service or the hollow symbology service). The key to the labels appears in the graphic below; the key to map class abbreviations can be found here. The Sonoma County fine scale vegetation and habitat map is an 82-class vegetation map of Sonoma County with 212,391 polygons. The fine scale vegetation and habitat map represents the state of the landscape in 2013 and adheres to the National Vegetation Classification System (NVC). The map was designed to be used at scales of 1:5,000 and smaller. The full datasheet for this product is available here: https://sonomaopenspace.egnyte.com/dl/qOm3JEb3tD The final report for the fine scale vegetation map, containing methods and an accuracy assessment, is available here: https://sonomaopenspace.egnyte.com/dl/1SWyCSirE9 Class definitions, as well as a dichotomous key for the map classes, can be found in the Sonoma Vegetation and Habitat Map Key (https://sonomaopenspace.egnyte.com/dl/xObbaG6lF8). The fine scale vegetation and habitat map was created using semi-automated methods that include field work, computer-based machine learning, and manual aerial photo interpretation. The vegetation and habitat map was developed by first creating a lifeform map, an 18-class map that served as a foundation for the fine-scale map. The lifeform map was created using "expert systems" rulesets in Trimble Ecognition. These rulesets combine automated image segmentation (stand delineation) with object-based image classification techniques. In contrast with machine learning approaches, expert systems rulesets are developed heuristically based on the knowledge of experienced image analysts. Key data sets used in the expert systems rulesets for lifeform included: orthophotography ('11 and '13), the LiDAR-derived Canopy Height Model (CHM), and other LiDAR-derived landscape metrics. After it was produced using Ecognition, the preliminary lifeform map product was manually edited by photo interpreters. Manual editing corrected errors where the automated methods produced incorrect results. Edits were made to correct two types of errors: 1) unsatisfactory polygon (stand) delineations and 2) incorrect polygon labels. The mapping team used the lifeform map as the foundation for the finer scale and more floristically detailed Fine Scale Vegetation and Habitat map. For example, a single polygon mapped in the lifeform map as forest might be divided into four polygons in the fine scale map, including redwood forest, Douglas-fir forest, Oregon white oak forest, and bay forest. The fine scale vegetation and habitat map was developed using a semi-automated approach. The approach combines Ecognition segmentation, extensive field data collection, machine learning, manual editing, and expert review. Ecognition segmentation results in a refinement of the lifeform polygons. Field data collection results in a large number of training polygons labeled with their field-validated map class. Machine learning relies on the field collected data as training data and a stack of GIS datasets as predictor variables. The resulting model is used to create automated fine-scale labels countywide.
Machine learning algorithms for this project included both Random Forests and Support Vector Machines (SVMs). Machine learning is followed by extensive manual editing, which is used to 1) edit segment (polygon) labels when they are incorrect and 2) edit segment (polygon) shape when necessary. The map classes in the fine scale vegetation and habitat map generally correspond to the alliance level of the National Vegetation Classification, but some map classes - especially riparian vegetation and herbaceous types - correspond to higher levels of the hierarchy (such as group or macrogroup).
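A schematic scikit-learn sketch of that machine-learning step (synthetic stand-in features and labels only; the project's actual predictor stack and field-collected training polygons are described above):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Per-segment predictor variables (e.g. mean canopy height, NDVI, slope, imperviousness)
X = rng.normal(size=(500, 6))
y = rng.integers(0, 4, size=500)   # stand-in for field-validated map class labels

for name, model in [
    ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("SVM", SVC(kernel="rbf", C=1.0)),
]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```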
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Lossless data compression has become essential for effective data compression and computation in VLSI test vector generation and testing, in addition to hardware AI/ML computations. Golomb coding is an effective technique for lossless data compression; the Golomb-Rice variant applies when the divisor can be expressed as a power of two. This work aims to increase the compression ratio by further encoding the unary part of the Golomb Rice (GR) code so as to decrease the number of bits used, and it mainly focuses on optimizing the hardware on the encoding side. The algorithm was developed and coded in Verilog and simulated using Modelsim. The code was then synthesised in the Cadence Encounter RTL Synthesiser. The modifications carried out show around a 6% to 19% reduction in bits used for a linearly distributed data set. Worst-case delays have been reduced by 3% to 8%. Area reduction varies from 22% to 36% for different methods. Simulation of power consumption shows nearly a 7% reduction in switching power. This suggests the use of the Golomb Rice coding technique for test vector compression and data computation for multiple data types, whose values should ideally follow a geometric distribution.
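For reference, a minimal sketch of baseline Golomb-Rice encoding with a power-of-two divisor (one common bit convention; the paper's contribution, re-encoding the unary part, is not shown):

```python
# Golomb-Rice encoding with divisor M = 2**k:
# the quotient is written in unary and the remainder in k binary bits.
def golomb_rice_encode(value: int, k: int) -> str:
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")  # unary quotient, stop bit, k-bit remainder

for v in [0, 3, 9, 17]:
    print(v, "->", golomb_rice_encode(v, k=2))
```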
Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
This paper presents a novel approach for the interactive optimization of sonification parameters. In a closed loop, the system automatically generates modified versions of an initial (or previously selected) sonification via gradient ascent or evolutionary algorithms. The human listener directs the optimization process by providing relevance feedback about the perceptual quality of these propositions. In summary, the scheme allows users to bring in their perceptual capabilities without burdening them with computational tasks. It also allows for continuous updating of exploration goals in the course of an exploration task. Finally, interactive optimization is a promising novel paradigm for solving the mapping problem and for a user-centred design of auditory displays. The paper gives a full account of the technique, and demonstrates the optimization on synthetic and real-world data sets.
### Sonification examples
Fisher data set (two overlapping clusters): The sonification examples are the parents of a series of optimization steps, starting with a random sonification and optimizing towards an audible separation of clusters. SFk is the parent of iteration k.
+ SF1 (mp3, 28k)
+ SF3 (mp3, 28k)
+ SF5 (mp3, 28k)
+ SF6 (mp3, 28k)
+ SF7 (mp3, 28k)
+ SF9 (mp3, 28k)
+ SF10 (mp3, 28k) good audible clustering structure
+ SF11 (mp3, 28k)
+ SF12 (mp3, 28k)
+ SF13 (mp3, 28k)
+ SF14 (mp3, 28k)
+ SF15 (mp3, 28k)
+ SF17 (mp3, 28k)
+ SF18 (mp3, 28k)
+ SF19 (mp3, 28k)
+ SF20 (mp3, 28k)
+ SF21 (mp3, 28k)
+ SF22 (mp3, 28k)
+ SF23 (mp3, 28k)
+ SF24 (mp3, 28k)
+ SF25 (mp3, 28k)
+ SF33 (mp3, 28k)
+ SF34 (mp3, 28k)
+ SF37 (mp3, 28k)
Iris data set (4d data with 3 clusters, 150 items): The sonification examples are again the parents for the successive iterations during evolutionary optimization. SIk denotes (S)ound example for (I)ris data set iteration k.
+ SI1 (mp3, 40k)
+ SI2 (mp3, 40k)
+ SI3 (mp3, 40k)
+ SI4 (mp3, 40k)
+ SI5 (mp3, 40k)
+ SI6 (mp3, 40k)
+ SI7 (mp3, 40k)
+ SI8 (mp3, 40k)
+ SI9 (mp3, 40k)
+ SI10 (mp3, 40k)
The following examples are optimization results for increased attention to amplitude and panning.
+ SI Amplitude (mp3, 40k)
+ SI Panning (mp3, 40k)
### SuperCollider Extensions
Find here SuperCollider classes by Thomas Hermann, which are useful for programming sonifications.
#### OctaveSC
OctaveSC is a class to interface with the free, powerful math package Octave.
Description: The class allows you to call Octave functions and execute Octave instructions from sc3, transfer data between Octave and sc3, and use the SuperCollider rtf document editor as an Octave shell (tested on OSX): via CTRL-RETURN the current line or selection can be executed in Octave. This allows Octave code and explanatory text to be interleaved in the same way as can be done with sc code.
#### Download
OctaveSC is provided as a zip archive (download OctaveSC.zip, 16kB) with the OctaveSC class directory containing the class file OctaveSC.sc and a help file OctaveSC.help.rtf. See the README.txt for installation instructions and how to get started.
#### Contributing to OctaveSC
The standard data types (scalars, vectors, matrices) of numbers work reliably, and I find OctaveSC very useful. However, its functionality is far from complete in this version.
In particular, I'd like to address for future versions:
+ the integration of high-level commands to exchange strings
+ exchange of string matrices
+ checks for proper dimensions when exchanging matrices and vectors
+ automatic matrix-to-vector conversion for Nx1 matrices (which currently appear in sc as arrays of arrays with 1 element each)
+ working with structures
Suggestions for improving OctaveSC are very welcome; please e-mail your code fragment for inclusion into the official distribution provided on this website.
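A conceptual Python sketch of the closed loop described in the abstract (not the authors' implementation): candidate sonifications are generated by mutation, and the listener's relevance feedback selects the parent for the next iteration.

```python
import random

def mutate(params, sigma=0.1):
    """Generate a perturbed copy of a sonification parameter vector."""
    return [p + random.gauss(0, sigma) for p in params]

def listener_feedback(candidates):
    # Placeholder for human relevance feedback: here the "listener" is a random
    # choice; in the real system a person rates the rendered sounds.
    return random.randrange(len(candidates))

params = [0.5, 0.5, 0.5]  # initial sonification parameters (e.g. pitch, pan, level mappings)
for iteration in range(10):
    candidates = [mutate(params) for _ in range(4)]
    params = candidates[listener_feedback(candidates)]

print("selected parameters after 10 iterations:", params)
```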
https://www.technavio.com/content/privacy-notice
GIS Market Size 2025-2029
The GIS market size is forecast to increase by USD 24.07 billion, at a CAGR of 20.3% between 2024 and 2029.
The Global Geographic Information System (GIS) market is experiencing significant growth, driven by the increasing integration of Building Information Modeling (BIM) and GIS technologies. This convergence enables more effective spatial analysis and decision-making in various industries, particularly in soil and water management. However, the market faces challenges, including the lack of comprehensive planning and preparation leading to implementation failures of GIS solutions. Companies must address these challenges by investing in thorough project planning and collaboration between GIS and BIM teams to ensure successful implementation and maximize the potential benefits of these advanced technologies.
By focusing on strategic planning and effective implementation, organizations can capitalize on the opportunities presented by the growing adoption of GIS and BIM technologies, ultimately driving operational efficiency and innovation.
What will be the Size of the GIS Market during the forecast period?
Explore in-depth regional segment analysis with market size data - historical 2019-2023 and forecasts 2025-2029 - in the full report.
The global Geographic Information Systems (GIS) market continues to evolve, driven by the increasing demand for advanced spatial data analysis and management solutions. GIS technology is finding applications across various sectors, including natural resource management, urban planning, and infrastructure management. The integration of Bing Maps, terrain analysis, vector data, Lidar data, and Geographic Information Systems enables precise spatial data analysis and modeling. Hydrological modeling, spatial statistics, spatial indexing, and route optimization are essential components of GIS, providing valuable insights for sectors such as public safety, transportation planning, and precision agriculture. Location-based services and data visualization further enhance the utility of GIS, enabling real-time mapping and spatial analysis.
The ongoing development of OGC standards, spatial data infrastructure, and mapping APIs continues to expand the capabilities of GIS, making it an indispensable tool for managing and analyzing geospatial data. The continuous unfolding of market activities and evolving patterns in the market reflect the dynamic nature of this technology and its applications.
How is this GIS Industry segmented?
The GIS industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.
Product
- Software
- Data
- Services
Type
- Telematics and navigation
- Mapping
- Surveying
- Location-based services
Device
- Desktop
- Mobile
Geography
- North America
  - US
  - Canada
- Europe
  - France
  - Germany
  - UK
- Middle East and Africa
  - UAE
- APAC
  - China
  - Japan
  - South Korea
- South America
  - Brazil
- Rest of World (ROW)
By Product Insights
The software segment is estimated to witness significant growth during the forecast period.
The Global Geographic Information System (GIS) market encompasses a range of applications and technologies, including raster data, urban planning, geospatial data, geocoding APIs, GIS services, routing APIs, aerial photography, satellite imagery, GIS software, geospatial analytics, public safety, field data collection, transportation planning, precision agriculture, OGC standards, location intelligence, remote sensing, asset management, network analysis, spatial analysis, infrastructure management, spatial data standards, disaster management, environmental monitoring, spatial modeling, coordinate systems, spatial overlay, real-time mapping, mapping APIs, spatial join, mapping applications, smart cities, spatial data infrastructure, map projections, spatial databases, natural resource management, Bing Maps, terrain analysis, vector data, Lidar data, and geographic information systems.
The software segment includes desktop, mobile, cloud, and server solutions. Open-source GIS software, with its industry-specific offerings, poses a challenge to the market, while the adoption of cloud-based GIS software represents an emerging trend. However, the lack of standardization and interoperability issues hinder the widespread adoption of cloud-based solutions. Applications in sectors like public safety, transportation planning, and precision agriculture are driving market growth. Additionally, advancements in technologies like remote sensing, spatial modeling, and real-time mapping are expanding the market's scope.
The Software segment was valued at USD 5.06 billion in 2019 and sho
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The terms "biology", "biopsy", "biolab", "biotin", and "almost" are unigrams, while "cancer-surviv" and "cancer-stage" are bigrams. Using TF/IDF weighting scores, the feature value of the term "almost" equals zero.
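A toy illustration (assumed three-document corpus) of why a term occurring in every document gets a TF-IDF weight of zero under the classic idf = log(N/df) definition:

```python
import math

docs = [
    "biology biopsy almost",
    "biolab biotin almost",
    "cancer-surviv cancer-stage almost",
]
N = len(docs)
term = "almost"

df = sum(term in d.split() for d in docs)       # document frequency = N here
idf = math.log(N / df)                          # log(3/3) = 0
tf = docs[0].split().count(term) / len(docs[0].split())
print("tf-idf of 'almost' in doc 0:", tf * idf)  # 0.0
```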
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is an updated and extended record of the Global Fire Atlas introduced by Andela et al. (2019). Input data (burned area and land cover products) are updated to the MODIS Collection 6.1 (the previous version was based on collection 6.0 burned area and collection 5.1 land cover products, respectively). The timeseries is extended to cover the period 2002 to February 2024.
Methodological Notes:
The method employed to create the dataset precisely follows the approach described by Andela et al. (2019).
The input burned area product is MCD64A1 Collection 6.1. It is described by Giglio et al. (2018) and available at: https://lpdaac.usgs.gov/products/mcd64a1v061/.
The input land cover product is MCD12Q1 Collection 6.1. It is described by Sulla-Menashe et al. (2019) and available at: https://lpdaac.usgs.gov/products/mcd12q1v061/.
Note that while the methods have remained the same compared to Andela et al. (2019), we do observe small differences between the Global Fire Atlas products originating from differences between the MCD64A1 collection 6.1 burned area data used here and the collection 6 data used in the original product. In addition, we observe more substantial differences in the dominant land cover class associated with each fire due to the differences between the MCD12Q1 collection 6.1 data used here and collection 5.1 data used in the original product.
The original dataset included time series from 2003 to 2016, including the full fire season for each year. For each MODIS tile, the fire season is defined as the twelve months centred on the month with peak burned area (see Andela et al., 2019). Here we extended the time series back to include the 2002 fire season and forward until February 2024; therefore, both the 2023 and 2024 files will contain incomplete records. For example, for a MODIS tile with peak burned area in December, the 2023 fire season would be defined as the period from July 2023 to June 2024, with the current record ending in February 2024. For the purpose of time-series analysis, we note that the 2002 product may have been affected by outages of Terra-MODIS (most notably, June 15 2001 - July 3 2001 and March 19 2002 - March 28 2002), which affect the burn date estimates and the Global Fire Atlas product. Following the launch of Aqua-MODIS in May 2002, burn date estimates are more reliable as they are estimated from both MODIS sensors on board Terra and Aqua.
Usage Notes:
Table 1: Overview of the Global Fire Atlas data layers. The shapefiles of ignition locations (point) and fire perimeters (polygon) contain attribute tables with summary information for each individual fire, while the underlying 500 m gridded layers reflect the day-to-day behavior of the individual fires. In addition, we provide aggregated monthly summary layers at a 0.25° resolution for regional and global analyses.
| File name | Content |
|---|---|
| SHP_ignitions.zip | Shapefiles of ignition locations with attribute tables (see Table 2) |
| SHP_perimeters.zip | Shapefiles of final fire perimeters with attribute tables (see Table 2) |
| GeoTIFF_direction.zip | 500 m resolution daily gridded data on direction of spread (8 classes) |
| GeoTIFF_day_of_burn.zip | 500 m resolution daily gridded data on day of burn (day of year; 1-366) |
| GeoTIFF_speed.zip | 500 m resolution daily gridded data on speed (km/day) |
| GeoTIFF_fire_line.zip | 500 m resolution daily gridded data on the fire line (day of year; 1-366) |
| GeoTIFF_monthly_summaries.zip | Aggregated 0.25° resolution monthly summary layers: the sum of ignitions, average size (km2), average duration (days), average daily fire line (km), average daily fire expansion (km2/day), average speed (km/day), and dominant direction of spread (8 classes) |
Table 2: Overview of the Global Fire Atlas shapefile attribute tables. The shapefiles of ignition locations (point) and fire perimeters (polygon) contain attribute tables with summary information for each individual fire.
| Attribute | Explanation / units |
|---|---|
| lat, lon | Coordinates of ignition location (°) |
| size | Fire size (km2) |
| perimeter | Fire perimeter (km) |
| start_date, start_DOY | Start date (yyyy-mm-dd), start day of year (1-366) |
| end_date, end_DOY | End date (yyyy-mm-dd), end day of year (1-366) |
| duration | Duration (days) |
| fire_line | Average length of daily fire line (km) |
| spread | Average daily fire growth (km2/day) |
| speed | Average speed (km/day) |
| direction, direc_frac | Dominant direction of spread (N, NE, E, SE, S, SW, W, NW) and associated fraction |
| MODIS_tile | MODIS tile id |
| landcover, landc_frac | MCD12Q1 dominant land cover class and fraction (UMD classification), provided for 2002-2023 |
| GFED_regio | GFED region (van der Werf et al., 2017; available at https://www.globalfiredata.org/) |
File Naming Convention:
GFA_v{time-stamp}_{data-type}_{fire_season}.{file_type}
{time-stamp} = Date that code was run.
{data-type} = “ignitions” or “perimeters” for vector files; “day_of_burn”, “direction”, “fire_line”, or “speed” for raster files.
{fire_season} = the locally-defined fire season in which the fire was ignited (see more below).
{file_type} = ".shp" for vector files; ".tif" for raster files.
Fire Season Convention:
Please note that the year string in filenames refers to the locally-defined fire season in which the fire ignited, not the calendar year. Hence the file GFA_v20240409_perimeters_2003.shp can include fires from the 2003 fire season that ignited in the calendar years 2002 or 2004. This is particularly relevant in the Southern extratropics and the northern hemisphere subtropics, where the fire seasons often span the new year. The local definition of the fire season is based on climatological peak in burned area as described by Andela et al. (2019).
Projections:
Vector data are provided in the WGS84 geographic coordinate system.
Raster data are provided on the MODIS sinusoidal projection used in NASA tiled products.
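A minimal geopandas sketch for working with the shapefile products (assumes SHP_perimeters.zip has been unzipped; the file name follows the naming convention above):

```python
import geopandas as gpd

# Load one fire-season perimeter file and summarise a few Table 2 attributes.
perims = gpd.read_file("GFA_v20240409_perimeters_2003.shp")
print(len(perims), "fires in the 2003 fire season")
print(perims[["size", "duration", "speed"]].describe())

# Largest ten fires by size (km2)
print(perims.nlargest(10, "size")[["size", "start_date", "landcover"]])
```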
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Research ships working at sea map the seafloor. The ships collect bathymetry data. Bathymetry is the measurement of how deep the sea is, and the study of the shape and features of the seabed. The name comes from Greek words meaning "deep" and "measure". Backscatter is the measurement of how hard the seabed is. Bathymetry and backscatter data are collected on board boats working at sea. The boats use special equipment called a multibeam echosounder. A multibeam echosounder is a type of sonar that is used to map the seabed. Sound waves are emitted in a fan shape beneath the boat. The amount of time it takes for the sound waves to bounce off the bottom of the sea and return to a receiver is used to find out the water depth. The strength of the sound wave is used to find out how hard the bottom of the sea is. A strong return indicates a hard surface (rocks, gravel), and a weak return indicates a soft surface (silt, mud). The word backscatter comes from the fact that different bottom types "scatter" sound waves differently. Using the equipment also allows predictions as to the type of material present on the seabed, e.g. rocks, pebbles, sand, mud. To confirm this, sediment samples are taken from the seabed. This process is called ground-truthing or sampling. Grab sampling is the most popular method of ground-truthing. There are three main types of grab used depending on the size of the vessel and the weather conditions: Day Grab, Shipek or Van Veen grabs. The grabs take a sample of sediment from the surface layer of the seabed. The samples are then sent to a lab for analysis. Particle size analysis (PSA) has been carried out on samples collected since 2004. The results are used to cross-reference the seabed sediment classifications that are made from the bathymetry and backscatter datasets and are used to create seabed sediment maps (mud, sand, gravel, rock). Sediments have been classified based on percentage sand, mud and gravel (after Folk 1954). This dataset shows locations that have completed samples from the seabed around Ireland. The bottom of the sea is known as the seabed or seafloor. These samples are known as grab samples. This is a dataset collected from 2001 to 2019. It is a vector dataset. Vector data portrays the world using points, lines and polygons (areas). The sample data is shown as points. Each point holds information on the surveyID, year, vessel name, sample id, instrument used, date, time, latitude, longitude, depth, report, recovery, percentage of mud, sand and gravel, description and Folk classification. The dataset was mapped as part of the Irish National Seabed Survey (INSS) and INFOMAR (Integrated Mapping for the Sustainable Development of Ireland's Marine Resource). Samples from related projects are also included: ADFish, DCU, FEAS, GATEWAYS, IMAGIN, IMES, INIS_HYDRO, JIBS, MESH, SCALLOP, SEAI and UCC.
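Two of the ideas above in miniature (an illustrative sketch only: the sound speed is a nominal assumption and the thresholds are ad hoc, not the full Folk classification used for the maps): depth from the two-way travel time of a multibeam ping, and a crude sediment label from PSA percentages.

```python
SOUND_SPEED = 1500.0  # m/s, assumed nominal speed of sound in seawater

def depth_from_travel_time(two_way_time_s: float) -> float:
    """Water depth from the two-way travel time of an echosounder ping."""
    return SOUND_SPEED * two_way_time_s / 2.0  # sound travels down and back

def simple_sediment_label(pct_gravel, pct_sand, pct_mud):
    """Very simplified label from percentage gravel/sand/mud (not the real Folk scheme)."""
    if pct_gravel >= 30:
        return "gravel-dominated"
    return "sand" if pct_sand >= pct_mud else "mud"

print(depth_from_travel_time(0.12))  # ~90 m of water
print(simple_sediment_label(pct_gravel=2, pct_sand=70, pct_mud=28))
```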
The BGS Seabed Sediments 250k dataset is vector data reflecting the distribution of seabed substrate types of the UK and some of its adjacent waters (the UK Exclusive Economic Zone, EEZ) at 1:250,000 scale. This comprehensive dataset provides a digital compilation of the paper maps published by BGS at the same scale, as well as additional re-interpretations from regional geological studies.

The seabed is commonly covered by sediments that form a veneer or thicker superficial layer of unconsolidated material above the bedrock. These sediments are classified based on their grain size, which reflects the environment in which they were deposited. This information is important to a range of stakeholders, including marine habitat mappers, marine spatial planners and offshore industries (in particular, the dredging and aggregate industries).

This dataset was primarily based on seabed grab samples of the top 0.1 m, combined with cores, dredge samples and sidescan sonar acquired during mapping surveys since the early 1970s. Variations in data density are reflected in the detail of the mapping. The sediment divisions on the map are primarily based on particle size analysis (PSA) of both surface sediment samples and the uppermost sediments taken from shallow cores. Sediments are classified according to the modified Folk triangle classification (Folk, 1954, Journal of Geology, Vol. 62, pp 344–359). The modified Folk diagram and classification used by BGS differs from the original Folk (1954) scheme in that the boundary between 'no gravel' and 'slightly gravelly' is changed from a trace (0.05%) to 1% by weight of particles coarser than -1 φ (2 mm). The boundaries between sediment classifications or types are delineated using sample station particle size analyses and descriptions, seafloor topography derived from shallow geophysical data and, where available, multibeam bathymetry, backscatter and side-scan sonar profiles.

This dataset was produced for use at 1:250,000 scale. These data should not be relied on for local or site-specific geology.
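The practical effect of the modified boundary can be illustrated with a few lines of Python (illustrative only; the percentage refers to the weight of particles coarser than -1 φ, i.e. 2 mm):

```python
def has_gravel_component(gravel_pct: float, modified: bool = True) -> bool:
    """Is a sample 'slightly gravelly' (or more) rather than gravel-free?

    Original Folk (1954): the boundary is a trace (0.05%) of gravel.
    Modified BGS scheme: the boundary is moved to 1% by weight.
    """
    threshold = 1.0 if modified else 0.05
    return gravel_pct >= threshold

sample_gravel_pct = 0.4  # hypothetical sample with 0.4% gravel by weight
print(has_gravel_component(sample_gravel_pct, modified=False))  # True: slightly gravelly under original Folk
print(has_gravel_component(sample_gravel_pct, modified=True))   # False: gravel-free class under the modified scheme
```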
This geomorphic reference data is intended for use as calibration and/or validation data for global-scale, remote sensing image-based classification of coral reef areas. The data is in a vector shapefile format consisting of polygons labelled into one of thirteen coral reef geomorphic classes: Patch Reef, Plateau, No Reef, Outer Reef Flat, Deep Water, Inner Reef Flat, Back Reef Slope, Deep Lagoon, Terrestrial Reef Flat, Shallow Lagoon, Sheltered Reef Slope, Reef Crest and Reef Slope.

Please cite data as: Lyons, M., Canto, R., Borrego-Acevedo, R., Harris, D., Kennedy, E., Kovacs, E., Markey, K., Murray, N., Ordonez Alvarez, A., Phinn, S., Roe, M., Roelfsema, C., Say, C., Tudman, P., Wolff, J., Yuwono, D., Bambric, B., Fox, H., Lieb, Z., Asner, G., Knapp, D., Li, J., Harris, B., Larsen, K. & Rice, K. 2021. Philippines Geomorphic Type Reference Sample. University of Queensland. DOI: https://doi.org/10.6084/m9.figshare.14622771

Links to supplementary information:
Mapping methods overview (https://allencoralatlas.org/methods/)
UQ mapping team web site (https://www.rsrc.org.au/globalreef)
Detailed mapping methods description by Lyons et al. (https://doi.org/10.1002/rse2.157)
Detailed classification scheme description by Kennedy et al. (https://doi.org/10.1038/s41597-021-00958-z)
Detailed reference creation workflow description by Roelfsema et al. (https://doi.org/10.3389/fmars.2021.643381)

Funding: Vulcan Inc. (https://vulcan.com); Great Barrier Reef Foundation (https://www.barrierreef.org/)

Acknowledgement: This work was initiated and funded primarily through Paul Allen Philanthropies and Vulcan Inc. as part of the Allen Coral Atlas. We acknowledge the late Paul Allen and Ruth Gates for their fundamental vision and drive to enable us to work together on this critical reef mapping problem. Project partners providing financial, service and personnel support include: Planet Inc., National Geographic, University of Queensland, Arizona State University, and University of Hawai'i. Significant support has also been provided by Google Inc., Great Barrier Reef Foundation, and Trimble (eCognition). Contributors to establishing and running the project include: Vulcan Inc. [James Deutsch, Lauren Kickam, Paulina Gerstner, Charlie Whiton, Kirk Larsen, Sarah Frias Torres, Kyle Rice, Eldan Goldenberg, Janet Greenlee]; Planet Inc. [Andrew Zolli, Trevor McDonald, Joe Mascaro, Joe Kington]; University of Queensland [Chris Roelfsema, Stuart Phinn, Emma Kennedy, Mitch Lyons, Nicholas Murray, Doddy Yuwono, Dan Harris, Eva Kovacs, Rodney Borrego, Meredith Roe, Jeremy Wolff, Kathryn Markey, Alexandra Ordonez, Chantal Say, Paul Tudman]; Arizona State University [Greg Asner, Dave Knapp, Jiwei Li, Yaping Xu, Nick Fabina, Heather D'Angelo]; National Geographic [Helen Fox, Brianna Bambic, Brian Free, Zoe Lieb]; and Great Barrier Reef Foundation [Petra Lundgren, Kirsty Bevan, Sarah Castine].

Attributions of contributed field data or maps for the reference sample creation are available at: https://allencoralatlas.org/attribution/
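As a small, hedged example of working with the reference polygons (the local file name and the class-label attribute `class` are assumptions; check the shapefile's attribute table for the real field name), the number of polygons per geomorphic class can be tallied with GeoPandas:

```python
import geopandas as gpd

# Hypothetical local path to the downloaded reference shapefile.
reference = gpd.read_file("philippines_geomorphic_reference.shp")

# "class" is an assumed name for the attribute holding the geomorphic label.
counts = reference["class"].value_counts()
print(counts)        # polygons per geomorphic class (Reef Slope, Deep Lagoon, ...)
print(counts.sum())  # total number of labelled reference polygons
```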