21 datasets found
  1. Global 10 x 10-km grids suitable for use in IUCN Red List of Ecosystems...

    • figshare.com
    zip
    Updated May 30, 2023
    Cite
    Nicholas Murray (2023). Global 10 x 10-km grids suitable for use in IUCN Red List of Ecosystems assessments (vector and raster format) [Dataset]. http://doi.org/10.6084/m9.figshare.4653439.v1
    Available download formats: zip
    Dataset updated
    May 30, 2023
    Dataset provided by
    figshare (http://figshare.com/)
    Authors
    Nicholas Murray
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Global 10 x 10-km grid files for use in assessing Criterion B of the IUCN Red List of Ecosystems. Each file consists of a global grid with 5,086,152 individually identified grid cells. Raster data: 10,000 m resolution, 32-bit unsigned integer, World Cylindrical Equal Area projection, IMG format for use in ArcGIS, R, Erdas Imagine, etc. Vector data: World Cylindrical Equal Area projection, Shapefile format.
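    The basic operation these grids support, assigning records to 10 x 10-km cells and counting the occupied ones, can be sketched as follows. This is an illustrative example, not code from the dataset; the function name and toy coordinates are mine, and it assumes coordinates in meters in an equal-area CRS such as World Cylindrical Equal Area.

    ```python
    # Hedged sketch (not from the dataset): count occupied 10 x 10-km grid
    # cells from point records in an equal-area projection, the operation
    # underlying Criterion B extent metrics such as AOO.
    def occupied_cells(points_m, cell_size=10_000):
        """Return the set of (col, row) cells containing at least one point.

        points_m: iterable of (x, y) coordinates in meters (equal-area CRS).
        """
        return {(int(x // cell_size), int(y // cell_size)) for x, y in points_m}

    pts = [(12_500, 3_200), (14_900, 9_999), (25_000, 3_000)]
    cells = occupied_cells(pts)
    # the first two points share one cell; the third falls in a second cell
    ```
    
    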

  2. Copernicus Digital Elevation Model (DEM) for Europe at 3 arc seconds (ca. 90...

    • zenodo.org
    • data.opendatascience.eu
    • +2more
    bin, png, tiff, xml
    Updated Jul 17, 2024
    + more versions
    Cite
    Markus Neteler; Julia Haas; Markus Metz (2024). Copernicus Digital Elevation Model (DEM) for Europe at 3 arc seconds (ca. 90 meter) resolution derived from Copernicus Global 30 meter DEM dataset [Dataset]. http://doi.org/10.5281/zenodo.6211701
    Available download formats: bin, png, tiff, xml
    Dataset updated
    Jul 17, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Markus Neteler; Julia Haas; Markus Metz
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Area covered
    Europe
    Description

    Overview:
    The Copernicus DEM is a Digital Surface Model (DSM) which represents the surface of the Earth including buildings, infrastructure and vegetation. The original GLO-30 provides worldwide coverage at 30 meters (about 1 arc second). Note that ocean areas do not have tiles; there one can assume height values equal to zero. Data is provided as Cloud Optimized GeoTIFFs. The vertical unit for measurement of elevation height is meters.

    The Copernicus DEM for Europe at 3 arc seconds (0:00:03 = 0.00083333333 degrees, ca. 90 meters) in COG format has been derived from the Copernicus DEM GLO-30, mirrored on Open Data on AWS, a dataset managed by Sinergise (https://registry.opendata.aws/copernicus-dem/).

    Processing steps:
    The original Copernicus GLO-30 DEM contains a relevant percentage of tiles with non-square pixels. We created a mosaic map in VRT format and defined within the VRT file the rule to apply cubic resampling while reading the data, i.e. importing them into GRASS GIS for further processing. We chose cubic instead of bilinear resampling since the height-width ratio of non-square pixels is up to 1:5. Hence, artefacts between adjacent tiles in rugged terrain could be minimized:

    gdalbuildvrt -input_file_list list_geotiffs_MOOD.csv -r cubic -tr 0.000277777777777778 0.000277777777777778 Copernicus_DSM_30m_MOOD.vrt

    In order to reduce the spatial resolution to 3 arc seconds, weighted resampling was performed in GRASS GIS (using r.resamp.stats -w), and the pixel values were scaled by 1000 (storing the pixels as integer values) for data volume reduction. In addition, a hillshade raster map was derived from the resampled elevation map (using r.relief, GRASS GIS). Eventually, we exported the elevation and hillshade raster maps in Cloud Optimized GeoTIFF (COG) format, along with SLD and QML style files.
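    The aggregate-then-scale step can be illustrated with a toy array. This is a hedged, simplified stand-in (block averaging with NumPy instead of GRASS r.resamp.stats -w, which area-weights partial cells); the array values and function name are mine. It shows the 3x3 averaging from a 30 m grid to 90 m cells and the meters-times-1000 integer encoding the description states.

    ```python
    import numpy as np

    # Simplified block-average resampling: each 90 m cell is the mean of a
    # 3x3 block of 30 m cells (illustrative; r.resamp.stats -w also weights
    # partially overlapping cells).
    def block_mean(dem, factor=3):
        h, w = dem.shape
        return dem[: h - h % factor, : w - w % factor] \
            .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    dem30 = np.arange(36, dtype=float).reshape(6, 6)      # toy 30 m tile
    dem90 = block_mean(dem30)                             # 2x2 grid of 90 m cells
    dem90_int = np.round(dem90 * 1000).astype(np.int32)   # meters * 1000 as integer
    # decoding back to meters a.s.l.: dem90_int / 1000.0
    ```
    
    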

    Projection + EPSG code:
    Latitude-Longitude/WGS84 (EPSG: 4326)

    Spatial extent:
    north: 82:00:30N
    south: 18N
    west: 32:00:30W
    east: 70E

    Spatial resolution:
    3 arc seconds (approx. 90 m)

    Pixel values:
    meters * 1000 (scaled to Integer; example: value 23220 = 23.220 m a.s.l.)

    Software used:
    GDAL 3.2.2 and GRASS GIS 8.0.0 (r.resamp.stats -w; r.relief)

    Original dataset license:
    https://spacedata.copernicus.eu/documents/20126/0/CSCDA_ESA_Mission-specific+Annex.pdf

    Processed by:
    mundialis GmbH & Co. KG, Germany (https://www.mundialis.de/)

  3. R-Factor for the Conterminous United States

    • catalog.data.gov
    • datasets.ai
    • +1more
    Updated Oct 31, 2024
    + more versions
    Cite
    NOAA Office for Coastal Management (Point of Contact, Custodian) (2024). R-Factor for the Conterminous United States [Dataset]. https://catalog.data.gov/dataset/r-factor-for-the-conterminous-united-states1
    Dataset updated
    Oct 31, 2024
    Dataset provided by
    National Oceanic and Atmospheric Administration (http://www.noaa.gov/)
    Area covered
    Contiguous United States, United States
    Description

    The rainfall-runoff erosivity factor (R-Factor) quantifies the effects of raindrop impact and reflects the amount and rate of runoff associated with the rain. The R-Factor is one of the parameters used by the Revised Universal Soil Loss Equation (RUSLE) to estimate annual rates of erosion. This product is a raster representation of the R-Factor derived from isoerodent maps published in Agriculture Handbook Number 703 (Renard et al., 1997). Lines connecting points of equal rainfall erosivity are called isoerodents. The isoerodents plotted on a map of the conterminous U.S. were digitized, then values between these lines were obtained by linear interpolation. The final R-Factor data are in raster GeoTIFF format at 800 meter resolution in Albers Conic Equal Area, GRS80, NAD83.
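    Along a transect crossing two digitized isoerodent lines, the interpolation step described above reduces to ordinary 1-D linear interpolation. A hedged illustration (the positions and R-Factor values are invented, not from the product):

    ```python
    import numpy as np

    # Toy transect crossing two isoerodents: R-Factor is known on each line,
    # and values between the lines come from linear interpolation (np.interp).
    iso_positions = [0.0, 10.0]   # km along transect where isoerodents cross
    iso_values = [250.0, 300.0]   # R-Factor value on each isoerodent line
    x = np.array([0.0, 2.5, 5.0, 10.0])
    r = np.interp(x, iso_positions, iso_values)
    # r -> [250.0, 262.5, 275.0, 300.0]
    ```
    
    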

  4. Code, data, and Raster and shape files used in the paramo soil carbon...

    • dataverse.harvard.edu
    Updated Oct 18, 2025
    Cite
    Juan Benavides (2025). Code, data, and Raster and shape files used in the paramo soil carbon project [Dataset]. http://doi.org/10.7910/DVN/97RUDG
    Available download formats:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Oct 18, 2025
    Dataset provided by
    Harvard Dataverse
    Authors
    Juan Benavides
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    PÁRAMO SOC MODELING: REPRODUCIBLE WORKFLOW
    ==========================================

    Overview
    --------
    This repository contains two R scripts to (1) fit and validate a spatially-aware Random Forest model for soil organic carbon (SOC) in Colombian paramos, and (2) generate national wall-to-wall SOC predictions and sector-level summaries.

    Scripts
    -------
    1) soilCmodel.R
    - Builds land-cover labels (Disturbed, Forest, Paramo). For modeling, the former "Nosoil" class is collapsed into Disturbed.
    - Extracts rasters to points and clusters points on a 100 m grid to avoid leakage across train/test folds.
    - Runs grouped v-fold spatial cross-validation, tunes RF by inner OOB RMSE, computes diagnostics (OOB, random 5-fold, spatial CV) in SOC space using Duan smearing for unbiased back-transform.
    - Saves the finalized model and artifacts for prediction and reporting.

    2) soilCprediction.R
    - Loads the finalized model and the Duan smearing factor.
    - Assembles the predictor stack, predicts log-SOC, applies smearing, and outputs SOC density in Mg C ha^-1. Pixels flagged as Nosoil are set to 0.
    - Converts density to Mg per cell using true cell area in hectares.
    - Aggregates totals and statistics by paramo sector and land-cover class.
    - Produces figures and CSVs for the paper.

    Directory layout (edit paths in scripts if different)
    -----------------------------------------------------
    geo_dir = .../Paramo carbon map/GEographic
    stats_dir = .../Paramo carbon map/stats2

    Required inputs
    ---------------
    Points (CSV):
    - carbon_site.csv with columns: Longitude, Latitude, CarbonMgHa

    Predictor rasters (aligned to land-cover grid, ~100 m):
    - dem3_100.tif, TPI100.tif, slope100.tif
    - temp2.tiff (mean T), tempmax2.tiff, precip2.tiff, soilmoist2.tiff
    - Cobertura100.tif (grid target)

    Vectors:
    - corine_paramo2.* (CORINE polygons; fields include corinetext, Clasificac)
    - paramos.* (paramo sectors; field NOMBRE_COM)
    - paramos_names.csv (two columns: NOMBRE_COM, Sector) for short plot labels

    CRS expectations:
    - Input points in EPSG:4326
    - Clustering for spatial CV uses EPSG:3116 (MAGNA-SIRGAS / Bogota)
    - Rasters are internally aligned to the Cobertura100.tif grid

    Software requirements
    ---------------------
    Tested with R >= 4.3 and packages: terra, sf, dplyr, tidyr, ranger, rsample, yardstick, vip, ggplot2, purrr, forcats, scales, stringr, bestNormalize (optional)

    Install once in R:
    install.packages(c(
      "terra","sf","dplyr","tidyr","ranger","rsample","yardstick","vip",
      "ggplot2","purrr","forcats","scales","stringr","bestNormalize"
    ))

    Each script starts with:
    suppressPackageStartupMessages({
      library(terra); library(sf); library(dplyr); library(tidyr)
      library(ranger); library(rsample); library(yardstick); library(vip)
      library(ggplot2); library(purrr); library(forcats); library(scales); library(stringr)
    })

    How to run
    ----------
    1) Fit + validate the model:
    Rscript soilCmodel.R
    Outputs (in stats_dir):
    - rf_full.rds (finalized ranger model)
    - smear_full.txt (Duan smearing factor)
    - variable_importance.csv (permutation importance, mean and sd)
    - diagnostics.txt (OOB, random 5-fold, spatial CV metrics)
    - OVP_spatialCV.png (observed vs predicted, pooled folds)
    - imp_bar_RF.png (RF importance with error bars)

    2) Predict wall-to-wall + summarize:
    Rscript soilCprediction.R
    Outputs (in stats_dir):
    - SOC_pred_final_RF_GAM.tif (SOC density, Mg C ha^-1)
    - SOC_totals_by_sector.csv (Tg C by sector x land-cover)
    - SOC_by_sector_LC_Tg_mean_sd.csv (Tg C plus area-weighted mean/sd in Mg C ha^-1)
    - SOC_national_mean_sd_by_LC.csv (national area-weighted mean/sd in Mg C ha^-1)
    - sector_bars_TgC.png (stacked bars by sector using short labels)

    Units
    -----
    - SOC density outputs are in Mg C ha^-1.
    - Totals are in Mg and reported as Tg (Mg / 1e6).
    - Cell areas are computed with terra::cellSize(..., unit="m")/10000 to ensure hectares.

    Modeling notes
    --------------
    - Learner: ranger Random Forest, permutation importance, respect.unordered.factors="partition".
    - Response transform: log or Yeo-Johnson (when enabled), with Duan smearing to remove retransformation bias when returning to SOC space.
    - Spatial CV: grouped v-fold using 100 m clusters to prevent leakage.
    - Land cover: modeling uses three classes (Disturbed includes former Nosoil). In mapping, Nosoil pixels are forced to 0 SOC.

    Troubleshooting
    ---------------
    - If a write fails with "source and target filename cannot be the same", write to a new filename.
    - If sector labels appear misaligned in plots, normalize strings and join short names via paramos_names.csv.
    - If national means look ~100x too small, ensure means are area-weighted over valid pixels only (LC present AND SOC not NA), and that areas are in hectares.
    - If any join fails, confirm the sector name field (NOMBRE_COM) exists in paramos.shp and in paramos_names.csv.

    Reproducibility
    ---------------
    - set.seed(120) is used throughout.
    - All area computations are in hectares.
    - Scripts are deterministic given the same inputs and package versions.
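    The Duan smearing correction the workflow relies on is easy to state concretely. A hedged sketch in Python (not the authors' R code; the observation and prediction values are invented): with residuals r_i = log(y_i) - ŷ_log,i from the fitted model, the smearing factor is mean(exp(r_i)), and the back-transformed prediction is exp(ŷ_log) times that factor, which removes the bias of naively exponentiating log-scale predictions.

    ```python
    import math

    # Duan's smearing estimator: correct the retransformation bias when
    # predictions are made on the log scale but reported in original units.
    def duan_smearing_factor(log_obs, log_pred):
        residuals = [o - p for o, p in zip(log_obs, log_pred)]
        return sum(math.exp(r) for r in residuals) / len(residuals)

    # Toy data: observed SOC (Mg C ha^-1) and log-scale model predictions.
    log_obs = [math.log(v) for v in (80.0, 120.0, 150.0)]
    log_pred = [4.3, 4.8, 5.1]
    s = duan_smearing_factor(log_obs, log_pred)
    soc_pred = math.exp(4.9) * s  # bias-corrected back-transform of a new prediction
    ```
    
    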

  5. SWOT Level 2 Water Mask Raster Image 250m Data Product, Version 2.0

    • catalog.data.gov
    Updated Apr 10, 2025
    + more versions
    Cite
    NASA/JPL/PODAAC (2025). SWOT Level 2 Water Mask Raster Image 250m Data Product, Version 2.0 [Dataset]. https://catalog.data.gov/dataset/swot-level-2-water-mask-raster-image-250m-data-product-version-2-0-190d0
    Dataset updated
    Apr 10, 2025
    Dataset provided by
    NASA/JPL/PODAAC
    Description

    The SWOT Level 2 Water Mask Raster Image 250m Data Product from the Surface Water and Ocean Topography (SWOT) mission provides global surface water elevation and inundation extent derived from high-rate (HR) measurements from the Ka-band Radar Interferometer (KaRIn) on SWOT. SWOT launched on December 16, 2022 from Vandenberg Air Force Base in California into a 1-day repeat orbit for the "calibration" or "fast-sampling" phase of the mission, which completed in early July 2023. After the calibration phase, SWOT entered a 21-day repeat orbit in August 2023 to start the "science" phase of the mission, which is expected to continue through 2025.

    Water surface elevation, area, water fraction, backscatter, and geophysical information are provided in geographically fixed scenes at 250 meter horizontal resolution in Universal Transverse Mercator (UTM) projection, in netCDF-4 file format. On-demand processing is available to users for different resolutions, sampling grids, scene sizes, and file formats.

    This collection is a sub-collection of its parent: https://podaac.jpl.nasa.gov/dataset/SWOT_L2_HR_Raster_2.0

  6. Data and code from "Variable spatiotemporal ungulate behavioral response to...

    • figshare.com
    tiff
    Updated Oct 22, 2024
    Cite
    Sarah L Schooler; Nathan J. Svoboda; Kenneth F. Kellner; Ge Pu; Shannon P. Finnegan; Jerrold L. Belant (2024). Data and code from "Variable spatiotemporal ungulate behavioral response to predation risk" [Dataset]. http://doi.org/10.6084/m9.figshare.24040488.v2
    Available download formats: tiff
    Dataset updated
    Oct 22, 2024
    Dataset provided by
    figshare (http://figshare.com/)
    Authors
    Sarah L Schooler; Nathan J. Svoboda; Kenneth F. Kellner; Ge Pu; Shannon P. Finnegan; Jerrold L. Belant
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data from "Variable spatiotemporal ungulate behavioral response to predation risk"
    Sarah L. Schooler, Nathan J. Svoboda, Kenneth F. Kellner, Ge Pu, Shannon P. Finnegan, Jerrold L. Belant
    Ecosphere. 2024.
    Corresponding author: Sarah L. Schooler; sarahlschooler AT gmail.com

    Description of the data and file structure
    The data files are spatial data in the form of raster files (TIF) used as covariates in habitat suitability and movement models for elk and brown bear. Raster files named #YEAR#_P_prj.tif, #YEAR#_S_prj.tif, and #YEAR#_A_prj.tif are NDVI files for each year-season combination. These are cloud-removed composites generated for each season using Google Earth Engine and Landsat 8 data (code to generate these composites is provided). Raster files elevation.tif and slope.tif are predictor rasters generated from a 30-m digital elevation model in ArcGIS. Raster files coastdist.tif and anadwaterdist.tif are predictor rasters for distance from coasts (from the outline of Afognak and Raspberry islands) and distance from anadromous fish-bearing streams (from the ADFG 2023 Anadromous Waters Catalog). Raster file LCTimber2016_forrisk.tif is NLCD landcover data updated with timber harvest data and recategorized as described in the manuscript. Raster files bearpred_#SEASON#_rast.tif are projected rasters of predicted bear probability of use created in bear_analysis.R and used in the elk analyses. File LCCat_risk.csv is a reference file to convert the numeric landcover raster to named landcover types. File sunrise_set.csv contains sunrise, sunset, and civil twilight times in Kodiak, Alaska, USA (from in-the-sky.org) for time-of-day categorization.

    The code files are either program R code for data formatting, habitat suitability and movement models, and plots for the associated manuscript, or text (.txt) files of the Google Earth Engine code used to create the NDVI composite images.

    Sharing/Access information
    See the associated manuscript for information about how the data were derived.

    Code/Software
    Many packages are required within program R; please install any packages the code requires. Use the R project file (Schooleretal2024_ElkRiskResp) to ensure the code works correctly. Run the code in order:

    NDVI: Use the Google Earth Engine coding environment to run files NDVI_GEE_H, NDVI_GEE_P, and NDVI_GEE_S. You will have to change the dates for each year processed. This step is optional, as the NDVI data files for each season-year are provided.

    Analysis:
    1. (Optional) Clean up and thin bear data, generate random points, create bear habitat selection models, check model fit, create projected bear probability-of-use rasters, and create plots in the manuscript: bear_analysis.R
    2. Clean up and thin elk data, generate random points, create elk habitat selection models, check model fit, create projected probability-of-use rasters, and create plots in the manuscript: elk_habitat_analysis.R
    3. Re-process elk data to extract movement speed, thin data, create elk movement models, check model fit, and create plots in the manuscript: elk_movement_analysis.R

  7. Shrubland vegetation topographic facets of Southern California

    • nde-dev.biothings.io
    • search.dataone.org
    • +2more
    zip
    Updated Jun 23, 2021
    Cite
    Allan Hollander; Emma Underwood (2021). Shrubland vegetation topographic facets of Southern California [Dataset]. http://doi.org/10.25338/B8JW59
    Available download formats: zip
    Dataset updated
    Jun 23, 2021
    Dataset provided by
    University of California, Davis
    Authors
    Allan Hollander; Emma Underwood
    License

    CC0 1.0 Universal: https://spdx.org/licenses/CC0-1.0.html

    Area covered
    Southern California, California
    Description

    To approximate the distribution of shrubland species based on their postfire reproductive strategy (resprouter, seeder, and facultative seeder) across Southern California, we created a raster layer subdividing the landscape into a number of different facet classes. This raster dataset is at 30 meters pixel resolution and contains 12 different landscape facet classes based on vegetation and physiography. Specifically, the facets included several different vegetation types based on the California Wildlife Habitat Relations (WHR) classification (three shrubland categories, annual grasslands, valley-foothill riparian woodland, and ‘other’ vegetation types) which were intersected with aspect (two classes: north or south facing) and topography (summit, ridges, slopes, valleys, flats, and depressions). The combination of factors is intended to capture warmer, more exposed vegetation types dominated by seeder species (occurring on south-facing slopes, summits and ridges) versus cooler, less exposed vegetation types associated with resprouter species (occurring on north-facing slopes, valleys, depressions, and flats).
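    The intersection of vegetation classes with aspect described above, the operation GRASS performs with r.cross later in the workflow, can be sketched with toy arrays. This is my illustration, not the authors' code; the class values are invented. With aspect coded 0 = south-facing and 1 = north-facing, `veg * 2 + aspect` yields one unique code per combination, matching the even/odd (south/north) pattern of the 12-class list below.

    ```python
    import numpy as np

    # Toy cross-classification: one output category per unique combination
    # of vegetation class and aspect class (what r.cross does in GRASS).
    veg = np.array([[0, 1], [2, 2]])     # toy vegetation classes (3 classes)
    aspect = np.array([[0, 0], [1, 0]])  # 0 = south-facing, 1 = north-facing
    facets = veg * 2 + aspect            # unique code per (veg, aspect) pair
    ```
    
    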

    The dataset is a key input into a tool developed for resource managers to aid in the prioritization of restoration activities in shrublands postfire. The tool is available at https://github.com/adhollander/postfire and described in the following technical guide:

    Underwood, Emma C., and Allan D. Hollander. 2019. “Post-Fire Restoration Prioritization for Chaparral Shrublands Technical Guide.” https://github.com/adhollander/postfire/blob/master/Postfire_Restoration_Priorization_Tool_Technical_Guide.pdf

    Methods The following are the GIS processing workflow steps used to create this dataset. A diagram illustrating this workflow is in the attached file collection (SoCal_Veg_Topo_Facets_Workflow.png).

    1) Compile GIS layers. There were two input layers to the GIS workflow, a 30 meter digital elevation model for California (dem30) and a vegetation raster layer of the state from the California Department of Forestry and Fire Protection (fveg15). The 30 meter DEM was downloaded from the USGS National Map (https://www.usgs.gov/core-science-systems/national-geospatial-program/national-map). The vegetation data is the FVEG dataset published in 2015 by the California Department of Forestry and Fire Protection's Fire and Resource Assessment Program (https://frap.fire.ca.gov/media/10894/fveg15_1.zip). This is a 30 meter raster representation of statewide vegetation using the California Wildlife Habitat Relationships vegetation classification system (https://wildlife.ca.gov/Data/CWHR).

    2) Import data into GIS. Both data layers were imported into GRASS 7 for further processing, using a mask of the Southern California study region (encompassing the Angeles, Cleveland, Los Padres, and San Bernardino National Forests) to filter processing to the study footprint.

    3) Calculate aspect for elevation model. Using the command r.slope.aspect, we generated a raster layer (aspect) giving the topographic aspect (0-360 degrees) of slopes across the study region.

    4) Generate north-south aspect layer. Using the command r.mapcalc, we subdivided the aspect layer into north and south-facing slopes through creating a raster layer (nsaspect) with two categories for north and south.

    5) Generate geomorphons for study region. The geomorphon raster layer derives from the dem30m surface and classifies the landscape into 10 discrete landform types, examples being ridges, slopes, hollows, and valleys. The algorithm for geomorphon classification uses a pattern recognition approach based on line-of-sight analysis (Jasiewicz and Stepinski 2013) and was generated using the r.geomorphons extension for GRASS 7.

    6) Merge geomorphons with north-south aspect layer. In this step we combined the north-south aspect layer with the geomorphons layer to create a layer entitled nsgeomorphon2a. In so doing we grouped the geomorphon types spurs, slopes, and hollows into a single “slope” category and assigned these to north-facing slopes and south-facing slopes depending upon the value of the north-south aspect layer.

    7) Regroup merged layer into three groupings. In this step we took the merged nsgeomorphon2a layer and assigned the classes in it to three different physiographic groups, namely 1) flats 2) valleys, depressions, and north-facing slopes/spurs/hollows/footslopes/shoulders and 3) summits and ridges and south-facing slopes/spurs/hollows/footslopes/shoulders. This grouped layer was named nsgeomorphon2d.

    8) Reclass vegetation layer to main habitat types. The vegetation layer fveg15 contains information about many details of the vegetation, including canopy size, canopy cover, and main habitat type. This reclass step extracts the main habitat type into a separate raster named fveg15whr.

    9) Combine vegetation layer with physiography layer. Using the command r.cross, we combined the layers fveg15whr and nsgeomorphon2d into a new layer nsgeoxfvegwhr with a separate category for each combination of the raster values from the two input layers.

    10) Reclass combined layer into small set of groupings. Taking the nsgeoxfvegwhr layer, we recategorized the 196 combinations of raster values into a set of 12 different combinations using the command r.reclass. This layer is named nsgeoxfvegnbclasses. The 12 different classes generated as an output are the following, with their raster values paired with their classes:

    0 Annual grassland: south-facing slopes; summits; ridges

    1 Annual grassland: north-facing slopes; valleys; depressions; flats

    2 Chamise-redshanks chaparral: south-facing slopes; summits; ridges

    3 Chamise-redshanks chaparral: north-facing slopes; valleys; depressions; flats

    4 Mixed or montane chaparral: south-facing slopes; summits; ridges

    5 Mixed or montane chaparral: north-facing slopes; valleys; depressions; flats

    6 Valley-foothill riparian: south-facing slopes; summits; ridges

    7 Valley-foothill riparian: north-facing slopes; valleys; depressions; flats

    8 Coastal scrub: south-facing slopes; summits; ridges

    9 Coastal scrub: north-facing slopes; valleys; depressions; flats

    10 Other: south-facing slopes; summits; ridges

    11 Other: north-facing slopes; valleys; depressions; flats

    11) Export dataset. Using the command r.out.gdal, we exported the nsgeoxfvegnbclasses layer as the raster geotiff file SoCal_Veg_Topo_Facets.tif.

    The GRASS commands used for these 11 steps are below:

    r.in.gdal input="/home/adh/CARangelands/Vegetation/fveg15_11.tif" output="fveg15" memory=300 offset=0

    r.proj input="dem1sec_calif" location="CAllnad83" mapset="statewide" output="dem30m" method="bilinear" memory=300 resolution=30

    r.slope.aspect elevation=dem30m@statewide slope=slope aspect=aspect

    r.mapcalc 'nsaspect = if(aspect <= 180, 1, 2)'

    r.geomorphon --overwrite dem=dem30m@statewide forms=SoCalgeomorphons search=11 skip=4 flat=1 dist=0

    r.mapcalc --overwrite 'nsgeomorphon = if((SoCalgeomorphons@socalNF == 5 ||| SoCalgeomorphons@socalNF == 6 ||| SoCalgeomorphons@socalNF == 7) &&& nsaspect == 1, 11, if(((SoCalgeomorphons@socalNF == 5 ||| SoCalgeomorphons@socalNF == 6 ||| SoCalgeomorphons@socalNF == 7) &&& nsaspect == 2), 12, SoCalgeomorphons@socalNF))'

    r.reclass input=nsgeomorphon2a@socalNF output=nsgeomorphon2d rules=/home/adh/SantaClaraRiver/PostfireRestoration/jupyter/datasets/nsgeomorphon-reclass2d.lut

    r.reclass input="fveg15@statewide" output="fveg15whr" rules="/home/adh/CARangelands/Vegetation/fveg15whr.lut"

    r.cross --overwrite input=fveg15whr@statewide,nsgeomorphon2d@socalNF output=nsgeoxfvegwhr

    r.reclass --overwrite input=nsgeoxfvegwhr@socalNF output=nsgeoxfvegnbclasses rules=/home/adh/SantaClaraRiver/PostfireRestoration/datasets/fvegwhrtonbclasses.lut

    r.out.gdal --overwrite input=nsgeoxfvegnbclasses@socalNF output=SoCal_Veg_Topo_Facets.tif format=GTiff type=Byte createopt=COMPRESS=DEFLATE

  8. Copernicus Digital Elevation Model (DEM) for Europe at 30 meter resolution...

    • data.mundialis.de
    • data.opendatascience.eu
    Updated Feb 23, 2022
    + more versions
    Cite
    (2022). Copernicus Digital Elevation Model (DEM) for Europe at 30 meter resolution derived from Copernicus Global 30 meter dataset [Dataset]. https://data.mundialis.de/geonetwork/srv/search?resolution=30%20meters
    Explore at:
    Dataset updated
    Feb 23, 2022
    Description

    Here we provide a mosaic of the Copernicus DEM 30m for Europe and the corresponding hillshade derived from the GLO-30 public instance of the Copernicus DEM. The CRS is the same as the original Copernicus DEM CRS: EPSG:4326. Note that GLO-30 Public provides limited coverage at 30 meters because a small subset of tiles covering specific countries has not yet been released to the public by the Copernicus Programme. Ocean areas do not have tiles; there one can assume height values equal to zero. Data is provided as Cloud Optimized GeoTIFFs.

    The Copernicus DEM is a Digital Surface Model (DSM) which represents the surface of the Earth including buildings, infrastructure and vegetation. The original GLO-30 provides worldwide coverage at 30 meters (about 1 arc second). The vertical unit for measurement of elevation height is meters.

    The Copernicus DEM for Europe at 30 m in COG format has been derived from the Copernicus DEM GLO-30, mirrored on Open Data on AWS, a dataset managed by Sinergise (https://registry.opendata.aws/copernicus-dem/).

    Processing steps:
    The original Copernicus GLO-30 DEM contains a relevant percentage of tiles with non-square pixels. We created a mosaic map in VRT format (https://gdal.org/drivers/raster/vrt.html) and defined within the VRT file the rule to apply cubic resampling while reading the data, i.e. when importing them into GRASS GIS for further processing. We chose cubic instead of bilinear resampling since the height-width ratio of non-square pixels is up to 1:5; hence, artefacts between adjacent tiles in rugged terrain could be minimized:

    gdalbuildvrt -input_file_list list_geotiffs_MOOD.csv -r cubic -tr 0.000277777777777778 0.000277777777777778 Copernicus_DSM_30m_MOOD.vrt

    The pixel values were scaled by 1000 (storing the pixels as integer values) for data volume reduction. In addition, a hillshade raster map was derived from the resampled elevation map (using r.relief, GRASS GIS). Eventually, we exported the elevation and hillshade raster maps in Cloud Optimized GeoTIFF (COG) format, along with SLD and QML style files.

  9. Wadi Hasa Sample Dataset — GRASS GIS Location

    • zenodo.org
    txt, zip
    Updated Sep 19, 2025
    Cite
    Isaac Ullah; C Michael Barton (2025). Wadi Hasa Sample Dataset — GRASS GIS Location [Dataset]. http://doi.org/10.5281/zenodo.17162040
    Available download formats: txt, zip
    Dataset updated
    Sep 19, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Isaac Ullah; C Michael Barton
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Wadi Hasa Sample Dataset — GRASS GIS Location
    Version 1.0 (2025-09-19)

    Overview
    --------
    This archive contains a complete GRASS GIS *Location* for the Wadi Hasa region (Jordan), including base data and exemplar analyses used in the Geomorphometry chapter. It is intended for teaching and reproducible research in archaeological GIS.

    How to use
    ----------
    1) Unzip the archive into your GRASSDATA directory (or a working folder) and add the Location to your GRASS session.
    2) Start GRASS and open the included workspace (Workspace.gxw) or choose a Mapset to work in.
    3) Set the computational region to the default extent/resolution for reproducibility:
    g.region n=3444220 s=3405490 e=796210 w=733450 nsres=30 ewres=30 -p
    4) Inspect layers as needed:
    g.list type=rast,vector
    r.info map=<raster name>

    Citation & License
    ------------------
    Please cite this dataset as:

    Isaac I. Ullah. 2025. *Wadi Hasa Sample Dataset (GRASS GIS Location)*. Zenodo. https://doi.org/10.5281/zenodo.17162040

    All contents are released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. The original Wadi Hasa survey dataset is available at: https://figshare.com/articles/dataset/Wadi_Hasa_Ancient_Pastoralism_Project/1404216

    Coordinate Reference System
    ---------------------------
    - Projection: UTM, Zone 36N
    - Datum/Ellipsoid: WGS84
    - Units: meter
    - Coordinate system and units are defined in the GRASS Location (PROJ_INFO/UNITS).

    Default Region (computational extent & resolution)
    --------------------------------------------------
    - North: 3444220
    - South: 3405490
    - East: 796210
    - West: 733450
    - Resolution: 30 (NS), 30 (EW)
    - Rows x Cols: 1291 x 2092 (cells: 2700772)
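    The stated grid dimensions follow directly from the extent and resolution; a quick Python sanity check:

```python
# Cross-check the region dimensions implied by the default extent and 30 m resolution.
north, south = 3444220, 3405490
east, west = 796210, 733450
nsres = ewres = 30

rows = (north - south) // nsres   # (3444220 - 3405490) / 30 = 1291
cols = (east - west) // ewres     # (796210 - 733450) / 30 = 2092
cells = rows * cols               # 1291 * 2092 = 2700772

print(rows, cols, cells)
```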

    Directory / Mapset Structure
    ----------------------------
    This Location contains the following Mapsets (data subprojects), each with its own raster/vector layers and attribute tables (SQLite):
    - Boolean_Predictive_Modeling: 8 raster(s), 4 vector(s)
    - ISRIC_soilgrid: 31 raster(s), 0 vector(s)
    - Landsat_Imagery: 3 raster(s), 0 vector(s)
    - Landscape_Evolution_Modeling: 41 raster(s), 0 vector(s)
    - Least_Cost_Analysis: 13 raster(s), 4 vector(s)
    - Machine_Learning_Predictive_Modeling: 70 raster(s), 11 vector(s)
    - PERMANENT: 4 raster(s), 2 vector(s)
    - Sentinel2_Imagery: 4 raster(s), 0 vector(s)
    - Site_Buffer_Analysis: 0 raster(s), 2 vector(s)
    - Terrain_Analysis: 27 raster(s), 2 vector(s)
    - Territory_Modeling: 14 raster(s), 2 vector(s)
    - Trace21k_Paleoclimate_Downscale_Example: 4 raster(s), 2 vector(s)
    - Visibility_Analysis: 11 raster(s), 5 vector(s)

    Data Content (summary)
    ----------------------
    - Total raster maps: 230
    - Total vector maps: 34

    Raster resolutions present:
    - 10 m: 13 raster(s)
    - 30 m: 183 raster(s)
    - 208.01 m: 2 raster(s)
    - 232.42 m: 30 raster(s)
    - 1000 m: 2 raster(s)

    Major content themes include:
    - Base elevation surfaces and terrain derivatives (e.g., DEMs, slope, aspect, curvature, flow accumulation, prominence).
    - Hydrology, watershed, and stream-related layers.
    - Visibility analyses (viewsheds; cumulative viewshed analyses for Nabataean and Roman towers).
    - Movement and cost-surface analyses (isotropic/anisotropic costs, least-cost paths, time-to-travel surfaces).
    - Predictive modeling outputs (boolean/inductive/deductive; regression/classification surfaces; training/test rasters).
    - Satellite imagery products (Landsat NIR/RED/NDVI; Sentinel‑2 bands and RGB composite).
    - Soil and surficial properties (ISRIC SoilGrids 250 m products).
    - Paleoclimate downscaling examples (CHELSA TraCE21k MAT/AP).

    Vectors include:
    - Archaeological point datasets (e.g., WHS_sites, WHNBS_sites, Nabatean_Towers, Roman_Towers).
    - Derived training/testing samples and buffer polygons for modeling.
    - Stream network and paths from least-cost analyses.

    Important notes & caveats
    -------------------------
    - Mixed resolutions: Analyses span 10 m (e.g., Sentinel‑2 composites, some derived surfaces), 30 m (majority of terrain and modeling rasters), ~232 m (SoilGrids products), and 1 km (CHELSA paleoclimate). Set the computational region appropriately (g.region) before processing or visualization.
    - NoData handling: The raw SRTM import (Hasa_30m_SRTM) reports extreme min/max values caused by nodata placeholders. Use the clipped/processed DEMs (e.g., Hasa_30m_clipped_wshed*) and/or set nodata with r.null as needed.
    - Masks: MASK rasters are provided for analysis subdomains where relevant.
    - Attribute tables: Vector attribute data are stored in per‑Mapset SQLite databases (sqlite/sqlite.db) and connected via layer=1.

    Provenance (brief)
    ------------------
    - Primary survey points and site datasets derive from the Wadi Hasa projects (see Figshare record above).
    - Base elevation and terrain derivatives are built from SRTM and subsequently processed/clipped for the watershed.
    - Soil variables originate from ISRIC SoilGrids (~250 m).
    - Paleoclimate examples use CHELSA TraCE21k surfaces (1 km) that are interpolated to higher resolutions for demonstration.
    - Satellite imagery layers are derived from Landsat and Sentinel‑2 scenes.

    Reproducibility & quick commands
    --------------------------------
    - Restore default region: g.region n=3444220 s=3405490 e=796210 w=733450 nsres=30 ewres=30 -p
    - Set region to a raster: g.region raster=<name>

    Change log
    ----------
    - v1.0: Initial public release of the teaching Location on Zenodo (CC BY 4.0).

    Contact
    -------
    For questions, corrections, or suggestions, please contact Isaac I. Ullah

  10. Processed Bathymetry and Sidescan Rasters (GEOTIFF format) derived from...

    • search.dataone.org
    • get.iedadata.org
    • +1more
    Updated Mar 4, 2019
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    IEDA: Marine-Geo Digital Library (2019). Processed Bathymetry and Sidescan Rasters (GEOTIFF format) derived from Interferometric Sonar Data from the Long Island Sound Estuary assembled as part of the LIS:URI Data Compilation [Dataset]. https://search.dataone.org/view/http%3A%2F%2Fget.iedadata.org%2Fmetadata%2Fiso%2F321342
    Explore at:
    Dataset updated
    Mar 4, 2019
    Dataset provided by
    IEDA: Marine-Geo Digital Library
    Area covered
    Description

    This data set was acquired with an Interferometric Sonar assembled as part of the LIS:URI data compilation (Chief Scientist: Dr. John King). These data files are in GeoTIFF (Raster) format, include Bathymetry and Sidescan data, and were processed after data collection. Funding was provided by NSF grant(s): LIS-LISMARC-2012.

  11. SWOT Level 2 Water Mask Raster Image 250m Data Product, Version 2.0 -...

    • data.nasa.gov
    Updated Mar 31, 2025
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    nasa.gov (2025). SWOT Level 2 Water Mask Raster Image 250m Data Product, Version 2.0 - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/swot-level-2-water-mask-raster-image-250m-data-product-version-2-0
    Explore at:
    Dataset updated
    Mar 31, 2025
    Dataset provided by
    NASAhttp://nasa.gov/
    Description

    The SWOT Level 2 Water Mask Raster Image 250m Data Product from the Surface Water Ocean Topography (SWOT) mission provides global surface water elevation and inundation extent derived from high-rate (HR) measurements from the Ka-band Radar Interferometer (KaRIn) on SWOT. SWOT launched on December 16, 2022 from Vandenberg Air Force Base in California into a 1-day repeat orbit for the "calibration" or "fast-sampling" phase of the mission, which completed in early July 2023. After the calibration phase, SWOT entered a 21-day repeat orbit in August 2023 to start the "science" phase of the mission, which is expected to continue through 2025. Water surface elevation, area, water fraction, backscatter, and geophysical information are provided in geographically fixed scenes at 250 meter horizontal resolution in Universal Transverse Mercator (UTM) projection. Available in netCDF-4 file format. On-demand processing is available to users for different resolutions, sampling grids, scene sizes, and file formats. This collection is a sub-collection of its parent: https://podaac.jpl.nasa.gov/dataset/SWOT_L2_HR_Raster_2.0

  12. Land use and cover (LUC) rasters of the São Lourenço River Basin (2002 -...

    • data.niaid.nih.gov
    Updated Mar 4, 2020
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Exavier, Réginal; Zeilhofer, Peter (2020). Land use and cover (LUC) rasters of the São Lourenço River Basin (2002 - 2014) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3685229
    Explore at:
    Dataset updated
    Mar 4, 2020
    Authors
    Exavier, Réginal; Zeilhofer, Peter
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The LUC dataset of the São Lourenço river basin, a major contributing area of the Pantanal wetland, as provided by the 4th edition of the Monitoring of Changes in Land cover and Land Use in the Upper Paraguay River Basin - Brazilian portion - Review Period: 2012 to 2014 (Embrapa Pantanal, Instituto SOS Pantanal, and WWF-Brasil 2015). For the development of the OpenLand R package (tests and vignettes), the original multi-year shapefile was clipped to the extent of the São Lourenço basin, transformed into a 5-layer RasterStack and then saved as an .RDA file which can be loaded into R (R Core Team, 2019). Five LUC maps (2002, 2008, 2010, 2012 and 2014) compose the time series. The study area of approximately 22,400 km2 is located in the Cerrado Savannah biome in the southeast of the Brazilian state of Mato Grosso.

    The category names and colors to be associated with the pixel values follow the conventions given by Instituto SOS Pantanal and WWF-Brasil (2015) (access document here, page 17). The Portuguese legend acronyms were maintained as defined in the original dataset.

    The original legend from SOS Pantanal

    Pixel Value | Legend | Class         | Use               | Category                   | Colour
    2           | Ap     | Anthropogenic | Anthropogenic Use | Cattle farming             | #FFE4B5
    3           | FF     | Natural       | NA                | Forest formation           | #228B22
    4           | SA     | Natural       | NA                | Park savanna               | #00FF00
    5           | SG     | Natural       | NA                | Gramineous savanna         | #CAFF70
    7           | aa     | Anthropogenic | NA                | Anthropogenized vegetation | #EE6363
    8           | SF     | Natural       | NA                | Wooded savanna             | #00CD00
    9           | Agua   | Natural       | NA                | Water bodies               | #436EEE
    10          | Iu     | Anthropogenic | Anthropogenic Use | Urban areas                | #FFAEB9
    11          | Ac     | Anthropogenic | Anthropogenic Use | Crop farming               | #FFA54F
    12          | R      | Anthropogenic | Anthropogenic Use | Reforestation              | #68228B
    13          | Im     | Anthropogenic | Anthropogenic Use | Mining areas               | #636363
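    For scripted reclassification or plotting, the legend can be carried as a lookup table; a minimal Python sketch (values transcribed from the legend above, the dictionary structure itself is hypothetical):

```python
# LUC legend lookup: pixel value -> (acronym, class, category, colour).
LUC_LEGEND = {
    2:  ("Ap",   "Anthropogenic", "Cattle farming",             "#FFE4B5"),
    3:  ("FF",   "Natural",       "Forest formation",           "#228B22"),
    4:  ("SA",   "Natural",       "Park savanna",               "#00FF00"),
    5:  ("SG",   "Natural",       "Gramineous savanna",         "#CAFF70"),
    7:  ("aa",   "Anthropogenic", "Anthropogenized vegetation", "#EE6363"),
    8:  ("SF",   "Natural",       "Wooded savanna",             "#00CD00"),
    9:  ("Agua", "Natural",       "Water bodies",               "#436EEE"),
    10: ("Iu",   "Anthropogenic", "Urban areas",                "#FFAEB9"),
    11: ("Ac",   "Anthropogenic", "Crop farming",               "#FFA54F"),
    12: ("R",    "Anthropogenic", "Reforestation",              "#68228B"),
    13: ("Im",   "Anthropogenic", "Mining areas",               "#636363"),
}

def describe(pixel_value):
    """Return a human-readable label for a LUC pixel value."""
    acronym, luc_class, category, colour = LUC_LEGEND[pixel_value]
    return f"{acronym}: {category} ({luc_class}, {colour})"

print(describe(9))  # Agua: Water bodies (Natural, #436EEE)
```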
  13. Soil and Landscape Grid National Soil Attribute Maps - Depth of Regolith (3"...

    • researchdata.edu.au
    • data.csiro.au
    datadownload
    Updated Aug 28, 2024
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Mike Grundy; Mark Thomas; Ross Searle; John Wilford; Searle, Ross (2024). Soil and Landscape Grid National Soil Attribute Maps - Depth of Regolith (3" resolution) - Release 2 [Dataset]. http://doi.org/10.4225/08/55C9472F05295
    Explore at:
    datadownloadAvailable download formats
    Dataset updated
    Aug 28, 2024
    Dataset provided by
    Commonwealth Scientific and Industrial Research Organisation
    Authors
    Mike Grundy; Mark Thomas; Ross Searle; John Wilford; Searle, Ross
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1900 - Dec 31, 2013
    Area covered
    Description

    This is Version 2 of the Depth of Regolith product of the Soil and Landscape Grid of Australia (produced 2015-06-01).

    The Soil and Landscape Grid of Australia has produced a range of digital soil attribute products. The digital soil attribute maps are in raster format at a resolution of 3 arc sec (~90 x 90 m pixels).

    Attribute definition: The regolith is the in situ and transported material overlying unweathered bedrock.
    - Units: metres
    - Spatial prediction method: data mining using piecewise linear regression
    - Period (temporal coverage, approximate): 1900-2013
    - Spatial resolution: 3 arc seconds (approx. 90 m)
    - Total number of gridded maps for this attribute: 3
    - Number of pixels with coverage per layer: 2007M (49200 * 40800)
    - Total size before compression: about 8 GB
    - Total size after compression: about 4 GB
    - Data license: Creative Commons Attribution 4.0 (CC BY)
    - Variance explained (cross-validation): R^2 = 0.38
    - Target data standard: GlobalSoilMap specifications
    - Format: GeoTIFF

    Lineage: The methodology consisted of the following steps: (i) drillhole data preparation, (ii) compilation and selection of the environmental covariate raster layers and (iii) model implementation and evaluation.

    Drillhole data preparation: Drillhole data was sourced from the National Groundwater Information System (NGIS) database. This spatial database holds nationally consistent information about bores that were drilled as part of the Bore Construction Licensing Framework (http://www.bom.gov.au/water/groundwater/ngis/). The database contains 357,834 bore locations with associated lithology, bore construction and hydrostratigraphy records. This information was loaded into a relational database to facilitate analysis.

    Regolith depth extraction: The first step was to recognise and extract the boundary between the regolith and bedrock within each drillhole record. This was done using a keyword look-up table of bedrock- or lithology-related words from the record descriptions; 1,910 unique descriptors were discovered. Using this list of standardised terms, the drillholes were analysed, and the depth value associated with the word in the description that unequivocally indicated fresh bedrock material was extracted from each record using a tool developed in C#.

    The second step of regolith depth extraction involved removal of drillhole bedrock depth records, deemed necessary because of the "noisiness" in depth records resulting from inconsistencies in drilling and description standards identified in the legacy database.

    On completion of the filtering and removal of outliers, the drillhole database used in the model comprised 128,033 depth sites.

    Selection and preparation of environmental covariates: The environmental correlations style of DSM applies environmental covariate datasets to predict target variables, here regolith depth. Strongly performing environmental covariates operate as proxies for the factors that control regolith formation, including climate, relief, parent material, organisms and time.

    Depth modelling was implemented using the PC-based R statistical software (R Core Team, 2014) and relied on the R Cubist package (Kuhn et al. 2013). To generate modelling uncertainty estimates, the following procedures were followed: (i) random withholding of a subset comprising 20% of the whole depth record dataset for external validation; (ii) bootstrap sampling of the remaining dataset, repeated 100 times, to produce repeated model training datasets. The Cubist model was then run on each of these training sets to produce a unique rule set per set. Repeated model runs using different training sets, a procedure referred to as bagging or bootstrap aggregating, constitute a machine learning ensemble procedure designed to improve the stability and accuracy of the model. The Cubist rule sets generated were then evaluated and applied spatially, calculating a mean predicted value (i.e. the final map). The 5% and 95% confidence intervals were estimated for each grid cell (pixel) in the prediction dataset by combining the variance from the bootstrapping process and the variance of the model residuals. Version 2 differs from Version 1 in that the modelling of depths was performed on the log scale to better conform to the assumptions of normality used in calculating the confidence intervals. The method to estimate the confidence intervals was improved to better represent the full range of variability in the modelling process (Wilford et al., in press).
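    Cubist and the full covariate workflow live in R, but the bootstrap-and-percentile-interval idea described above can be illustrated generically. A minimal Python sketch, using a plain bootstrap of a mean over hypothetical depth values rather than the authors' actual Cubist pipeline:

```python
import random

def bootstrap_intervals(values, n_boot=100, lo=0.05, hi=0.95, seed=42):
    """Resample `values` with replacement n_boot times, compute the mean of each
    resample, and return (mean of means, lo percentile, hi percentile)."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        sample = [rng.choice(values) for _ in values]
        means.append(sum(sample) / len(sample))
    means.sort()
    point = sum(means) / n_boot
    return point, means[int(lo * n_boot)], means[int(hi * n_boot) - 1]

# Hypothetical regolith depths (metres) at a handful of drillholes.
depths = [2.0, 3.5, 1.2, 4.8, 2.9, 3.1, 0.8, 5.5, 2.2, 3.9]
mean_depth, p5, p95 = bootstrap_intervals(depths)
print(round(mean_depth, 2), round(p5, 2), round(p95, 2))
```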

  14. Climate Niche Breadth (CNB)

    • figshare.com
    zip
    Updated Sep 3, 2022
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Neus Nualart (2022). Climate Niche Breadth (CNB) [Dataset]. http://doi.org/10.6084/m9.figshare.20863528.v1
    Explore at:
    zipAvailable download formats
    Dataset updated
    Sep 3, 2022
    Dataset provided by
    figshare
    Figsharehttp://figshare.com/
    Authors
    Neus Nualart
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset contains the raster layers (in .TIF format) and the R scripts used to generate the map of Climate Niche Breadth (CNB). CNB was calculated in R as follows: first, we calculated the standard deviation (SD) of each WorldClim bioclimatic variable; we then normalized all values between 0 and 1 and, finally, summed all SDs for each pixel. Higher values represent broad (climate) niches, whereas lower values represent narrow niches.
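    The normalize-and-sum recipe can be sketched independently of the raster I/O; a minimal Python illustration on hypothetical per-pixel SD values (the actual workflow operates on WorldClim rasters in R):

```python
def min_max_normalize(layer):
    """Rescale one SD layer to the 0-1 range."""
    lo, hi = min(layer), max(layer)
    return [(v - lo) / (hi - lo) for v in layer]

def cnb(sd_layers):
    """Sum the normalized SD layers pixel-wise: higher = broader climate niche."""
    normalized = [min_max_normalize(layer) for layer in sd_layers]
    return [sum(vals) for vals in zip(*normalized)]

# Two hypothetical 4-pixel SD layers (one per bioclimatic variable).
sd_bio1 = [0.0, 2.0, 4.0, 8.0]
sd_bio2 = [1.0, 1.0, 3.0, 5.0]
print(cnb([sd_bio1, sd_bio2]))  # [0.0, 0.25, 1.0, 2.0]
```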

  15. Data from: A dataset to model Levantine landcover and land-use change...

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    zip
    Updated Dec 16, 2023
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Michael Kempf; Michael Kempf (2023). A dataset to model Levantine landcover and land-use change connected to climate change, the Arab Spring and COVID-19 [Dataset]. http://doi.org/10.5281/zenodo.10396148
    Explore at:
    zipAvailable download formats
    Dataset updated
    Dec 16, 2023
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Michael Kempf; Michael Kempf
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Dec 16, 2023
    Area covered
    Levant
    Description

    Overview

    This dataset is the repository for the following paper submitted to Data in Brief:

    Kempf, M. A dataset to model Levantine landcover and land-use change connected to climate change, the Arab Spring and COVID-19. Data in Brief (submitted: December 2023).

    The Data in Brief article contains the supplement information and is the related data paper to:

    Kempf, M. Climate change, the Arab Spring, and COVID-19 - Impacts on landcover transformations in the Levant. Journal of Arid Environments (revision submitted: December 2023).

    Description/abstract

    The Levant region is highly vulnerable to climate change, experiencing prolonged heat waves that have led to societal crises and population displacement. Since 2010, the area has been marked by socio-political turmoil, including the Syrian civil war and, currently, the escalation of the so-called Israeli-Palestinian Conflict, which has strained neighbouring countries like Jordan through the influx of Syrian refugees and increased population vulnerability to governmental decision-making. Jordan, in particular, has seen rapid population growth and significant changes in land-use and infrastructure, leading to over-exploitation of the landscape through irrigation and construction. This dataset uses climate data, satellite imagery, and land cover information to illustrate the substantial increase in construction activity and highlights the intricate relationship between climate change predictions and current socio-political developments in the Levant.

    Folder structure

    The main folder after download contains all data; the following subfolders are stored as zipped files:

    “code” stores the above described 9 code chunks to read, extract, process, analyse, and visualize the data.

    “MODIS_merged” contains the 16-days, 250 m resolution NDVI imagery merged from three tiles (h20v05, h21v05, h21v06) and cropped to the study area, n=510, covering January 2001 to December 2022 and including January and February 2023.

    “mask” contains a single shapefile, which is the merged product of administrative boundaries, including Jordan, Lebanon, Israel, Syria, and Palestine (“MERGED_LEVANT.shp”).

    “yield_productivity” contains .csv files of yield information for all countries listed above.

    “population” contains two files with the same name but different format. The .csv file is for processing and plotting in R. The .ods file is for enhanced visualization of population dynamics in the Levant (Socio_cultural_political_development_database_FAO2023.ods).

    “GLDAS” stores the raw data of the NASA Global Land Data Assimilation System datasets that can be read, extracted (variable name), and processed using code “8_GLDAS_read_extract_trend” from the respective folder. One folder contains data from 1975-2022 and a second the additional January and February 2023 data.

    “built_up” contains the landcover and built-up change data from 1975 to 2022. This folder is subdivided into two subfolders, which contain the raw data and the already processed data: “raw_data” contains the unprocessed datasets and “derived_data” stores the cropped built-up datasets at 5-year intervals, e.g., “Levant_built_up_1975.tif”.

    Code structure

    1_MODIS_NDVI_hdf_file_extraction.R


    This is the first code chunk; it refers to the extraction of MODIS data from the .hdf file format. The following packages must be installed and the raw data must be downloaded using a simple mass downloader, e.g., from Google Chrome. Packages: terra. Download MODIS data after registration from: https://lpdaac.usgs.gov/products/mod13q1v061/ or https://search.earthdata.nasa.gov/search (MODIS/Terra Vegetation Indices 16-Day L3 Global 250m SIN Grid V061, last accessed 9th of October 2023). The code reads a list of files, extracts the NDVI, and saves each file to a single .tif file with the indication “NDVI”. Because the study area is quite large, we have to load three different (spatial) time series and merge them later. Note that the time series are temporally consistent.


    2_MERGE_MODIS_tiles.R


    In this code, we load and merge the three different stacks to produce a large and consistent time series of NDVI imagery across the study area. We further use the gtools package to load the files in numerical order (1, 2, 3, 4, 5, 6, etc.). Here, we have three stacks, of which we merge the first two (stack 1, stack 2) and store the result. We then merge this stack with stack 3. We produce single files named NDVI_final_*consecutivenumber*.tif. Before saving the final output of single merged files, create a folder called “merged” and set the working directory to this folder, e.g., setwd("your directory_MODIS/merged").


    3_CROP_MODIS_merged_tiles.R


    Now we want to crop the derived MODIS tiles to our study area. We are using a mask, which is provided as a .shp file in the repository, named "MERGED_LEVANT.shp". We load the merged .tif files and crop the stack with the vector. Saving to individual files, we name them “NDVI_merged_clip_*consecutivenumber*.tif”. We have now produced single cropped NDVI time series datasets from MODIS.
    The repository provides the already clipped and merged NDVI datasets.


    4_TREND_analysis_NDVI.R


    Now, we want to perform trend analysis on the derived data. The data we load are tricky, as they contain a 16-day return period across a year for a period of 22 years. Growing season sums contain MAM (March-May), JJA (June-August), and SON (September-November). December is represented as a single file, which means that the period DJF (December-February) is represented by 5 images instead of 6. For the last DJF period (December 2022), the data from January and February 2023 can be added. The code selects the respective images from the stack, depending on which period is under consideration. From these stacks, individual annually resolved growing season sums are generated and the slope is calculated. We can then extract the p-values of the trend and characterize all values with a high confidence level (0.05). Using the ggplot2 package and the melt function from the reshape2 package, we can create a plot of the reclassified NDVI trends together with a local smoother (LOESS) of value 0.3.
    To increase comparability and understand the amplitude of the trends, z-scores were calculated and plotted, which show the deviation of the values from the mean. This has been done for the NDVI values as well as the GLDAS climate variables as a normalization technique.
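    The z-score normalization used for comparability is a standard deviation-from-mean rescaling; a minimal Python sketch with hypothetical values:

```python
def z_scores(series):
    """Standardize a series: (value - mean) / standard deviation."""
    n = len(series)
    mean = sum(series) / n
    sd = (sum((v - mean) ** 2 for v in series) / n) ** 0.5
    return [(v - mean) / sd for v in series]

ndvi_sums = [10.0, 12.0, 14.0, 16.0]  # hypothetical growing-season NDVI sums
print([round(z, 2) for z in z_scores(ndvi_sums)])  # [-1.34, -0.45, 0.45, 1.34]
```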


    5_BUILT_UP_change_raster.R


    Let us look at the landcover changes now. We are working with the terra package and get raster data from here: https://ghsl.jrc.ec.europa.eu/download.php?ds=bu (last accessed 03. March 2023, 100 m resolution, global coverage). Here, one can download the temporal coverage that is aimed for and reclassify it using the code after cropping to the individual study area. Here, I summed up different rasters to characterize the built-up change in continuous values between 1975 and 2022.


    6_POPULATION_numbers_plot.R


    For this plot, one needs to load the .csv-file “Socio_cultural_political_development_database_FAO2023.csv” from the repository. The ggplot script provided produces the desired plot with all countries under consideration.


    7_YIELD_plot.R


    In this section, we are using the country productivity data from the supplement in the repository “yield_productivity” (e.g., "Jordan_yield.csv"). Each of the single-country yield datasets is plotted with ggplot and combined using the patchwork package in R.


    8_GLDAS_read_extract_trend


    The last code chunk provides the basis for the trend analysis of the climate variables used in the paper. The raw data can be accessed at https://disc.gsfc.nasa.gov/datasets?keywords=GLDAS%20Noah%20Land%20Surface%20Model%20L4%20monthly&page=1 (last accessed 9th of October 2023). The raw data come in .nc file format and various variables can be extracted using the [“^a variable name”] command on the spatraster collection. Each time you run the code, this variable name must be adjusted to meet the requirements for the variables (see this link for abbreviations: https://disc.gsfc.nasa.gov/datasets/GLDAS_CLSM025_D_2.0/summary, last accessed 9th of October 2023; or the respective code chunk when reading a .nc file with the ncdf4 package in R), or run print(nc) from the code, or use names() on the spatraster collection.
    Choosing one variable, the code uses the MERGED_LEVANT.shp mask from the repository to crop and mask the data to the outline of the study area.
    From the processed data, trend analysis are conducted and z-scores were calculated following the code described above. However, annual trends require the frequency of the time series analysis to be set to value = 12. Regarding, e.g., rainfall, which is measured as annual sums and not means, the chunk r.sum=r.sum/12 has to be removed or set to r.sum=r.sum/1 to avoid calculating annual mean values (see other variables). Seasonal subset can be calculated as described in the code. Here, 3-month subsets were chosen for growing seasons, e.g. March-May (MAM), June-July (JJA), September-November (SON), and DJF (December-February, including Jan/Feb of the consecutive year).
    From the data, mean values over the 48 consecutive years are calculated and trend analyses are performed as described above. In the same way, p-values are extracted and values at the 95% confidence level are marked with dots on the raster plot. This analysis can be performed with a much longer time series, other variables, and different spatial extents across the globe thanks to the availability of the GLDAS variables.
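    The caveat about annual sums versus annual means (the r.sum=r.sum/12 adjustment) is easy to see with twelve monthly values; a minimal Python sketch with hypothetical rainfall figures:

```python
# Hypothetical monthly rainfall totals (mm) for one year.
monthly_rain = [50, 40, 30, 10, 5, 0, 0, 2, 8, 25, 45, 60]

annual_sum = sum(monthly_rain)        # appropriate for rainfall (annual total)
annual_mean = sum(monthly_rain) / 12  # appropriate for e.g. temperature

print(annual_sum, round(annual_mean, 2))  # 275 22.92
```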

  16. World Countries (shapefile/raster): Natural Earth

    • kaggle.com
    zip
    Updated Nov 30, 2021
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    GeorgeAM (2021). World Countries (shapefile/raster): Natural Earth [Dataset]. https://www.kaggle.com/datasets/georgeam/world-countries-shapefile-natural-earth-data/code
    Explore at:
    zip(777833 bytes)Available download formats
    Dataset updated
    Nov 30, 2021
    Authors
    GeorgeAM
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Area covered
    World
    Description

    Context

    When I started exploring how to create interactive maps (using the leaflet() package in R), I came across this free dataset (shapefile format) that contains the geographical coordinates (polygons) for all the countries in the world. I thought it would be nice to share this with the Kaggle community.

    Content

    The .zip folder contains all the necessary files needed for the shapefile data to work properly on your computer. If you are new to using the shapefile format, please see the information provided below:

    https://en.wikipedia.org/wiki/Shapefile "The shapefile format stores the data as primitive geometric shapes like points, lines, and polygons. These shapes, together with data attributes that are linked to each shape, create the representation of the geographic data. The term "shapefile" is quite common, but the format consists of a collection of files with a common filename prefix, stored in the same directory. The three mandatory files have filename extensions .shp, .shx, and .dbf. The actual shapefile relates specifically to the .shp file, but alone is incomplete for distribution as the other supporting files are required. "
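    Per the quoted description, a shapefile is only complete for distribution when its mandatory sidecar files travel with the .shp; a minimal Python check (function and variable names are hypothetical):

```python
from pathlib import Path

# The three mandatory file extensions every distributable shapefile must include.
MANDATORY = (".shp", ".shx", ".dbf")

def missing_components(shp_path):
    """Return the mandatory shapefile extensions missing next to `shp_path`."""
    base = Path(shp_path).with_suffix("")
    return [ext for ext in MANDATORY if not base.with_suffix(ext).exists()]
```

Running `missing_components("countries.shp")` against an unzipped folder returns an empty list when the set is complete, or the extensions that still need to be copied alongside the .shp.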

    Acknowledgements

    Made with Natural Earth. Free vector and raster map data @ naturalearthdata.com.

  17. HECRAS_Sims_Inundation_Depth_ByQ

    • dataverse.tdl.org
    zip
    Updated Jun 26, 2020
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Cesar R Castillo; Cesar R Castillo (2020). HECRAS_Sims_Inundation_Depth_ByQ [Dataset]. http://doi.org/10.18738/T8/4V9QSJ
    Explore at:
    zip(2013948), zip(2662149), zip(41610002), zip(6210844), zip(90811300), zip(107741489), zip(46573831), zip(82835645), zip(97498263), zip(5026224), zip(23193989), zip(34900572), zip(4453134), zip(3260287), zip(9106348), zip(29439505), zip(60747493), zip(52077986), zip(105848461), zip(3697083), zip(14218325), zip(69340985), zip(75465474)Available download formats
    Dataset updated
    Jun 26, 2020
    Dataset provided by
    Texas Data Repository
    Authors
    Cesar R Castillo; Cesar R Castillo
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Geospatial rasters in GeoTIFF format of the depth of inundation for the 23 steady HEC-RAS simulations that span the historical record for the Mission River. The river discharge (Q) values range from 3 to 2186 cubic meters per second (cms). The Q value associated with each raster dataset is in the filename.
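    Since each filename encodes its Q value, the discharge can be recovered programmatically; a minimal Python sketch (the exact filename pattern is an assumption here — the parser simply takes the first number in the name):

```python
import re

def discharge_from_filename(name):
    """Extract the first number in a raster filename as discharge in cms."""
    match = re.search(r"(\d+(?:\.\d+)?)", name)
    return float(match.group(1)) if match else None

# Hypothetical filename following the pattern described in the record.
print(discharge_from_filename("MissionRiver_depth_Q2186cms.tif"))  # 2186.0
```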

  18. Potential Natural Vegetation of Eastern Africa (Burundi, Ethiopia, Kenya,...

    • data.niaid.nih.gov
    • nde-dev.biothings.io
    Updated May 10, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Lillesø, Jens-Peter Barnekow; van Breugel, Paulo; Kindt, Roeland; Bingham, Mike; Demissew, Sebsebe; Dudley, Cornell; Friis, Ib; Gachathi, Francis; Kalema, James; Mbago, Frank; Minani, Vedaste; Moshi, Heriel; Mulumba, John; Namaganda, Mary; Ndangalasi, Henry; Ruffo, Christopher; Jamnadass, Ramni; Graudal, Lars (2024). Potential Natural Vegetation of Eastern Africa (Burundi, Ethiopia, Kenya, Malawi, Rwanda, Tanzania, Uganda and Zambia): raster and vector GIS files for each country [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_11125644
    Explore at:
    Dataset updated
    May 10, 2024
    Dataset provided by
    University of Copenhagen
    National Agricultural Research Organisation
    World Agroforestry Centre
    University of Copenhagen
    Addis Ababa University College of Natural Sciences
    HAS green academy
    University of Dar es Salaam
    Makerere University
    Authors
    Lillesø, Jens-Peter Barnekow; van Breugel, Paulo; Kindt, Roeland; Bingham, Mike; Demissew, Sebsebe; Dudley, Cornell; Friis, Ib; Gachathi, Francis; Kalema, James; Mbago, Frank; Minani, Vedaste; Moshi, Heriel; Mulumba, John; Namaganda, Mary; Ndangalasi, Henry; Ruffo, Christopher; Jamnadass, Ramni; Graudal, Lars
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    East Africa, Africa, Malawi, Burundi, Zambia, Ethiopia, Tanzania, Uganda, Kenya, Rwanda
    Description

    The map of potential natural vegetation of eastern Africa (V4A) gives the distribution of potential natural vegetation in Ethiopia, Kenya, Tanzania, Uganda, Rwanda, Burundi, Malawi and Zambia.

    The map is based on national and local vegetation maps constructed from botanical field surveys - mainly carried out in the two decades after 1950 - in combination with input from national botanical experts. Potential natural vegetation (PNV) is defined as “vegetation that would persist under the current conditions without human interventions”. As such, it can be considered a baseline or null model for assessing the vegetation that could be present in a landscape under the current climate and edaphic conditions, and it can be used as an input to model vegetation distribution under a changing climate.

    Vegetation types are defined by their tree species composition, and the documentation of the maps thus includes the potential distribution of more than a thousand tree and shrub species; see the documentation (https://vegetationmap4africa.org/species.html).

    The map distinguishes 48 vegetation types, divided into four main vegetation groups: 16 forest types, 15 woodland and wooded grassland types, 5 bushland and thicket types and 12 other types. The map is available in various formats: for the online version, see https://vegetationmap4africa.org/vegetation_map.html, and for PDF versions of the map, see the documentation (https://vegetationmap4africa.org/documentation.html). Version 2.0 of the potential natural vegetation map and the woody species selection tool was published in 2015 (https://vegetationmap4africa.org/docs/versionhistory/). The original data layers include country-specific vegetation types to maintain the maximum level of information available. This map might be most suitable when carrying out analysis at the national or sub-national level.

    When using V4A in your work, cite the publication: Lillesø, J-P.B., van Breugel, P., Kindt, R., Bingham, M., Demissew, S., Dudley, C., Friis, I., Gachathi, F., Kalema, J., Mbago, F., Minani, V., Moshi, H., Mulumba, J., Namaganda, M., Ndangalasi, H., Ruffo, C., Jamnadass, R. & Graudal, L. 2011, Potential Natural Vegetation of Eastern Africa (Ethiopia, Kenya, Malawi, Rwanda, Tanzania, Uganda and Zambia). Volume 1: The Atlas. Forest & Landscape, University of Copenhagen. 155 p. (Forest & Landscape Working Papers; 61), as well as this repository using the DOI.

    The development of V4A was mainly funded by the Rockefeller Foundation and supported by the University of Copenhagen.

    If you want to use the potential natural vegetation map of eastern Africa for your analysis, you can download the spatial data layers in raster format as well as in vector format from this repository.

    A simplified version of the map can be found on Figshare. That version aggregates country-specific vegetation types into regional types, which might be the better option when doing regional-level assessments.
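Once the categorical raster layer is loaded (e.g. with rasterio or GDAL), tabulating the area covered by each vegetation type is a simple counting exercise. A minimal Python sketch, using placeholder class codes and a placeholder nodata value rather than the real V4A legend:

```python
import numpy as np

def class_areas(raster: np.ndarray, cell_area_km2: float, nodata: int = 0) -> dict:
    """Tabulate the area covered by each class in a categorical raster.

    `raster` is a 2-D integer array of class codes (as would be read from
    a GeoTIFF); cells equal to `nodata` are skipped. The class codes and
    nodata value here are illustrative, not the actual V4A legend.
    """
    codes, counts = np.unique(raster, return_counts=True)
    return {int(c): float(n) * cell_area_km2
            for c, n in zip(codes, counts) if c != nodata}
```

For an equal-area projection, `cell_area_km2` is constant across the grid, which is what makes this single multiplication valid.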

  19. FGARA Digital Soil Mapping Output - Electrical Conductivity of Soil Surface

    • data.csiro.au
    • researchdata.edu.au
    Updated Feb 19, 2014
    Cite
    Rebecca Bartley; Mark Thomas; David Clifford; Seonaid Philip; Dan Brough; Ben Harms; Reanna Willis; Linda Gregory; Mark Glover; Keith Moodie; Mark Sugars; Lauren Eyre; Doug Smith; Warren Hicks; Cuan Petheram (2014). FGARA Digital Soil Mapping Output - Electrical Conductivity of Soil Surface [Dataset]. http://doi.org/10.4225/08/5304247BB00A8
    Explore at:
    Dataset updated
    Feb 19, 2014
    Dataset provided by
    CSIRO (http://www.csiro.au/)
    Authors
    Rebecca Bartley; Mark Thomas; David Clifford; Seonaid Philip; Dan Brough; Ben Harms; Reanna Willis; Linda Gregory; Mark Glover; Keith Moodie; Mark Sugars; Lauren Eyre; Doug Smith; Warren Hicks; Cuan Petheram
    License

    CSIRO Data Licence: https://research.csiro.au/dap/licences/csiro-data-licence/

    Time period covered
    Sep 1, 2013 - Present
    Area covered
    Dataset funded by
    CSIRO (http://www.csiro.au/)
    Office of Northern Australia
    Queensland Department of Natural Resources and Mines
    Queensland Department of Science, Information Technology, Innovation and the Arts (DSITIA)
    Description

    Electrical conductivity of the soil surface is one of 19 soil attributes chosen to underpin the land suitability assessment of the Flinders and Gilbert Agricultural Resource Assessment (FGARA) project through the digital soil mapping (DSM) process. This raster data (in GeoTIFF format) represents a modelled surface of electrical conductivity of the soil surface (<0.10 m), measured in dS/m (decisiemens per metre), derived from measured site data and environmental covariates. The data are used in assessing salinity, which affects soil water-holding capacity. The attribute data file is named "ECPredictions.tif". Also included are data reflecting the confidence of the main dataset; these files follow the naming convention "EC_SD.tif", where "SD" stands for "standard deviation". The DSM process is described in the technical report: Bartley R, Thomas MF, Clifford D, Phillip S, Brough D, Harms D, Willis R, Gregory L, Glover M, Moodie K, Sugars M, Eyre L, Smith DJ, Hicks W and Petheram C (2013) Land suitability: technical methods. A technical report to the Australian Government for the Flinders and Gilbert Agricultural Resource Assessment (FGARA) project, CSIRO. This raster data provides improved soil information to identify opportunities and promote detailed investigation for a range of sustainable development options, and was created within the "Land Suitability" component of the FGARA project. Lineage: This data has been created from a range of inputs and processing steps. Broadly, the steps were to:

    1. Collate existing data (data related to climate, topography, soils, natural resources, remote sensing etc., in various formats: reports, spatial vector, spatial raster etc.).
    2. Select additional soil and attribute site data by the Latin hypercube statistical sampling method applied across the covariate space.
    3. Carry out fieldwork to collect additional soil and attribute data and to understand geomorphology and landscapes.
    4. Build models from selected input data and covariate data using predictive learning via rule ensembles in the RuleFit3 software.
    5. Create the electrical conductivity of the soil surface DSM key attribute output data.

    DSM is the creation and population of a geo-referenced database, generated using field and laboratory observations coupled with environmental data through quantitative relationships. It applies pedometrics: the use of mathematical and statistical models that combine information from soil observations with information contained in correlated environmental variables, remote sensing images and some geophysical measurements. Quality assessment of the attribute data is mapped spatially as a function of the model output by evaluating the rigour of the DSM attribute data using non-parametric bootstrapping of the DSM modelling. For more information refer to "Land suitability: technical methods. A technical report to the Australian Government for the Flinders and Gilbert Agricultural Resource Assessment (FGARA) project".
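Step 2 of the lineage (Latin hypercube sampling across covariate space) can be sketched with SciPy's quasi-Monte Carlo module. The covariate names and bounds below are purely illustrative, not those used by FGARA:

```python
import numpy as np
from scipy.stats import qmc

def lhs_sites(n_sites: int, covariate_bounds: dict, seed: int = 0) -> np.ndarray:
    """Draw candidate field-sampling sites by Latin hypercube sampling
    across covariate space. `covariate_bounds` maps covariate name to a
    (low, high) tuple; returns an (n_sites, n_covariates) array.
    """
    names = list(covariate_bounds)
    lo = [covariate_bounds[k][0] for k in names]
    hi = [covariate_bounds[k][1] for k in names]
    sampler = qmc.LatinHypercube(d=len(names), seed=seed)
    unit = sampler.random(n=n_sites)   # stratified points in the unit hypercube
    return qmc.scale(unit, lo, hi)     # rescale to the covariate ranges
```

The stratification guarantees that each covariate's range is evenly covered, which is the reason the method is preferred over simple random sampling for selecting soil observation sites.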

  20. Evidence-based guidelines for developing automated conservation assessment...

    • data.niaid.nih.gov
    Updated Jun 5, 2021
    Cite
    Walker, Barnaby E.; Leão, Tarciso C.C.; Bachman, Steven P.; Lucas, Eve; Nic Lughadha, Eimear (2021). Evidence-based guidelines for developing automated conservation assessment methods (script outputs) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4899924
    Explore at:
    Dataset updated
    Jun 5, 2021
    Dataset provided by
    Royal Botanic Gardens, Kew, London, UK
    Authors
    Walker, Barnaby E.; Leão, Tarciso C.C.; Bachman, Steven P.; Lucas, Eve; Nic Lughadha, Eimear
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Script outputs for the paper "Evidence-based guidelines for developing automated conservation assessment methods".

    The code used to generate these outputs can be found on GitHub.

    To use these outputs, download the code and this dataset, then extract the dataset into the project folder. Some outputs are in the RData format, including all of the trained models. To view these files in R you may need to install the packages listed in the README of the GitHub project.

    The outputs are arranged in this file structure:

    output

    cleaned_occurrences: CSV files containing the GBIF ID of all occurrence records retained after each cleaning step, and the IPNI ID of the species they relate to. Generated by the script 05_clean_occurrences.R.

    explanations: SHapley Additive exPlanations (SHAP) for an example set of predictions. Generated by the script 08_calculate_explanations.R.

    model_results: CSV files with the evaluation results for each model, on each study group, after each cleaning step. There are results for the method performance, learning curves, and permutation importance (random forest models only), as well as predictions for test sets and unassessed species. Generated by the script 07_evaluate_methods.R.

    models: RData files containing the trained models, generated by the script 07_evaluate_methods.R.

    name_matching: CSV files with the results of matching IUCN Red List assessment and GBIF names to WCVP taxonomy, as well as JSON files used to manually resolve ambiguous and missing matches. Generated by the scripts 02_collate_species.R and 03_process_occurrences.R.

    predictors: CSV files with species-level predictors calculated from the cleaned occurrence files, ready for input into automated assessment methods. Generated by the script 06_prepare_predictors.R.

    rasters: Processed raster files used to calculate species-level predictors. Generated by the script 01_process_rasters.R.

    results: CSV files of summarised results, generated by the script 09_summarise_results.R.

    {group}_distributions.csv: CSV files with the distribution for species in each study group, downloaded from POWO by the script 02_collate_species.R.

    {group}-{source}_species-list.csv: The list of species for each study group along with their IUCN Red List category if they have been assessed, generated by the script 02_collate_species.R. The 'source' refers to whether the assessments were from the IUCN Red List or the Sampled Red List Index.

    {group}-GBIF_occurrences.csv: The occurrence records for each species group, downloaded from GBIF. Generated by the script 03_process_occurrences.R.

    {group}-GBIF_labelled-occurrences.csv: The occurrence records for each species group labelled with values extracted at their coordinates from the rasters in the rasters folder. Generated by the script 04_annotate_points.R.
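The labelling step (as performed by 04_annotate_points.R) amounts to mapping each occurrence coordinate onto a raster cell and reading the value there. A rough Python equivalent for a north-up grid, using a made-up toy raster rather than the project's actual data:

```python
import numpy as np

def extract_at_points(raster: np.ndarray, origin, cell_size, lons, lats):
    """Label occurrence points with the raster value at their coordinates.

    Assumes a north-up raster: `origin` is the (lon, lat) of the top-left
    corner and `cell_size` the square pixel size in degrees. Points that
    fall outside the grid get np.nan.
    """
    x0, y0 = origin
    cols = np.floor((np.asarray(lons) - x0) / cell_size).astype(int)
    rows = np.floor((y0 - np.asarray(lats)) / cell_size).astype(int)
    values = np.full(cols.shape, np.nan)
    inside = (rows >= 0) & (rows < raster.shape[0]) & \
             (cols >= 0) & (cols < raster.shape[1])
    values[inside] = raster[rows[inside], cols[inside]]
    return values
```

In practice a library such as rasterio handles the coordinate transform, but the row/column arithmetic above is what any such extraction reduces to.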

