40 datasets found
  1. Tutorial for NetCDF climate data retrieval and model integration

    • dataone.org
    • hydroshare.org
    • +2more
    Updated Dec 5, 2021
    Cite
    Christina Bandaragoda; Jimmy Phuong (2021). Tutorial for NetCDF climate data retrieval and model integration [Dataset]. https://dataone.org/datasets/sha256%3A01e446404092bdcebd82469ba4ad3653a87530cde60581284d1eb36d28dd42b2
    Explore at:
    Dataset updated
    Dec 5, 2021
    Dataset provided by
    Hydroshare
    Authors
    Christina Bandaragoda; Jimmy Phuong
    Description

    Hydrological and meteorological information can help characterize environmental conditions and the risk factors facing ecosystems and their inhabitants. Because observation sampling is limited, gridded datasets provide modeled information, derived from available observations and known process relations, for areas where direct data collection is infeasible. Even where such datasets are available, users face barriers to use: how to access, acquire, and analyze data for small watershed areas when the datasets were produced for large, continental-scale processes. In this tutorial, we introduce the Observatory for Gridded Hydrometeorology (OGH) to resolve these hurdles in a use case that retrieves and processes NetCDF gridded datasets, interprets the findings, and applies a secondary modeling framework (Landlab).

    LEARNING OBJECTIVES
    - Familiarize with data management, metadata management, and analyses with gridded data
    - Inspect and problem-solve with Python libraries
    - Explore data architecture and processes
    - Learn about the OGH Python library
    - Discuss conceptual data engineering and science operations

    Use-case operations:
    1. Prepare the computing environment
    2. Get the list of grid cells
    3. Retrieve NetCDF files and clip them to a spatial extent
    4. Extract NetCDF metadata and convert the NetCDFs to 1D ASCII time-series files
    5. Visualize the average monthly total precipitation
    6. Apply summary values as modeling inputs
    7. Visualize modeling outputs
    8. Save results in a new HydroShare resource
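
    As an illustration of steps 3 and 5, here is a minimal sketch (not part of the original tutorial; the file name, variable name, and bounding box are hypothetical) that clips a gridded NetCDF to a watershed extent with xarray and plots the mean monthly total precipitation:

    ```python
    import xarray as xr

    # Open a (hypothetical) gridded daily precipitation file
    ds = xr.open_dataset("daily_precip.nc")  # assumed variable: "precip" in mm/day

    # Clip to a small watershed bounding box (adjust slice order to the file's coordinate ordering)
    subset = ds.sel(lat=slice(47.0, 48.0), lon=slice(-122.5, -121.0))

    # Monthly totals per grid cell, then the mean monthly total averaged over the extent
    monthly_totals = subset["precip"].resample(time="1MS").sum()
    climatology = monthly_totals.groupby("time.month").mean("time")
    climatology.mean(dim=["lat", "lon"]).plot()
    ```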

    For inquiries, issues, or to contribute to development, please refer to https://github.com/freshwater-initiative/Observatory

  2. Characteristic parameters extracted from the Jarkus dataset using the Jarkus...

    • figshare.com
    • data.4tu.nl
    zip
    Updated Jun 5, 2023
    Cite
    Christa van IJzendoorn (2023). Characteristic parameters extracted from the Jarkus dataset using the Jarkus Analysis Toolbox [Dataset]. http://doi.org/10.4121/14514213.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 5, 2023
    Dataset provided by
    4TU.ResearchData
    Authors
    Christa van IJzendoorn
    License

    https://www.gnu.org/licenses/gpl-3.0.html

    Description

    This dataset presents the output of the application of the Jarkus Analysis Toolbox (JAT) to the Jarkus dataset. The Jarkus dataset is one of the most elaborate coastal datasets in the world and consists of coastal profiles of the entire Dutch coast, spaced about 250-500 m apart, which have been measured yearly since 1965. Different available definitions for extracting characteristic parameters from coastal profiles were collected and implemented in the JAT. The characteristic parameters allow stakeholders (e.g. scientists, engineers and coastal managers) to study the spatial and temporal variations in parameters like dune height, dune volume, dune foot, beach width and closure depth. This dataset includes a netCDF file (on the OPeNDAP server, see data link) that contains all characteristic parameters through space and time, and a distribution plot that gives an overview of each characteristic parameter. The Jarkus Analysis Toolbox and all scripts that were used to extract the characteristic parameters and create the distribution plots are available through GitHub (https://github.com/christavanijzendoorn/JAT). Example 5 included in the JAT provides a Python script that shows how to load and work with the netCDF file. Documentation: https://jarkus-analysis-toolbox.readthedocs.io/.
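
    A minimal sketch (not JAT Example 5) of loading the characteristic-parameter netCDF with xarray; the OPeNDAP URL and the variable name below are placeholders, so use the data link in this record and check `ds.data_vars` for the actual names:

    ```python
    import xarray as xr

    # Placeholder OPeNDAP URL -- use the data link given in this record instead
    url = "https://opendap.example.org/thredds/dodsC/jarkus_characteristic_parameters.nc"
    ds = xr.open_dataset(url)

    # List the characteristic parameters available in the file
    print(ds.data_vars)

    # Hypothetical variable name: yearly dune foot elevation per transect
    ds["dune_foot_elevation"].plot()
    ```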

  3. (HS 2) Automate Workflows using Jupyter notebook to create Large Extent...

    • search.dataone.org
    • hydroshare.org
    Updated Oct 19, 2024
    + more versions
    Cite
    Young-Don Choi (2024). (HS 2) Automate Workflows using Jupyter notebook to create Large Extent Spatial Datasets [Dataset]. http://doi.org/10.4211/hs.a52df87347ef47c388d9633925cde9ad
    Explore at:
    Dataset updated
    Oct 19, 2024
    Dataset provided by
    Hydroshare
    Authors
    Young-Don Choi
    Description

    We implemented automated workflows using Jupyter notebooks for each state. The GIS processing, crucial for merging, extracting, and projecting GeoTIFF data, was performed using ArcPy—a Python package for geographic data analysis, conversion, and management within ArcGIS (Toms, 2015). After generating state-scale LES (large extent spatial) datasets in GeoTIFF format, we utilized the xarray and rioxarray Python packages to convert GeoTIFF to NetCDF. Xarray is a Python package for working with multi-dimensional arrays, and rioxarray is the rasterio extension for xarray; rasterio is a Python library to read and write GeoTIFF and other raster formats. Xarray facilitated data manipulation and metadata addition in the NetCDF file, while rioxarray was used to save GeoTIFF as NetCDF. These procedures resulted in the creation of three HydroShare resources (HS 3, HS 4 and HS 5) for sharing state-scale LES datasets. Notably, due to licensing constraints with ArcGIS Pro, a commercial GIS software, the Jupyter notebook development was undertaken on a Windows OS.
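
    A minimal sketch of the GeoTIFF-to-NetCDF conversion step described above (not the actual workflow notebook; the file names and attribute values are placeholders, and a single-band raster is assumed):

    ```python
    import rioxarray  # registers the .rio accessor on xarray objects

    # Open a (hypothetical) state-scale GeoTIFF as a DataArray with CRS/transform metadata
    da = rioxarray.open_rasterio("state_les_dataset.tif").squeeze("band", drop=True)

    # Add descriptive metadata, then save as NetCDF
    da.name = "les_value"
    da.attrs.update({"units": "unitless", "source": "example only"})
    da.to_netcdf("state_les_dataset.nc")
    ```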

  4. ESA CCI SM GAPFILLED Long-term Climate Data Record of Surface Soil Moisture...

    • researchdata.tuwien.ac.at
    zip
    Updated Sep 5, 2025
    + more versions
    Cite
    Wolfgang Preimesberger; Pietro Stradiotti; Wouter Arnoud Dorigo (2025). ESA CCI SM GAPFILLED Long-term Climate Data Record of Surface Soil Moisture from merged multi-satellite observations [Dataset]. http://doi.org/10.48436/3fcxr-cde10
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 5, 2025
    Dataset provided by
    TU Wien
    Authors
    Wolfgang Preimesberger; Pietro Stradiotti; Wouter Arnoud Dorigo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description
    This dataset was produced with funding from the European Space Agency (ESA) Climate Change Initiative (CCI) Plus Soil Moisture Project (CCN 3 to ESRIN Contract No: 4000126684/19/I-NB "ESA CCI+ Phase 1 New R&D on CCI ECVS Soil Moisture"). Project website: https://climate.esa.int/en/projects/soil-moisture/

    This dataset contains information on the Surface Soil Moisture (SM) content derived from satellite observations in the microwave domain.

    Dataset Paper (Open Access)

    A description of this dataset, including the methodology and validation results, is available at:

    Preimesberger, W., Stradiotti, P., and Dorigo, W.: ESA CCI Soil Moisture GAPFILLED: an independent global gap-free satellite climate data record with uncertainty estimates, Earth Syst. Sci. Data, 17, 4305–4329, https://doi.org/10.5194/essd-17-4305-2025, 2025.

    Abstract

    ESA CCI Soil Moisture is a multi-satellite climate data record that consists of harmonized, daily observations coming from 19 satellites (as of v09.1) operating in the microwave domain. The wealth of satellite information, particularly over the last decade, facilitates the creation of a data record with the highest possible data consistency and coverage.
    However, data gaps are still found in the record. This is particularly notable in earlier periods, when a limited number of satellites were in operation, but gaps can also arise from various retrieval issues, such as frozen soils, dense vegetation, and radio frequency interference (RFI). These data gaps present a challenge for many users, as they can obscure relevant events within a study area or be incompatible with (machine learning) software that often relies on gap-free inputs.
    Since the requirement for a gap-free ESA CCI SM product was identified, various studies have demonstrated the suitability of different statistical methods to achieve this goal. A fundamental feature of such gap-filling methods is that they rely only on the original observational record, without the need for ancillary variables or model-based information. Because of this intrinsic challenge, no global, long-term, univariate gap-filled product has been available until now. In this version of the record, data gaps due to missing satellite overpasses and invalid measurements are filled using the Discrete Cosine Transform (DCT) Penalized Least Squares (PLS) algorithm (Garcia, 2010). Linear interpolation is applied over periods of (potentially) frozen soils with little to no variability in (frozen) soil moisture content. Uncertainty estimates are based on models calibrated in experiments that fill satellite-like gaps introduced into GLDAS Noah reanalysis soil moisture (Rodell et al., 2004), and they consider gap size and local vegetation conditions as parameters that affect gap-filling performance.

    Summary

    • Gap-filled global estimates of volumetric surface soil moisture from 1991-2023 at 0.25° sampling
    • Fields of application (partial): climate variability and change, land-atmosphere interactions, global biogeochemical cycles and ecology, hydrological and land surface modelling, drought applications, and meteorology
    • Method: Modified version of DCT-PLS (Garcia, 2010) interpolation/smoothing algorithm, linear interpolation over periods of frozen soils. Uncertainty estimates are provided for all data points.
    • More information: See Preimesberger et al. (2025) and the ESA CCI SM Algorithm Theoretical Baseline Document [Chapter 7.2.9] (Dorigo et al., 2023), https://doi.org/10.5281/zenodo.8320869

    Programmatic Download

    You can use command line tools such as wget or curl to download (and extract) data for multiple years. The following command will download and extract the complete data set to the local directory ~/Downloads on Linux or macOS systems.

    #!/bin/bash

    # Set download directory
    DOWNLOAD_DIR=~/Downloads

    base_url="https://researchdata.tuwien.at/records/3fcxr-cde10/files"

    # Loop through years 1991 to 2023 and download & extract data
    for year in {1991..2023}; do
        echo "Downloading $year.zip..."
        wget -q -P "$DOWNLOAD_DIR" "$base_url/$year.zip"
        unzip -o "$DOWNLOAD_DIR/$year.zip" -d "$DOWNLOAD_DIR"
        rm "$DOWNLOAD_DIR/$year.zip"
    done

    Data details

    The dataset provides global daily estimates for the 1991-2023 period at 0.25° (~25 km) horizontal grid resolution. Daily images are grouped by year (YYYY), with each subdirectory containing one netCDF image file per day (DD) and month (MM) on a 2-dimensional (longitude, latitude) grid (CRS: WGS84). The file names follow the convention:

    ESACCI-SOILMOISTURE-L3S-SSMV-COMBINED_GAPFILLED-YYYYMMDD000000-fv09.1r1.nc

    Data Variables

    Each netCDF file contains 3 coordinate variables (WGS84 longitude, latitude and time stamp), as well as the following data variables:

    • sm: (float) The Soil Moisture variable reflects estimates of daily average volumetric soil moisture content (m3/m3) in the soil surface layer (~0-5 cm) over a whole grid cell (0.25 degree).
    • sm_uncertainty: (float) The Soil Moisture Uncertainty variable reflects the uncertainty (random error) of the original satellite observations and of the predictions used to fill observation data gaps.
    • sm_anomaly: Soil moisture anomalies (reference period 1991-2020) derived from the gap-filled values (`sm`)
    • sm_smoothed: Contains DCT-PLS predictions used to fill data gaps in the original soil moisture field. These values are also provided for cases where an observation was initially available (compare `gapmask`). In those cases, they provide a smoothed version of the original data.
    • gapmask: (0 | 1) Indicates grid cells where a satellite observation is available (1), and where the interpolated (smoothed) values are used instead (0) in the 'sm' field.
    • frozenmask: (0 | 1) Indicates grid cells where ERA5 soil temperature is <0 °C. In this case, a linear interpolation over time is applied.

    Additional information for each variable is given in the netCDF attributes.
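
    A minimal sketch (not part of the record) of reading one daily file with xarray and keeping only observation-backed values; the example date is arbitrary and the file is assumed to exist after extraction:

    ```python
    import xarray as xr

    # One daily image file (name follows the convention described above)
    ds = xr.open_dataset("1991/ESACCI-SOILMOISTURE-L3S-SSMV-COMBINED_GAPFILLED-19910805000000-fv09.1r1.nc")

    sm = ds["sm"]                 # volumetric soil moisture (m3/m3)
    unc = ds["sm_uncertainty"]    # uncertainty of observations/predictions

    # Keep only grid cells backed by an actual satellite observation (gapmask == 1)
    sm_observed_only = sm.where(ds["gapmask"] == 1)
    ```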

    Version Changelog

    Changes in v9.1r1 (previous version was v09.1):

    • This version uses a novel uncertainty estimation scheme as described in Preimesberger et al. (2025).

    Software to open netCDF files

    These data can be read by any software that supports the Climate and Forecast (CF) metadata conventions for netCDF files (e.g., standard netCDF viewers and libraries).

    References

    • Preimesberger, W., Stradiotti, P., and Dorigo, W.: ESA CCI Soil Moisture GAPFILLED: an independent global gap-free satellite climate data record with uncertainty estimates, Earth Syst. Sci. Data, 17, 4305–4329, https://doi.org/10.5194/essd-17-4305-2025, 2025.
    • Dorigo, W., Preimesberger, W., Stradiotti, P., Kidd, R., van der Schalie, R., van der Vliet, M., Rodriguez-Fernandez, N., Madelon, R., & Baghdadi, N. (2023). ESA Climate Change Initiative Plus - Soil Moisture Algorithm Theoretical Baseline Document (ATBD) Supporting Product Version 08.1 (version 1.1). Zenodo. https://doi.org/10.5281/zenodo.8320869
    • Garcia, D., 2010. Robust smoothing of gridded data in one and higher dimensions with missing values. Computational Statistics & Data Analysis, 54(4), pp.1167-1178. Available at: https://doi.org/10.1016/j.csda.2009.09.020
    • Rodell, M., Houser, P. R., Jambor, U., Gottschalck, J., Mitchell, K., Meng, C.-J., Arsenault, K., Cosgrove, B., Radakovich, J., Bosilovich, M., Entin, J. K., Walker, J. P., Lohmann, D., and Toll, D.: The Global Land Data Assimilation System, Bulletin of the American Meteorological Society, 85, 381 – 394, https://doi.org/10.1175/BAMS-85-3-381, 2004.

    Related Records

    The following records are all part of the ESA CCI Soil Moisture science data records community:

    1. ESA CCI SM MODELFREE Surface Soil Moisture Record: https://doi.org/10.48436/svr1r-27j77

  5. Data from: Multi-task Deep Learning for Water Temperature and Streamflow...

    • catalog.data.gov
    Updated Sep 30, 2025
    Cite
    U.S. Geological Survey (2025). Multi-task Deep Learning for Water Temperature and Streamflow Prediction (ver. 1.1, June 2022) [Dataset]. https://catalog.data.gov/dataset/multi-task-deep-learning-for-water-temperature-and-streamflow-prediction-ver-1-1-june-2022
    Explore at:
    Dataset updated
    Sep 30, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    This item contains data and code used in experiments that produced the results for Sadler et al. (2022) (see below for the full reference). We ran five experiments for the analysis: Experiment A, Experiment B, Experiment C, Experiment D, and Experiment AuxIn. Experiment A tested multi-task learning for predicting streamflow with 25 years of training data and a different model for each of 101 sites. Experiment B tested multi-task learning for predicting streamflow with 25 years of training data and a single model for all 101 sites. Experiment C tested multi-task learning for predicting streamflow with just 2 years of training data. Experiment D tested multi-task learning for predicting water temperature with over 25 years of training data. Experiment AuxIn used water temperature as an input variable for predicting streamflow. These experiments and their results are described in detail in the WRR paper. Data from a total of 101 sites across the US were used for the experiments. The model input data and streamflow data were from the Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) dataset (Newman et al., 2014; Addor et al., 2017). The water temperature data were gathered from the National Water Information System (NWIS) (U.S. Geological Survey, 2016). The contents of this item are broken into 13 files or groups of files aggregated into zip files:

    1. input_data_processing.zip: A zip file containing the scripts used to collate the observations, input weather drivers, and catchment attributes for the multi-task modeling experiments
    2. flow_observations.zip: A zip file containing collated daily streamflow data for the sites used in multi-task modeling experiments. The streamflow data were originally accessed from the CAMELS dataset. The data are stored in csv and Zarr formats (a minimal Zarr-reading sketch is given after this list).
    3. temperature_observations.zip: A zip file containing collated daily water temperature data for the sites used in multi-task modeling experiments. The data were originally accessed via NWIS. The data are stored in csv and Zarr formats.
    4. temperature_sites.geojson: Geojson file of the locations of the water temperature and streamflow sites used in the analysis.
    5. model_drivers.zip: A zip file containing the daily input weather driver data for the multi-task deep learning models. These data are from the Daymet drivers and were collated from the CAMELS dataset. The data are stored in csv and Zarr formats.
    6. catchment_attrs.csv: Catchment attributes collated from the CAMELS dataset. These data are used for the Random Forest modeling. For full metadata regarding these data, see the CAMELS dataset.
    7. experiment_workflow_files.zip: A zip file containing workflow definitions used to run multi-task deep learning experiments. These are Snakemake workflows. To run a given experiment, one would run (for experiment A) 'snakemake -s expA_Snakefile --configfile expA_config.yml'
    8. river-dl-paper_v0.zip: A zip file containing python code used to run multi-task deep learning experiments. This code was called by the Snakemake workflows contained in 'experiment_workflow_files.zip'.
    9. random_forest_scripts.zip: A zip file containing Python code and a Python Jupyter Notebook used to prepare data for, train, and visualize feature importance of a Random Forest model.
    10. plotting_code.zip: A zip file containing python code and Snakemake workflow used to produce figures showing the results of multi-task deep learning experiments.
    11. results.zip: A zip file containing results of multi-task deep learning experiments. The results are stored in csv and netcdf formats. The netcdf files were used by the plotting libraries in 'plotting_code.zip'. These files are for five experiments, 'A', 'B', 'C', 'D', and 'AuxIn'. These experiment names are shown in the file name.
    12. sample_scripts.zip: A zip file containing scripts for creating sample output to demonstrate how the modeling workflow was executed.
    13. sample_output.zip: A zip file containing sample output data. Similar files are created by running the sample scripts provided.
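
    Several of the zip files above store data in Zarr format. A minimal sketch (not part of the data release; the extracted store path is hypothetical) of opening such a store, e.g. from flow_observations.zip, with xarray:

    ```python
    import xarray as xr

    # Hypothetical path to an extracted Zarr store from flow_observations.zip
    ds = xr.open_zarr("flow_observations/streamflow.zarr")

    # Inspect the available variables and dimensions (e.g., daily streamflow per site)
    print(ds)
    ```
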
    A. Newman; K. Sampson; M. P. Clark; A. Bock; R. J. Viger; D. Blodgett, 2014. A large-sample watershed-scale hydrometeorological dataset for the contiguous USA. Boulder, CO: UCAR/NCAR. https://dx.doi.org/10.5065/D6MW2F4D

    N. Addor, A. Newman, M. Mizukami, and M. P. Clark, 2017. Catchment attributes for large-sample studies. Boulder, CO: UCAR/NCAR. https://doi.org/10.5065/D6G73C3Q

    Sadler, J. M., Appling, A. P., Read, J. S., Oliver, S. K., Jia, X., Zwart, J. A., & Kumar, V. (2022). Multi-Task Deep Learning of Daily Streamflow and Water Temperature. Water Resources Research, 58(4), e2021WR030138. https://doi.org/10.1029/2021WR030138

    U.S. Geological Survey, 2016, National Water Information System data available on the World Wide Web (USGS Water Data for the Nation), accessed Dec. 2020.

  6. The Transition from Bedload to Complex Granular Dynamics on Steep Slopes: A...

    • zenodo.org
    nc, txt
    Updated Apr 10, 2025
    Cite
    Islam KOA; alain recking; Florent Gimbert; Hervé Bellot; Guillaume Chambon; Thierry FAUG (2025). The Transition from Bedload to Complex Granular Dynamics on Steep Slopes: A Force Balance Perspective [Dataset]. http://doi.org/10.5281/zenodo.15187911
    Explore at:
    Available download formats: nc, txt
    Dataset updated
    Apr 10, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Islam KOA; alain recking; Florent Gimbert; Hervé Bellot; Guillaume Chambon; Thierry FAUG
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Title:
    Flume Experiment Dataset – Granular Flow Tests (2023)

    Authors:
    I. Koa, A. Recking, F. Gimbert, H. Bellot, G. Chambon, T. Faug

    Contact:
    islamkoaa111@gmail.com

    Description:
    This dataset contains NetCDF (.nc) files from controlled flume experiments conducted in 2023 to study the transition from bedload to complex granular flow dynamics on steep slopes. Each file name encodes the experiment date and test number (e.g., CanalMU-20-04-2023-test5.nc = Test 5 on April 20, 2023).

    Each test corresponds to a specific discharge (Q) value, detailed in the table below.

    Example filename:
    CanalMU-20-04-2023-test5.nc → Test 5 conducted on April 20, 2023.

    Discharge Table:

    Discharge (l/s) | Date | Test Number
    ----------------|-------------|-------------
    0.14 | 06-04-2023 | Test 3
    0.14 | 04-05-2023 | Test 5
    0.15 | 13-04-2023 | Test 3
    0.15 | 14-04-2023 | Test 1
    0.15 | 14-04-2023 | Test 2
    0.16 | 17-04-2023 | Test 2
    0.16 | 18-04-2023 | Test 3
    0.16 | 04-05-2023 | Test 3
    0.16 | 04-05-2023 | Test 4
    0.17 | 18-04-2023 | Test 4
    0.17 | 18-04-2023 | Test 5
    0.17 | 20-04-2023 | Test 2
    0.17 | 20-04-2023 | Test 4
    0.17 | 20-04-2023 | Test 5
    0.18 | 20-04-2023 | Test 8
    0.18 | 20-04-2023 | Test 9
    0.19 | 20-04-2023 | Test 10
    0.19 | 20-04-2023 | Test 11
    0.20 | 20-04-2023 | Test 12
    0.20 | 04-05-2023 | Test 1
    0.20 | 04-05-2023 | Test 2
    0.21 | 20-04-2023 | Test 13
    0.21 | 21-04-2023 | Test 1
    0.21 | 21-04-2023 | Test 2
    0.22 | 21-04-2023 | Test 3
    0.22 | 21-04-2023 | Test 4
    0.23 | 21-04-2023 | Test 5
    0.23 | 27-04-2023 | Test 2
    0.23 | 27-04-2023 | Test 3
    0.23 | 28-04-2023 | Test 7
    0.24 | 28-04-2023 | Test 1
    0.24 | 28-04-2023 | Test 2
    0.24 | 28-04-2023 | Test 3
    0.25 | 28-04-2023 | Test 4
    0.25 | 21-06-2023 | Test 1
    0.26 | 28-04-2023 | Test 6
    0.26 | 21-06-2023 | Test 3
    0.26 | 21-06-2023 | Test 4
    0.27 | 22-06-2023 | Test 2
    0.27 | 22-06-2023 | Test 3
    0.27 | 22-06-2023 | Test 1

    Data Acquisition and Processing:
    The original data were acquired using LabVIEW and saved in TDMS (.tdms) format. These files were processed using custom Python scripts to extract synchronized time-series data, assign physical units, and store the results in structured NetCDF-4 files.

    NetCDF File Structure:
    Each file includes the following structured groups and variables:

    1. Group: Data_Hydro (Hydraulic Measurements)
    - Time_Hydro: Time [s]
    - Date_et_heure_mesure: Measurement timestamps [string]
    - Etat_de_l'interrupteur: Switch state [V]
    - Debit_liquide_instant: Instantaneous water discharge [L/s]
    - Debit_liquide_consigne: Target water discharge [L/s]
    - Vitesse_tapis_instant: Instantaneous conveyor speed [m/s]
    - Vitesse_tapis_consigne: Set conveyor speed [V]
    - Debit_solide_instant: Instantaneous solid discharge [g/s]
    - Hauteur1–4: Water heights from four sensors [cm]

    2. Group: Data_Force (Impact Force Measurements)
    - Time_Force: Time [s]
    - Force_Normale: Vertical impact force [N]
    - Force_Tangentielle: Tangential force [N]

    3. Group: Data_Annexe (Experimental Metadata)
    - channel_width, Channel_slope: Flume geometry
    - Position_capteur_hauteur1–4: Water sensor locations [m]
    - Position_capteur_force: Force sensor position [m]
    - Plaque dimensions and mass: Plate size and weight [m, kg]
    - Sensor frequencies and sensitivities [Hz, pC/N]

    Format:
    NetCDF-4 (.nc)

    Suggested software for reading:
    - Python (xarray, netCDF4)
    - NASA Panoply
    - MATLAB

    Note:
    The data were processed using custom Python scripts. These are available from the corresponding author upon request.

    Example: Accessing NetCDF Data in Python

    The dataset can be read using the `netCDF4` or `xarray` libraries in Python. Below is a simple example using netCDF4:

    ```python
    from netCDF4 import Dataset
    import numpy as np

    # Open netCDF file
    data = Dataset('CanalMU-20-04-2023-test5.nc')

    # Load hydraulic data
    thydro = data.groups['Data_Hydro'].variables['Time_Hydro'][:]
    Qcons = data.groups['Data_Hydro'].variables['Debit_liquide_consigne'][:]
    Qins = data.groups['Data_Hydro'].variables['Debit_liquide_instant'][:]
    Tapis = data.groups['Data_Hydro'].variables['Vitesse_tapis_consigne'][:]
    h1 = data.groups['Data_Hydro'].variables['Hauteur1'][:]
    h2 = data.groups['Data_Hydro'].variables['Hauteur2'][:]
    h3 = data.groups['Data_Hydro'].variables['Hauteur3'][:]
    h4 = data.groups['Data_Hydro'].variables['Hauteur4'][:]

    # Load force data
    tforce = data.groups['Data_Force'].variables['Time_Force'][:]
    FN = data.groups['Data_Force'].variables['Force_Normale'][:]
    FT = data.groups['Data_Force'].variables['Force_Tangentielle'][:]

    # Calibration factors are not specified here; if needed, scale FN and FT using the
    # sensor sensitivities [pC/N] stored in the Data_Annexe group

    # Fetch metadata
    slope = data.groups['Data_Annexe'].variables['Channel_slope']
    alpha = np.arctan(slope[:]/100)
    L = data.groups['Data_Annexe'].variables['Longueur_plaque_impact'][:]
    W = data.groups['Data_Annexe'].variables['Largeur_plaque_impact'][:]

    ```

    For more advanced processing, consider using `xarray`, which provides easier multi-dimensional data access.
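
    For instance, here is a minimal sketch (not from the original record) that reads a single group of the same file with xarray; the group and variable names follow the structure listed above:

    ```python
    import xarray as xr

    # Open only the hydraulic-measurement group of one test file
    hydro = xr.open_dataset("CanalMU-20-04-2023-test5.nc", group="Data_Hydro")

    # Plot the instantaneous water discharge time series
    hydro["Debit_liquide_instant"].plot()
    ```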

  7. Model output and data used for analysis

    • catalog.data.gov
    Updated Nov 12, 2020
    Cite
    U.S. EPA Office of Research and Development (ORD) (2020). Model output and data used for analysis [Dataset]. https://catalog.data.gov/dataset/model-output-and-data-used-for-analysis
    Explore at:
    Dataset updated
    Nov 12, 2020
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Description

    The modeled data in these archives are in the NetCDF format (https://www.unidata.ucar.edu/software/netcdf/). NetCDF (Network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. It is also a community standard for sharing scientific data. The Unidata Program Center supports and maintains netCDF programming interfaces for C, C++, Java, and Fortran. Programming interfaces are also available for Python, IDL, MATLAB, R, Ruby, and Perl. Data in netCDF format is:

    • Self-Describing. A netCDF file includes information about the data it contains.
    • Portable. A netCDF file can be accessed by computers with different ways of storing integers, characters, and floating-point numbers.
    • Scalable. Small subsets of large datasets in various formats may be accessed efficiently through netCDF interfaces, even from remote servers.
    • Appendable. Data may be appended to a properly structured netCDF file without copying the dataset or redefining its structure.
    • Sharable. One writer and multiple readers may simultaneously access the same netCDF file.
    • Archivable. Access to all earlier forms of netCDF data will be supported by current and future versions of the software.

    Pub_figures.tar.zip contains the NCL scripts for figures 1-5 and the Chesapeake Bay Airshed shapefile. The directory structure of the archive is ./Pub_figures/Fig#_data, where # is the figure number from 1-5.

    EMISS.data.tar.zip contains two NetCDF files with the emission totals for the 2011ec and 2040ei emission inventories. The file names contain the year of the inventory, and the file header contains a description of each variable and the variable units.

    EPIC.data.tar.zip contains the monthly mean EPIC data in NetCDF format for ammonium fertilizer application (files with ANH3 in the name) and soil ammonium concentration (files with NH3 in the name) for historical (Hist directory) and future (RCP-4.5 directory) simulations.

    WRF.data.tar.zip contains mean monthly and seasonal data from the 36km downscaled WRF simulations in NetCDF format for the historical (Hist directory) and future (RCP-4.5 directory) simulations.

    CMAQ.data.tar.zip contains the mean monthly and seasonal data in NetCDF format from the 36km CMAQ simulations for the historical (Hist directory), future (RCP-4.5 directory), and future with historical emissions (RCP-4.5-hist-emiss directory) cases.

    This dataset is associated with the following publication: Campbell, P., J. Bash, C. Nolte, T. Spero, E. Cooter, K. Hinson, and L. Linker. Projections of Atmospheric Nitrogen Deposition to the Chesapeake Bay Watershed. Journal of Geophysical Research - Biogeosciences. American Geophysical Union, Washington, DC, USA, 12(11): 3307-3326, (2019).

  8. Satellite Datasets used for MIRA Workflows

    • zenodo.org
    application/gzip +1
    Updated Aug 25, 2021
    Cite
    Jorge Guerra; Vijay Mahadevan (2021). Satellite Datasets used for MIRA Workflows [Dataset]. http://doi.org/10.5281/zenodo.5172792
    Explore at:
    Available download formats: text/x-python, application/gzip
    Dataset updated
    Aug 25, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jorge Guerra; Vijay Mahadevan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    CANGA Remapping Intercomparison Satellite Data Set

    Included are the data used to generate the sampling fields, along with the script for generating the spatial power spectra and fits later used in reconstruction over any unstructured spherical grid. This data set is included for reproducibility of the results provided in a journal article submission.

    REQUIRES:

    1. Python NetCDF IO modules (http://code.google.com/p/netcdf4-python/): "pip install netcdf4"
    2. Python spherical harmonic tools package (https://shtools.oca.eu/shtools/): "pip install pyshtools"
    3. Numpy
    4. Scipy (KDTree search)
    5. Plotly, for fancy web-based plotting (https://plot.ly/python/)
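
    Below is a minimal sketch (not the included script) of the kind of analysis these packages support: reading a field from a netCDF file and computing its spherical-harmonic power spectrum. The file name, variable name, and grid sampling are assumptions; pyshtools expects an appropriately sampled latitude-longitude grid.

    ```python
    import numpy as np
    from netCDF4 import Dataset
    import pyshtools

    # Hypothetical input file and variable name
    nc = Dataset("satellite_field.nc")
    field = np.array(nc.variables["field"][:])  # assumed shape (nlat, 2*nlat), Driscoll-Healy sampling

    # Expand the global field into spherical harmonics and compute its power spectrum
    grid = pyshtools.SHGrid.from_array(field)
    coeffs = grid.expand()
    spectrum = coeffs.spectrum()
    print(spectrum[:10])
    ```
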
  9. Forcing files for the ECMWF Integrated Forecasting System (IFS) Single...

    • catalogue.ceda.ac.uk
    • data-search.nerc.ac.uk
    Updated Mar 2, 2020
    Cite
    Hannah M. Christensen; Andrew Dawson; Christopher Holloway (2020). Forcing files for the ECMWF Integrated Forecasting System (IFS) Single Column Model (SCM) over Indian Ocean/Tropical Pacific derived from a 10-day high resolution simulation [Dataset]. https://catalogue.ceda.ac.uk/uuid/bf4fb57ac7f9461db27dab77c8c97cf2
    Explore at:
    Dataset updated
    Mar 2, 2020
    Dataset provided by
    Centre for Environmental Data Analysis (http://www.ceda.ac.uk/)
    Authors
    Hannah M. Christensen; Andrew Dawson; Christopher Holloway
    License

    Open Government Licence 3.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
    License information was derived automatically

    Time period covered
    Apr 6, 2009 - Apr 16, 2009
    Area covered
    Variables measured
    time, eastward_wind, northward_wind, surface_altitude, surface_temperature, surface_downward_latent_heat_flux, surface_downward_sensible_heat_flux, atmosphere hybrid sigma pressure coordinate
    Description

    This data set consists of initial conditions, boundary conditions and forcing profiles for the Single Column Model (SCM) version of the European Centre for Medium-range Weather Forecasts (ECMWF) model, the Integrated Forecasting System (IFS). The IFS SCM is freely available through the OpenIFS project, on application to ECMWF for a licence. The data were produced and tested for IFS CY40R1, but will be suitable for earlier model cycles, and also for future versions assuming no new boundary fields are required by a later model. The data are archived as single time-stamp maps in netCDF files. If the data are extracted at any lat-lon location and the desired timestamps concatenated (e.g. using netCDF operators), the resultant file is in the correct format for input into the IFS SCM.
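
    For example, a minimal sketch (not part of the record) of that extraction step using xarray rather than netCDF operators; the file pattern and coordinate names are placeholders:

    ```python
    import xarray as xr

    # Concatenate the single-time-stamp map files along time (file pattern is hypothetical)
    ds = xr.open_mfdataset("scm_forcing_*.nc", combine="by_coords")

    # Extract the column nearest to a chosen lat-lon location
    column = ds.sel(latitude=-5.0, longitude=90.0, method="nearest")

    # Write out a single-column forcing file for the IFS SCM
    column.to_netcdf("scm_input_single_column.nc")
    ```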

    The data covers the Tropical Indian Ocean/Warm Pool domain spanning 20S-20N, 42-181E. The data are available every 15 minutes from 6 April 2009 0100 UTC for a period of ten days. The total number of grid points over which an SCM can be run is 480 in the longitudinal direction, and 142 latitudinally. With over 68,000 independent grid points available for evaluation of SCM simulations, robust statistics of bias can be estimated over a wide range of boundary and climatic conditions.

    The initial conditions and forcing profiles were derived by coarse-graining high resolution (4 km) simulations produced as part of the NERC Cascade project, dataset ID xfhfc (also available on CEDA). The Cascade dataset is archived once an hour. The dataset was linearly interpolated in time to produce the 15-minute resolution required by the SCM. The resolution of the coarse-grained data corresponds to the IFS T639 reduced gaussian grid (approx 32 km). The boundary conditions are as used in the operational IFS at resolution T639. The coarse graining procedure by which the data were produced is detailed in Christensen, H. M., Dawson, A. and Holloway, C. E., 'Forcing Single Column Models using High-resolution Model Simulations', in review, Journal of Advances in Modeling Earth Systems (JAMES).

    For full details of the parent Cascade simulation, see Holloway et al (2012). In brief, the simulations were produced using the limited-area setup of the MetUM version 7.1 (Davies et al, 2005). The model is semi-Lagrangian and non-hydrostatic. Initial conditions were specified from the ECMWF operational analysis. A 12 km parametrised convection run was first produced over a domain 1 degree larger in each direction, with lateral boundary conditions relaxed to the ECMWF operational analysis. The 4 km run was forced using lateral boundary conditions computed from the 12 km parametrised run, via a nudged rim of 8 model grid points. The model has 70 terrain-following hybrid levels in the vertical, with vertical resolution ranging from tens of metres in the boundary layer, to 250 m in the free troposphere, and with model top at 40 km. The time step was 30 s.

    The Cascade dataset did not include archived soil variables, though surface sensible and latent heat fluxes were archived. When using the dataset, it is therefore recommended that the IFS land surface scheme be deactivated and the SCM forced using the surface fluxes instead. The first day of Cascade data exhibited evidence of spin-up. It is therefore recommended that the first day be discarded, and the data used from April 7 - April 16.

    The software used to produce this dataset is freely available to interested users:

    1. "cg-cascade": NCL software to produce OpenIFS forcing fields from a high-resolution MetUM simulation and the necessary ECMWF boundary files. https://github.com/aopp-pred/cg-cascade

    Furthermore, software to facilitate the use of this dataset is also available, consisting of:

    2. "scmtiles": Python software to deploy many independent SCMs over a domain. https://github.com/aopp-pred/scmtiles
    3. "openifs-scmtiles": Python software to deploy the OpenIFS SCM using scmtiles. https://github.com/aopp-pred/openifs-scmtiles

  10. CMAQ Grid Mask Files for 12km CONUS - US States and NOAA Climate Regions

    • dataverse-staging.rdmc.unc.edu
    • datasearch.gesis.org
    Updated Dec 12, 2019
    Cite
    UNC Dataverse (2019). CMAQ Grid Mask Files for 12km CONUS - US States and NOAA Climate Regions [Dataset]. http://doi.org/10.15139/S3/XDYYB9
    Explore at:
    Dataset updated
    Dec 12, 2019
    Dataset provided by
    UNC Dataverse
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    United States
    Description

    Data Summary: US states grid mask file and NOAA climate regions grid mask file, both compatible with the 12US1 modeling grid domain.

    Note: The datasets are on a Google Drive. The metadata associated with this DOI contain the link to the Google Drive folder and instructions for downloading the data.

    These files can be used with CMAQ-ISAMv5.3 to track state- or region-specific emissions. See Chapter 11 and Appendix B.4 in the CMAQ User's Guide for further information on how to use the ISAM control file with GRIDMASK files. The files can also be used for state- or region-specific scaling of emissions using the CMAQv5.3 DESID module. See the DESID Tutorial and Appendix B.4 in the CMAQ User's Guide for further information on how to use the Emission Control File to scale emissions in predetermined geographical areas.

    File Location and Download Instructions: Link to GRIDMASK files; Link to README text file with information on how these files were created.

    File Format: The grid masks are stored as netCDF formatted files using I/O API data structures (https://www.cmascenter.org/ioapi/). Information on the model projection and grid structure is contained in the header information of the netCDF file. The output files can be opened and manipulated using I/O API utilities (e.g. M3XTRACT, M3WNDW) or other software programs that can read and write netCDF formatted files (e.g. Fortran, R, Python).

    File descriptions: These GRIDMASK files can be used with the 12US1 modeling grid domain (grid origin x = -2556000 m, y = -1728000 m; N columns = 459, N rows = 299).

    GRIDMASK_STATES_12US1.nc - This file contains 49 variables for the 48 states in the conterminous U.S. plus DC. Each state variable (e.g., AL, AZ, AR, etc.) is a 2D array (299 x 459) providing the fractional area of each grid cell that falls within that state.

    GRIDMASK_CLIMATE_REGIONS_12US1.nc - This file contains 9 variables for 9 NOAA climate regions based on the Karl and Koss (1984) definition of climate regions. Each climate region variable (e.g., CLIMATE_REGION_1, CLIMATE_REGION_2, etc.) is a 2D array (299 x 459) providing the fractional area of each grid cell that falls within that climate region.

    NOAA Climate regions:
    CLIMATE_REGION_1: Northwest (OR, WA, ID)
    CLIMATE_REGION_2: West (CA, NV)
    CLIMATE_REGION_3: West North Central (MT, WY, ND, SD, NE)
    CLIMATE_REGION_4: Southwest (UT, AZ, NM, CO)
    CLIMATE_REGION_5: South (KS, OK, TX, LA, AR, MS)
    CLIMATE_REGION_6: Central (MO, IL, IN, KY, TN, OH, WV)
    CLIMATE_REGION_7: East North Central (MN, IA, WI, MI)
    CLIMATE_REGION_8: Northeast (MD, DE, NJ, PA, NY, CT, RI, MA, VT, NH, ME) + Washington, D.C.*
    CLIMATE_REGION_9: Southeast (VA, NC, SC, GA, AL, FL)

    *Note that Washington, D.C. is not included in any of the climate regions on the website but was included with the "Northeast" region for the generation of this GRIDMASK file.
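
    A minimal sketch (not part of this record) of reading the state mask with Python and using one fractional-area variable; the variable name "NC" is assumed from the description above, so check the file header for the exact names:

    ```python
    import netCDF4
    import numpy as np

    # Open the state grid mask (I/O API netCDF file)
    nc_file = netCDF4.Dataset("GRIDMASK_STATES_12US1.nc")

    # Fractional area of each 12US1 grid cell falling inside North Carolina
    nc_frac = np.squeeze(np.array(nc_file.variables["NC"][:]))  # 2D array, 299 rows x 459 columns

    # Example use: multiply a gridded emissions array by nc_frac to keep only NC's share
    print(nc_frac.shape, nc_frac.max())
    ```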

  11. Data from: Tidal Energy Resource Characterization, Bottom Lander...

    • catalog.data.gov
    Updated Jan 20, 2025
    + more versions
    Cite
    National Renewable Energy Laboratory (2025). Tidal Energy Resource Characterization, Bottom Lander Measurements, Cook Inlet, AK, 2021 [Dataset]. https://catalog.data.gov/dataset/tidal-energy-resource-characterization-bottom-lander-measurements-cook-inlet-ak-2021-7c225
    Explore at:
    Dataset updated
    Jan 20, 2025
    Dataset provided by
    National Renewable Energy Laboratory
    Area covered
    Cook Inlet
    Description

    These datasets are from tidal resource characterization measurements collected on the Terrasond High Energy Oceanographic Mooring (THEOM) from 1 July 2021 to 30 August 2021 (60 days) in Cook Inlet, Alaska. The lander was deployed at 60.7207031 N, 151.4294998 W in ~50 m of water. The dataset contains raw and processed data from the following two instruments: A Nortek Signature 500 kHz acoustic Doppler current profiler (ADCP). Data were recorded at 4 Hz in the beam coordinate system from all 5 beams. Processed data have been averaged into 5-minute bins and converted to the East-North-Up (ENU) coordinate system. A Nortek Vector acoustic Doppler velocimeter (ADV). Data were recorded at 8 Hz in the beam coordinate system. Processed data have been averaged into 5-minute bins and converted to the Streamwise - Cross-stream - Vertical (Principal) coordinate system. Turbulence statistics were calculated from 5-minute bins, with an FFT length equal to the bin length, and saved in the processed dataset. Data were read and analyzed using the DOLfYN (version 1.0.2) Python package and saved in MATLAB (.mat) and netCDF (.nc) file formats. Files containing analyzed data (".b1") were standardized using the TSDAT (version 0.4.2) Python package. NetCDF files can be opened using DOLfYN (e.g., `dat = dolfyn.load("*.nc")`) or the xarray Python package (e.g., `dat = xarray.open_dataset("*.nc")`). All distances are in meters (e.g., depth, range, etc.), and all velocities are in m/s. See the DOLfYN documentation linked in the submission, and/or the Nortek documentation for additional details.

  12. Data from: Calculated Leached Nitrogen from Septic Systems in Wisconsin,...

    • catalog.data.gov
    • data.usgs.gov
    • +1more
    Updated Oct 22, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Calculated Leached Nitrogen from Septic Systems in Wisconsin, 1850-2010 [Dataset]. https://catalog.data.gov/dataset/calculated-leached-nitrogen-from-septic-systems-in-wisconsin-1850-2010
    Explore at:
    Dataset updated
    Oct 22, 2025
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Wisconsin
    Description

    This data release contains a netCDF file with decadal estimates of nitrate leached from septic systems (kilograms per hectare per year, or kg/ha) in the state of Wisconsin from 1850 to 2010, as well as the Python code and supporting files used to create the netCDF file. The netCDF file is used as an input to a Nitrate Decision Support Tool for the State of Wisconsin (GW-NDST; Juckem and others, 2024). The dataset was constructed starting with 1990 census records, which included responses about households using septic systems for waste disposal. The fraction of population using septic systems in 1990 was aggregated at the county scale and applied backward in time for each decade from 1850 to 1980. For decades from 1990 to 2010, the fraction of population using septic systems was computed on the finer-resolution census block-group scale. Each decadal estimate of the fraction of population using septic systems was then multiplied by 4.13 kilograms per person per year of leached nitrate to estimate the per-area load of nitrate below the root zone. The data release includes a Python notebook used to process the input datasets included in the data release, shapefiles created (or modified) using the Python notebook, and the final netCDF file.
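
    As a rough illustration of the calculation described above (not the actual notebook; the numbers and array names are made up), the per-area load for one decade could be computed as:

    ```python
    import numpy as np

    # Hypothetical inputs for one decade and three census units
    population_density = np.array([2.0, 0.5, 0.05])  # persons per hectare
    septic_fraction = np.array([0.10, 0.55, 0.90])   # fraction of population on septic systems

    # Leached nitrate per person per year (from the data release description)
    N_PER_PERSON = 4.13  # kg N per person per year

    # Per-area load of nitrate below the root zone (kg/ha/yr)
    leached_n = population_density * septic_fraction * N_PER_PERSON
    print(leached_n)
    ```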

  13. UIUC Mobile Sounding Data

    • data.ucar.edu
    netcdf
    Updated Aug 1, 2025
    Cite
    Andrew Janiszeski (2025). UIUC Mobile Sounding Data [Dataset]. http://doi.org/10.5065/D6X63KCG
    Explore at:
    Available download formats: netcdf
    Dataset updated
    Aug 1, 2025
    Authors
    Andrew Janiszeski
    Time period covered
    Jan 8, 2017 - Mar 9, 2017
    Area covered
    Description

    This dataset contains data collected from 34 successful University of Illinois Urbana-Champaign (UIUC) Mobile Radiosonde launches during the SNOWIE field campaign. Each successful launch, named year-month-day-time-location.nc, has its own netCDF file. The data in each file include: temperature (Celsius), relative humidity, time of sample (in seconds past launch time), height AGL (m), wind speed (m/s), wind direction (degrees), and pressure (mb) as measured by the radiosonde. The coordinates and altitude above MSL (m) of each sounding's launch location are written in the attributes of each file. All surface wind speeds and directions are taken from the previous hourly observation from KBOI for the Boise sites and KEUL for the Caldwell site. One exception was the launch on 16 February 2017 at Caldwell, for which KBOI observations were used instead. Included with each dataset order is a Python script (netcdfreadout.py) to easily view the netCDF data files.

  14. Estimates of Global Coastal Losses Under Multiple Sea Level Rise Scenarios

    • data.niaid.nih.gov
    • zenodo.org
    Updated Apr 3, 2024
    Cite
    Choi, Jun Ho (2024). Estimates of Global Coastal Losses Under Multiple Sea Level Rise Scenarios [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6014085
    Explore at:
    Dataset updated
    Apr 3, 2024
    Authors
    Houser, Trevor
    Bolliger, Ian
    Delgado, Michael
    Allen, Daniel
    Kopp, Robert E.
    Hamidi, Ali
    Hsiang, Solomon
    Depsky, Nicholas
    Choi, Jun Ho
    Greenstone, Michael
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This record contains results from the Python Coastal Impacts and Adaptation Model (pyCIAM), the inputs and source code necessary to replicate those outputs, and the results presented in Depsky et al. 2023.

    All zipped Zarr stores can be downloaded and accessed locally or can be directly accessed via code similar to the following:

    from fsspec.implementations.zip import ZipFileSystem
    import xarray as xr

    xr.open_zarr(ZipFileSystem(url_of_file_in_record).get_mapper())

    File Inventory

    Products

    pyCIAM_outputs.zarr.zip: Outputs of the pyCIAM model, using the SLIIDERS dataset to define socioeconomic and extreme sea level characteristics of coastal regions and the 17th, 50th, and 83rd quantiles of local sea level rise as projected by various modeling frameworks (LocalizeSL and FACTS) and for multiple emissions scenarios and ice sheet models.

    pyCIAM_outputs_{case}.nc: A NetCDF version of pyCIAM_outputs, in which the netcdf files are divided up by adaptation "case" to reduce file size.

    diaz2016_outputs.zarr.zip: A replication of the results from Diaz 2016 - the model upon which pyCIAM was built, using an identical configuration to that of the original model.

    suboptimal_capital_by_movefactor.zarr.zip: An analysis of the observed present-day allocation of capital compared to a "rational" allocation, as a function of the magnitude of non-market costs of relocation assumed in the model. See Depsky et al. 2023 for further details.

    Inputs

    ar5-msl-rel-2005-quantiles.zarr.zip: Quantiles of local sea level rise as projected by the LocalizeSL model, using a variety of temperature scenarios and ice sheet models developed in Kopp 2014, Bamber 2019, DeConto 2021, and IPCC SROCC. The results contained in pyCIAM_outputs.zarr.zip cover a broader (and newer) range of SLR projections from a more recent projection framework (FACTS); however, these data are more easily obtained from the appropriate Zenodo records and thus are not hosted in this one.

    diaz2016_inputs_raw.zarr.zip: The coastal inputs used in Diaz 2016, obtained from GitHub and formatted for use in the Python-based pyCIAM. These are based on the Dynamic Integrated Vulnerability Assessment (DIVA) dataset.

    surge-lookup-seg(_adm).zarr.zip: Pre-computed lookup tables estimating average annual losses from extreme sea levels due to mortality and capital stock damage. This is an intermediate output of pyCIAM and is not necessary to replicate the model results. However, it is more time consuming to produce than the rest of the model and is provided for users who may wish to start from the pre-computed dataset. Two versions are provided - the first contains estimates for each unique intersection of ~50km coastal segment and state/province-level administrative unit (admin-1). This is derived from the characteristics in SLIIDERS. The second is simply estimated on a version of SLIIDERS collapsed over administrative units to vary only over coastal segments. Both are used in the process of running pyCIAM.

    ypk_2000_2100.zarr.zip: An intermediate output in creating SLIIDERS that contains country-level projections of GDP, capital stock, and population, based on the Shared Socioeconomic Pathways (SSPs). This is only used in normalizing costs estimated in pyCIAM by country and global GDP to report in Depsky et al. 2023. It is not used in the execution of pyCIAM but is provided to replicate results reported in the manuscript.

    Source Code

    pyCIAM.zip: Contains the python-CIAM package as well as a notebook-based workflow to replicate the results presented in Depsky et al. 2023. It also contains two master shell scripts (run_example.sh and run_full_replication.sh) to assist in executing a small sample of the pyCIAM model or in fully executing the workflow of Depsky et al. 2023, respectively. This code is consistent with release 1.2.0 in the pyCIAM GitHub repository and is available as version 1.2.0 of the python-CIAM package on PyPI.

    Version history:

    1.2

    Point data-acquisition.ipynb to updated Zenodo deposit that fixes the dtype of subsets variable in diaz2016_inputs_raw.zarr.zip to be bool rather than int8

    Variable name bugfix in data-acquisition.ipynb

    Add netcdf versions of SLIIDERS and the pyCIAM results to upload-zenodo.ipynb

    Update results in Zenodo record to use SLIIDERS v1.2

    1.1.1

    Bugfix to inputs/diaz2016_inputs_raw.zarr.zip to make the subsets variable bool instead of int8.

    1.1.0

    Version associated with publication of Depsky et al., 2023

  15. Data from: Dataset for "The impact of lake shape and size on lake breezes...

    • research.science.eus
    • ekoizpen-zientifikoa.ehu.eus
    Updated 2023
    Cite
    Chatain, Audrey; Rafkin, Scot C.R.; Soto, Alejandro; Moisan, Enora; Lora, Juan M.; Le Gall, Alice; Hueso, Ricardo; Spiga, Aymeric (2023). Dataset for "The impact of lake shape and size on lake breezes and air-lake exchanges on Titan" [Dataset]. https://research.science.eus/documentos/67321dfcaea56d4af048502a
    Explore at:
    Dataset updated
    2023
    Authors
    Chatain, Audrey; Rafkin, Scot C.R.; Soto, Alejandro; Moisan, Enora; Lora, Juan M.; Le Gall, Alice; Hueso, Ricardo; Spiga, Aymeric
    Description

    Code and data presented in the paper "The impact of lake shape and size on lake breezes and air-lake exchanges on Titan", published in Icarus in 2024 (https://doi.org/10.1016/j.icarus.2023.115925).

    The following are made available:

    - the Fortran source code of the model initialization module modified for this simulation work: "module_initialize_Titan_lakebreeze3d_xy_shoreline.F"

    - the input files used to run the simulations, in "inputs/"

    - a list of the simulations done ("simus_done_for_paper3D.pdf") and the netCDF outputs:
      "run-##_y0_tsol4.nc.gz" --> slices at a given y (at the center), 4th tsol [for 2D and 3D simulations]
      "run-##_y0_tsol3.nc.gz" --> slices at a given y (at the center), 3rd tsol [for 2D and 3D simulations]
      "run-##_z0_tsol4.nc.gz" --> slices at a given z (at the surface), 4th tsol [only for 3D simulations]
      "run-##_z200_tsol4.nc.gz" --> slices at a given z (at ~200 m), 4th tsol [only for 3D simulations]
      "run-##_tsol4_2am.nc.gz" --> total simulation output at a given time (2am on 4th tsol) [only for 3D simulations]
      "run-##_tsol4_2pm.nc.gz" --> total simulation output at a given time (2pm on 4th tsol) [only for 3D simulations]

    - the Python codes to plot figures from the netCDF output files, in "postprocessing_python/":
      "mtwrf_analysis_1D_t.py" --> plot variables with time at given (x,y,z) [for 2D and 3D simulations]
      "mtwrf_analysis_2D_xt.py" --> plot variables with (x,t) at given (y,z) [for 2D and 3D simulations]
      "mtwrf_analysis_2D_xz.py" --> plot variables with (x,z) at given (y,t) [for 2D and 3D simulations]
      "mtwrf_analysis_2D_xy.py" --> plot variables with (x,y) at given (z,t) [only for 3D simulations]
      The following are the same as the previous ones but plot along a different x-axis (rotated from the one of the netCDF files) [only for 3D simulations]:
      "mtwrf_analysis_1D_t_diagonal.py"
      "mtwrf_analysis_2D_xt_diagonal.py"
      "mtwrf_analysis_2D_xz_diagonal.py"

    - the Matlab variables and figures from the analysis of the simulated lake breeze dimensions, in "postprocessing_matlab/"

  16. MOVIES3D example dataset

    • seanoe.org
    bin
    Updated Mar 2, 2022
    + more versions
    Cite
    Mathieu Doray; Erwan Duhamel; Florence Sanchez; Laurent Berger (2022). MOVIES3D example dataset [Dataset]. http://doi.org/10.17882/58652
    Explore at:
    Available download formats: bin
    Dataset updated
    Mar 2, 2022
    Dataset provided by
    SEANOE
    Authors
    Mathieu Doray; Erwan Duhamel; Florence Sanchez; Laurent Berger
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Time period covered
    Apr 27, 2013
    Area covered
    Description

    This dataset presents fisheries acoustic data in both proprietary Simrad RAW format and international HAC format, recorded onboard R/V Thalassa on 28/04/2013 between 14:56 and 15:16 GMT near the continental shelf edge in the southern Bay of Biscay. Data include typical small pelagic fish schools composed of anchovy and sardine encountered in springtime in this area. The dataset has also been converted to the international SONAR-netCDF4 format described at: https://github.com/ices-publications/sonar-netcdf4. HAC files can be displayed and processed using e.g. the MOVIES3D freeware provided by Ifremer at: http://flotte.ifremer.fr/fleet/presentation-of-the-fleet/logiciels-embarques/movies. SONAR-netCDF4 files can be displayed using standard netCDF viewers and Python notebooks available at: https://gitlab.ifremer.fr/fleet/formats/pysonar-netcdf

  17. Study data for "Accounting for seasonal retrieval errors in the merging of...

    • researchdata.tuwien.ac.at
    zip
    Updated Aug 25, 2025
    Cite
    Pietro Stradiotti; Pietro Stradiotti; Alexander Gruber; Alexander Gruber; Wolfgang Preimesberger; Wolfgang Preimesberger; Wouter Arnoud Dorigo; Wouter Arnoud Dorigo (2025). Study data for "Accounting for seasonal retrieval errors in the merging of multi-sensor satellite soil moisture products" [Dataset]. http://doi.org/10.48436/z0zzp-f4j39
    Explore at:
    zipAvailable download formats
    Dataset updated
    Aug 25, 2025
    Dataset provided by
    TU Wien
    Authors
    Pietro Stradiotti; Pietro Stradiotti; Alexander Gruber; Alexander Gruber; Wolfgang Preimesberger; Wolfgang Preimesberger; Wouter Arnoud Dorigo; Wouter Arnoud Dorigo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data repository contains the accompanying data for the study by Stradiotti et al. (2025), developed as part of the ESA Climate Change Initiative (CCI) Soil Moisture project. Project website: https://climate.esa.int/en/projects/soil-moisture/

    Journal Article (Open Access)

    This dataset was created as part of the following study, which contains a description of the algorithm and validation results.

    Stradiotti, P., Gruber, A., Preimesberger, W., & Dorigo, W. (2025). Accounting for seasonal retrieval errors in the merging of multi-sensor satellite soil moisture products. Science of Remote Sensing, 12, 100242. https://doi.org/10.1016/j.srs.2025.100242

    Summary

    This repository contains the final, merged soil moisture and uncertainty values from Stradiotti et al. (2025), derived using a novel uncertainty quantification and merging scheme. In the accompanying study, we present a method to quantify the seasonal component of satellite soil moisture observations, based on Triple Collocation Analysis. Data from three independent satellite missions are used (from ASCAT, AMSR2, and SMAP). We observe consistent intra-annual variations in measurement uncertainties across all products (primarily caused by dynamics on the land surface such as seasonal vegetation changes), which affect the quality of the received signals. We then use these estimates to merge data from the three missions into a single consistent record, following the approach described by Dorigo et al. (2017). The new (seasonal) uncertainty estimates are propagated through the merging scheme, to enhance the uncertainty characterization of the final merged product provided here.

    Evaluation against in situ data suggests that the estimated uncertainties of the new product are more representative of their true seasonal behaviour, compared to the previously used static approach. Based on these findings, we conclude that using a seasonal TCA approach can provide a more realistic characterization of dataset uncertainty, in particular its temporal variation. However, improvements in the merged soil moisture values are constrained, primarily due to correlated uncertainties among the sensors.

    Technical details

    The dataset provides global daily gridded soil moisture estimates for the 2012-2023 period at 0.25° (~25 km) resolution. Daily images are grouped into one subdirectory per year (YYYY), each containing one netCDF image file per day (DD) and month (MM) on a 2-dimensional (longitude, latitude) grid (CRS: WGS84). All file names follow the naming convention:

    L3S-SSMS-MERGED-SOILMOISTURE-YYYYMMDD000000-fv0.1.nc
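
    A small sketch of how the daily file path could be resolved from this convention (the root directory name is hypothetical; the year subdirectory and file name follow the convention stated above):

    ```python
    # Build the expected path of a daily file from the naming convention above.
    # Assumption: a root directory containing one subdirectory per year (YYYY).
    from datetime import date
    from pathlib import Path

    def daily_file(root: Path, day: date) -> Path:
        name = f"L3S-SSMS-MERGED-SOILMOISTURE-{day:%Y%m%d}000000-fv0.1.nc"
        return root / f"{day:%Y}" / name

    print(daily_file(Path("merged_sm"), date(2015, 6, 1)))
    # merged_sm/2015/L3S-SSMS-MERGED-SOILMOISTURE-20150601000000-fv0.1.nc
    ```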

    Data Variables

    Each netCDF file contains 3 coordinate variables (WGS84 longitude, latitude and time stamp), as well as the following data variables:

    • sm: (float) The Soil Moisture variable contains the daily average volumetric soil moisture content (m3/m3) in the soil surface layer (~0-5 cm) over a whole grid cell (0.25 degree). Based on (merged) observations from ASCAT, AMSR2 and SMAP using the new merging scheme described in our study.
    • sm_uncertainty: (float) The Soil Moisture Uncertainty variable contains the uncertainty estimates (random error) for the ‘sm’ field. Based on the uncertainty estimation and propagation scheme described in our study.
    • dnflag: (int) Indicator for satellite orbit(s) used in the retrieval (day/nighttime). 1=day, 2=night, 3=both
    • flag: (int) Indicator for data quality / missing data indicator. For more details, see netcdf attributes.
    • freqbandID: (int) Indicator for frequency band(s) used in the retrieval. For more details, see netcdf attributes.
    • mode: (int) Indicator for satellite orbit(s) used in the retrieval (ascending, descending)
    • sensor: (int) Indicator for satellite sensor(s) used in the retrieval. For more details, see netcdf attributes.
    • t0: (float) Representative time stamp, based on overpass times of all merged satellites.

    Software to open netCDF files

    After extracting the .nc files from the downloaded zip archive, they can be read by any software that supports Climate and Forecast (CF) conformant netCDF files, such as:

    • Xarray (python)
    • netCDF4 (python)
    • esa_cci_sm (python)
    • Similar tools exist for other programming languages (Matlab, R, etc.)
    • GIS and netCDF tools such as CDO, NCO, QGIS, ArcGIS.
    • You can also use the GUI software Panoply to view the contents of each file.
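
    As a minimal example, one file could be opened with Xarray as sketched below (the path follows the naming convention above; the coordinate names lat/lon are assumptions and should be checked against the files):

    ```python
    # Minimal Xarray sketch for one daily file; variable names follow the list above.
    import xarray as xr

    ds = xr.open_dataset("merged_sm/2015/L3S-SSMS-MERGED-SOILMOISTURE-20150601000000-fv0.1.nc")
    print(ds)                                    # coordinates, variables, global attributes
    print(ds["flag"].attrs)                      # flag meanings are documented in the attributes
    sm = ds["sm"]                                # daily volumetric soil moisture (m3/m3)
    point = sm.sel(lat=48.25, lon=16.25, method="nearest")  # assumed coordinate names lat/lon
    print(float(point))
    ```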

    Funding

    This dataset was produced with funding from the European Space Agency (ESA) Climate Change Initiative (CCI) Plus Soil Moisture Project (CCN 3 to ESRIN Contract No: 4000126684/19/I-NB "ESA CCI+ Phase 1 New R&D on CCI ECVS Soil Moisture"). Project website: https://climate.esa.int/en/projects/soil-moisture/

  18. ERA-NUTS: time-series based on C3S ERA5 for European regions

    • zenodo.org
    nc, zip
    Updated Aug 4, 2022
    Cite
    M. De Felice; M. De Felice; K. Kavvadias; K. Kavvadias (2022). ERA-NUTS: time-series based on C3S ERA5 for European regions [Dataset]. http://doi.org/10.5281/zenodo.2650191
    Explore at:
    zip, ncAvailable download formats
    Dataset updated
    Aug 4, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    M. De Felice; M. De Felice; K. Kavvadias; K. Kavvadias
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    # ERA-NUTS (1980-2018)

    This dataset contains a set of time-series of meteorological variables based on Copernicus Climate Change Service (C3S) ERA5 reanalysis. The data files can be downloaded from here while notebooks and other files can be found on the associated Github repository.

    This data has been generated with the aim of providing hourly time-series of the meteorological variables commonly used for power system modelling and, more generally, for studies on energy systems.

    An example of the analysis that can be performed with ERA-NUTS is shown in this video.

    Important: this dataset is still a work-in-progress; we will add more analysis and variables in the near future. If you spot an error or something strange in the data, please tell us by sending an email or opening an Issue in the associated Github repository.

    ## Data
    The time-series have hourly/daily/monthly frequency and are aggregated following the NUTS 2016 classification. NUTS (Nomenclature of Territorial Units for Statistics) is a European Union standard for referencing the subdivisions of countries (member states, candidate countries and EFTA countries).

    This dataset contains NUTS0/1/2 time-series for the following variables obtained from the ERA5 reanalysis data (in brackets the name of the variable on the Copernicus Data Store and its unit measure):

    - t2m: 2-meter temperature (`2m_temperature`, Celsius degrees)
    - ssrd: Surface solar radiation (`surface_solar_radiation_downwards`, Watt per square meter)
    - ssrdc: Surface solar radiation clear-sky (`surface_solar_radiation_downward_clear_sky`, Watt per square meter)
    - ro: Runoff (`runoff`, millimeters)

    There are also a set of derived variables:
    - ws10: Wind speed at 10 meters (derived from `10m_u_component_of_wind` and `10m_v_component_of_wind`, meters per second)
    - ws100: Wind speed at 100 meters (derived from `100m_u_component_of_wind` and `100m_v_component_of_wind`, meters per second)
    - CS: Clear-Sky index (the ratio between the solar radiation and the solar radiation clear-sky)
    - HDD/CDD: Heating/Cooling Degree days (derived from 2-meter temperature using the EUROSTAT definition)

    For each variable we have 350 599 hourly samples (from 01-01-1980 00:00:00 to 31-12-2019 23:00:00) for 34/115/309 regions (NUTS 0/1/2).

    The data is provided in two formats:

    - NetCDF version 4 (all the variables hourly and CDD/HDD daily). NOTE: the variables are stored as `int16` type using a `scale_factor` of 0.01 to minimise the size of the files.
    - Comma Separated Value ("single index" format for all the variables and time frequencies, and "stacked" format only for daily and monthly)

    All the CSV files are stored in a zipped file for each variable.
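
    As a minimal sketch, one of the NetCDF files could be read with `xarray` as shown below; with the default CF decoding, the `int16` storage with a `scale_factor` of 0.01 mentioned above is converted back to physical units (the file name and the `region` coordinate are assumptions to be checked against the actual files):

    ```python
    # Minimal sketch for reading one ERA-NUTS NetCDF file.
    # Assumptions: file name, variable "t2m", coordinate "region" with NUTS codes.
    import xarray as xr

    ds = xr.open_dataset("t2m_NUTS2.nc")            # decode_cf=True by default: scale_factor applied
    print(ds)                                       # inspect dimensions, coordinates, variables
    t2m_at = ds["t2m"].sel(region="AT13")           # one NUTS-2 region (code assumed)
    monthly = t2m_at.resample(time="1MS").mean()    # aggregate hourly values to monthly means
    print(monthly)
    ```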

    ## Methodology

    The time-series have been generated using the following workflow:

    1. The NetCDF files are downloaded from the Copernicus Data Store ("ERA5 hourly data on single levels from 1979 to present" dataset)
    2. The data is read in R with the climate4r packages and aggregated using the function `get_ts_from_shp` from panas. All the variables are aggregated at the NUTS boundaries using the average, except for the runoff, which consists of the sum of all the grid points within the regional/national borders.
    3. The derived variables (wind speed, CDD/HDD, clear-sky) are computed and all the CSV files are generated using R
    4. The NetCDF files are created using `xarray` in Python 3.7.
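
    Step 4 creates the NetCDF files with `xarray`; a purely illustrative sketch of how the `int16` packing with a `scale_factor` of 0.01 could be specified is shown below (the data layout, variable name and region codes are assumptions, not the authors' code):

    ```python
    # Illustrative only: write a small region time-series to NetCDF with int16
    # storage and scale_factor=0.01, as described in the Data section above.
    import numpy as np
    import pandas as pd
    import xarray as xr

    time = pd.date_range("1980-01-01", periods=24, freq="h")
    regions = ["AT13", "DE30"]                       # example NUTS-2 codes
    data = np.round(np.random.uniform(-5.0, 30.0, size=(len(time), len(regions))), 2)

    ds = xr.Dataset({"t2m": (("time", "region"), data)},
                    coords={"time": time, "region": regions})
    encoding = {"t2m": {"dtype": "int16", "scale_factor": 0.01, "_FillValue": -32767}}
    ds.to_netcdf("t2m_example.nc", encoding=encoding)
    ```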

    NOTE: air temperature, solar radiation, runoff and wind speed hourly data have been rounded with two decimal digits.

    ## Example notebooks

    In the folder `notebooks` on the associated Github repository there are two Jupyter notebooks which show how to deal effectively with the NetCDF data in `xarray` and how to visualise them in several ways using matplotlib or the enlopy package.

    There are currently two notebooks:

    - exploring-ERA-NUTS: shows how to open the NetCDF files (with Dask) and how to manipulate and visualise them.
    - ERA-NUTS-explore-with-widget: explore the datasets interactively with Jupyter and ipywidgets.

    The notebook `exploring-ERA-NUTS` is also available rendered as HTML.

    ## Additional files

    In the folder `additional files` on the associated Github repository there is a map showing the spatial resolution of the ERA5 reanalysis and a CSV file specifying the number of grid points with respect to each NUTS0/1/2 region.

    ## License

    This dataset is released under CC-BY-4.0 license.

  19. Northern elephant seal tracking and diving – raw and curated data

    • data.niaid.nih.gov
    • datasetcatalog.nlm.nih.gov
    • +1more
    zip
    Updated May 14, 2025
    Cite
    Daniel Costa; Rachel Holser; Theresa Keates; Taiki Adachi; Roxanne Beltran; Cory Champagne; Crocker Daniel; Arina Favilla; Melinda Fowler; Juan Pablo Gallo-Reynoso; Chandra Goetsch; Jason Hassrick; Luis Hückstädt; Jessica Kendall-Bar; Sarah Kienle; Carey Kuhn; Jennifer Maresh; Sara Maxwell; Birgitte McDonald; Elizabeth McHuron; Patricia Morris; Yasuhiko Naito; Logan Pallin; Sarah Peterson; Patrick Robinson; Samantha Simmons; Akinori Takahashi; Nicole Teuschel; Michael Tift; Yann Tremblay; Stella Villegas-Amtman; Ken Yoda (2025). Northern elephant seal tracking and diving – raw and curated data [Dataset]. http://doi.org/10.7291/D10D61
    Explore at:
    zipAvailable download formats
    Dataset updated
    May 14, 2025
    Dataset provided by
    Scripps Institution of Oceanography
    Springfield College
    University of Washington
    ICF International (United States)
    Moss Landing Marine Laboratories
    West Chester University
    Nagoya University
    Consolidated Safety Services-Dynamac (United States)
    Centro de Investigación en Alimentación y Desarrollo
    University of California, Santa Cruz
    NOAA National Marine Fisheries Service
    University of Exeter
    Sonoma State University
    University of North Carolina Wilmington
    Baylor University
    University of St Andrews
    United States Geological Survey
    National Institute of Polar Research
    Marine Biodiversity Exploitation and Conservation
    Authors
    Daniel Costa; Rachel Holser; Theresa Keates; Taiki Adachi; Roxanne Beltran; Cory Champagne; Crocker Daniel; Arina Favilla; Melinda Fowler; Juan Pablo Gallo-Reynoso; Chandra Goetsch; Jason Hassrick; Luis Hückstädt; Jessica Kendall-Bar; Sarah Kienle; Carey Kuhn; Jennifer Maresh; Sara Maxwell; Birgitte McDonald; Elizabeth McHuron; Patricia Morris; Yasuhiko Naito; Logan Pallin; Sarah Peterson; Patrick Robinson; Samantha Simmons; Akinori Takahashi; Nicole Teuschel; Michael Tift; Yann Tremblay; Stella Villegas-Amtman; Ken Yoda
    License

    CC0 1.0 Universal (CC0-1.0): https://spdx.org/licenses/CC0-1.0.html

    Description

    Northern elephant seals (Mirounga angustirostris) have been integral to the development and progress of biologging technology and movement data analysis. Adult female elephant seals at Año Nuevo State Park and other colonies along the west coast of North America were tracked annually from 2004 to 2020 for a total of 653 instrument deployments and 561 recoveries. These high-resolution diving and location data have been compiled, curated, and processed. This repository has netCDF files containing the raw tracking and diving data. The processed data are available in a second repository (https://doi.org/10.7291/D18D7W).

    Methods

    These data were collected from biotelemetry devices attached to adult female northern elephant seals (Mirounga angustirostris) from 2004 to 2020. The instruments collected locations (Argos and/or GPS) and continuously recorded depth throughout the animals' trips. Data were processed in MATLAB and R using custom code, the IKNOS package for dive data processing, and the aniMotum package for track processing. The details of data collection and processing are documented in the data descriptor paper associated with this dataset. In addition, all code used to process the data is available on GitHub and Zenodo.

    The data presented here are freely available for use under CC0 (Creative Commons Zero); attribution to the data descriptor (DOI: 10.1038/s41597-024-04084-4) and this Dryad repository is encouraged. We encourage users to reach out to the data owner for richer insight into the dataset. Subsets of this dataset have been made available through other projects and data portals, and we caution users that these are not independent northern elephant seal datasets. This includes the AniBOS/MEOP data portal (https://www.meop.net/database/meop-databases/), the Animal Tracking Network (ATN) (https://portal.atn.ioos.us/), Movebank (https://www.movebank.org/cms/movebank-main), and MegaMove (https://megamove.org/data-portal/).

    Additional data about the instrumented animals, such as morphometrics, demographics, and other biologging data (e.g., acceleration, jaw motion, temperature), are available for many of these animals but are beyond the scope of this dataset. For more information, contact the author at rholser@ucsc.edu.

    Sampling Biases

    Generally, we have been careful to select healthy animals for sedation and instrumentation. For animals deployed at Año Nuevo (most of the tracks), typically individuals with known site fidelity to the colony were selected and if age was known it was usually restricted to 4- to 12-year-olds. Furthermore, the data reported here span two decades of work. During this time, different studies prompted additional non-random population sampling. Examples include focusing on one age for a year, repeat tracking the same individuals two trips in a row, and intentionally selecting previously tracked females who had used a coastal foraging strategy. Many individuals in the dataset have been tracked multiple times. We strongly encourage researchers to evaluate the metadata provided carefully and contact the author with inquiries at rholser@ucsc.edu.

    Code Availability

    All code written for data processing, together with NetCDF data import code for MATLAB, R, and Python, is available on GitHub (https://github.com/rholser/NES_TrackDive_DataProcessing) and Zenodo (https://doi.org/10.5281/zenodo.12511548). Extensive documentation of functions and scripts is also provided there. In addition, the authors have provided code in Python, R, and MATLAB for basic access to the netCDF files (https://github.com/rholser/NES-Read-netCDF). These should serve as a model to enable users unfamiliar with the format to access the data.
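
    As a generic illustration (not the authors' NES-Read-netCDF code), one of the netCDF files could be inspected in Python as follows; the file name is hypothetical and the maintained readers linked above should be preferred:

    ```python
    # Generic inspection sketch; the file name is hypothetical.
    import xarray as xr

    ds = xr.open_dataset("elephant_seal_track.nc")
    print(ds.attrs)        # global metadata (e.g. deployment information)
    print(ds.data_vars)    # tracking and diving variables stored in the file
    ```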

  20. Data from: A Deep Learning-Based Hybrid Model of Global Terrestrial Evaporation

    • data.niaid.nih.gov
    • data.europa.eu
    Updated Jan 21, 2022
    Cite
    Koppa, Akash; Rains, Dominik; Hulsman, Petra; Poyatos, Rafael; Miralles, Diego G. (2022). A Deep Learning-Based Hybrid Model of Global Terrestrial Evaporation [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5220752
    Explore at:
    Dataset updated
    Jan 21, 2022
    Dataset provided by
    Hydro-Climate Extremes Lab (H-CEL), Ghent University
    CREAF, E08193 Bellaterra (Cerdanyola del Vallès), Catalonia, Spain
    Authors
    Koppa, Akash; Rains, Dominik; Hulsman, Petra; Poyatos, Rafael; Miralles, Diego G.
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains the datasets used in the research article "A Deep Learning-Based Hybrid Model of Global Terrestrial Evaporation".

    The repository contains the following files: 1) Input - contains all the processed input used for training the deep learning models and the datasets used for creating the figures in the article. 2) Output - contains the final deep learning models and the outputs (evaporation and transpiration stress factor) from the hybrid model developed in the study.

    Formats: All scripts are in the programming language Python. The datasets are in HDF5 and NetCDF file formats.
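
    A minimal sketch of how the HDF5 and NetCDF files could be inspected in Python (file names are hypothetical; the StressNet repository linked below contains the actual processing code):

    ```python
    # Minimal inspection sketch; file names are hypothetical.
    import h5py
    import xarray as xr

    with h5py.File("input_features.h5", "r") as f:
        f.visit(print)                      # print every group/dataset name in the HDF5 file

    ds = xr.open_dataset("evaporation_output.nc")
    print(ds)                               # dimensions, variables, attributes of the NetCDF output
    ```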

    The codes related to the research article and deep learning model are available in the following repository: https://github.com/akashkoppa/StressNet
