28 datasets found
  1. TNO DGM5/VELMOD31 UTM31 xarray datasets

    • zenodo.org
    bin
    Updated Dec 22, 2023
    Cite
    Dirk Kraaijpoel (2023). TNO DGM5/VELMOD31 UTM31 xarray datasets [Dataset]. http://doi.org/10.5281/zenodo.10425411
    Explore at:
    Available download formats: bin
    Dataset updated
    Dec 22, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Dirk Kraaijpoel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Contains the DGM5 geological model and the VELMOD 3.1 velocity model as xarray datasets in UTM31 coordinates.

    Original data:

    Details DGM-diep V5 | NLOG

    Velmod-3.1 | NLOG

    Format:

    Xarray documentation

  2. Dataset for the article: Robotic Feet Modeled After Ungulates Improve...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Oct 29, 2024
    Cite
    Godon, S (2024). Dataset for the article: Robotic Feet Modeled After Ungulates Improve Locomotion on Soft Wet Grounds [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_12673096
    Explore at:
    Dataset updated
    Oct 29, 2024
    Dataset provided by
    Ristolainen, A
    Godon, S
    Kruusmaa, M
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains data for three different experiments presented in the paper:

    (1) moose_feet (40 files): The moose leg experiments are labeled as ax_y.nc, where 'a' indicates attached digits and 'f' indicates free digits. The number 'x' is either 1 (front leg) or 2 (hind leg), and the number 'y' is an increment from 0 to 9 representing the 10 samples of each set.

    (2) synthetic_feet (120 files): The synthetic feet experiments are labeled as lw_a_y.nc, where 'lw' (Low Water content) can be replaced by 'mw' (Medium Water content) or 'vw' (Vast Water content). The 'a' can be 'o' (Original Go1 foot), 'r' (Rigid extended foot), 'f' (Free digits anisotropic foot), or 'a' (Attached digits). Similar to (1), the last number is an increment from 0 to 9.

    (3) Go1 (15 files): The locomotion experiments of the quadruped robot on the track are labeled as condition_y.nc, where 'condition' is either 'hard_ground' for experiments on hard ground, 'bioinspired_feet' for the locomotion of the quadruped on mud using bio-inspired anisotropic feet, or 'original_feet' for experiments where the robot used the original Go1 feet. The 'y' is an increment from 0 to 4.

    The files for moose_feet and synthetic_feet contain timestamp (s), position (m), and force (N) data.

    The files for Go1 contain timestamp (s), position (rad), velocity (rad/s), torque (Nm) data for all 12 motors, and the distance traveled by the robot (m).

    All files can be read using xarray datasets (https://docs.xarray.dev/en/stable/generated/xarray.Dataset.html).
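    As a minimal sketch of working with one of these files (the variable names below follow the description above but are assumptions, not the actual NetCDF schema; a synthetic dataset stands in for a real file such as a1_0.nc):

```python
import numpy as np
import xarray as xr

# Synthetic stand-in for one experiment file such as "a1_0.nc"; with the
# real data you would instead call xr.open_dataset("a1_0.nc").
ds = xr.Dataset(
    {
        "position": ("timestamp", np.linspace(0.0, 0.05, 100)),  # metres
        "force": ("timestamp", np.linspace(0.0, 50.0, 100)),     # newtons
    },
    coords={"timestamp": np.linspace(0.0, 10.0, 100)},           # seconds
)

# Peak force and the time at which it occurs
peak_force = float(ds["force"].max())
peak_time = float(ds["timestamp"][int(ds["force"].argmax())])
```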

  3. Front Toolbox benchmark data

    • zenodo.org
    zip
    Updated Jun 29, 2025
    Cite
    Clément Haëck (2025). Front Toolbox benchmark data [Dataset]. http://doi.org/10.5281/zenodo.15769618
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 29, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Clément Haëck
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Fronts-toolbox is a collection of tools to detect oceanic fronts in Python.

    Some front-detection algorithms are complex and thus may perform poorly when written directly in Python.
    This library provides a framework of Numba accelerated functions that can be applied easily to Numpy arrays, Dask arrays, or Xarray data.
    It could also support Cuda arrays if necessary.
    This makes creating and modifying those functions easier (especially for non-specialists) than if they were written in Fortran or C extensions.

    The data in this repository are used to test and showcase the various algorithms.

  4. Pydata/Xarray: V0.9.1

    • eprints.soton.ac.uk
    Updated Sep 24, 2019
    Cite
    Hoyer, Stephan; Fitzgerald, Clark; Hamman, Joe; Akleeman,; Kluyver, Thomas; Maussion, Fabien; Roos, Maximilian; Markel,; Helmus, Jonathan J.; Cable, Pete; Wolfram, Phillip; Bovy, Benoit; Abernathey, Ryan; Noel, Vincent; Kanmae, Takeshi; Miles, Alistair; Hill, Spencer; Crusaderky,; Sinclair, Scott; Filipe,; Guedes, Rafael; Ebrevdo,; Chunweiyuan,; Delley, Yves; Wilson, Robin; Signell, Julia; Laliberte, Frederic; Malevich, Brewster; Hilboll, Andreas (2019). Pydata/Xarray: V0.9.1 [Dataset]. http://doi.org/10.5281/zenodo.264282
    Explore at:
    Dataset updated
    Sep 24, 2019
    Dataset provided by
    Zenodo
    Authors
    Hoyer, Stephan; Fitzgerald, Clark; Hamman, Joe; Akleeman,; Kluyver, Thomas; Maussion, Fabien; Roos, Maximilian; Markel,; Helmus, Jonathan J.; Cable, Pete; Wolfram, Phillip; Bovy, Benoit; Abernathey, Ryan; Noel, Vincent; Kanmae, Takeshi; Miles, Alistair; Hill, Spencer; Crusaderky,; Sinclair, Scott; Filipe,; Guedes, Rafael; Ebrevdo,; Chunweiyuan,; Delley, Yves; Wilson, Robin; Signell, Julia; Laliberte, Frederic; Malevich, Brewster; Hilboll, Andreas
    Description

    Renamed the "Unindexed dimensions" section in the Dataset and DataArray repr (added in v0.9.0) to "Dimensions without coordinates".

  5. Revisiting ε Eridani with NEID: Line Parameter Data Cube

    • zenodo.org
    bin, nc
    Updated Nov 8, 2023
    Cite
    Sarah Jiang (2023). Revisiting ε Eridani with NEID: Line Parameter Data Cube [Dataset]. http://doi.org/10.5281/zenodo.10085919
    Explore at:
    Available download formats: nc, bin
    Dataset updated
    Nov 8, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Sarah Jiang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains the data cube (an xarray DataArray) used in Jiang et al. 2023 Revisiting ε Eridani with NEID: Identifying New Activity-Sensitive Lines in a Young K Dwarf Star (in press). The cube contains all line parameters (centroid, depth, FWHM, and integrated flux) for each line in the compiled line list over 32 NEID observations of ε Eridani spanning a six-month period from September 2021 to February 2022, as well as the measured RV and activity indices for each observation. For information on how the line parameters are measured, see the paper.
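    As an illustrative sketch of slicing such a line-parameter cube (the dimension and parameter names below are assumptions for illustration, not the repository's actual schema):

```python
import numpy as np
import xarray as xr

# Hypothetical stand-in for the line-parameter DataArray: 32 observations,
# 500 lines, 4 parameters. The real cube would be loaded from the repository.
cube = xr.DataArray(
    np.random.default_rng(0).random((32, 500, 4)),
    dims=("observation", "line", "parameter"),
    coords={"parameter": ["centroid", "depth", "fwhm", "integrated_flux"]},
)

# Depth of every line in the first observation
depths = cube.sel(parameter="depth").isel(observation=0)
```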

  6. SUPERSEDED - CARDAMOM driving data and C-cycle model outputs to accompany...

    • dtechtive.com
    • find.data.gov.scot
    pdf, txt, zip
    Updated Aug 23, 2022
    Cite
    Global Change Institute, School of GeoSciences, University of Edinburgh (2022). SUPERSEDED - CARDAMOM driving data and C-cycle model outputs to accompany 'Resolving scale-variance in the carbon dynamics of fragmented, mixed-use landscapes estimated using Model-Data Fusion' [Dataset]. http://doi.org/10.7488/ds/3509
    Explore at:
    Available download formats: zip (378.1 MB), txt (0.0166 MB), pdf (0.496 MB), zip (277 MB)
    Dataset updated
    Aug 23, 2022
    Dataset provided by
    Global Change Institute, School of GeoSciences, University of Edinburgh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    UNITED KINGDOM
    Description

    '## This item has been replaced by the one which can be found at https://datashare.ed.ac.uk/handle/10283/4849 - https://doi.org/10.7488/ds/3843 ##' This archive contains the driving data and selected model outputs to accompany the manuscript: 'Resolving scale-variance in the carbon dynamics of fragmented, mixed-use landscapes estimated using Model-Data Fusion', submitted to Biogeosciences Discussions. The archive contains two zip files containing: (i) the observations and driving data assimilated into CARDAMOM; and (ii) a selection of model output, including the carbon (C) stocks for each DALEC pool, and a compilation of key C fluxes. Data and model output are stored as netcdf files. The xarray package (https://docs.xarray.dev/en/stable/index.html) provides a convenient starting point for using netcdf files within python environments. More details are provided in the document 'Milodowski_etal_dataset_description.pdf'

  7. (HS 2) Automate Workflows using Jupyter notebook to create Large Extent...

    • search.dataone.org
    • hydroshare.org
    Updated Oct 19, 2024
    + more versions
    Cite
    Young-Don Choi (2024). (HS 2) Automate Workflows using Jupyter notebook to create Large Extent Spatial Datasets [Dataset]. http://doi.org/10.4211/hs.a52df87347ef47c388d9633925cde9ad
    Explore at:
    Dataset updated
    Oct 19, 2024
    Dataset provided by
    Hydroshare
    Authors
    Young-Don Choi
    Description

    We implemented automated workflows using Jupyter notebooks for each state. The GIS processing, crucial for merging, extracting, and projecting GeoTIFF data, was performed using ArcPy, a Python package for geographic data analysis, conversion, and management within ArcGIS (Toms, 2015). After generating state-scale LES (large extent spatial) datasets in GeoTIFF format, we utilized the xarray and rioxarray Python packages to convert GeoTIFF to NetCDF. Xarray is a Python package for working with multi-dimensional arrays, and rioxarray is the rasterio-based xarray extension; rasterio is a Python library for reading and writing GeoTIFF and other raster formats. Xarray facilitated data manipulation and metadata addition in the NetCDF file, while rioxarray was used to save the GeoTIFF as NetCDF. These procedures resulted in the creation of three HydroShare resources (HS 3, HS 4, and HS 5) for sharing state-scale LES datasets. Notably, due to licensing constraints with ArcGIS Pro, a commercial GIS software, the Jupyter notebook development was undertaken on a Windows OS.
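    The GeoTIFF-to-NetCDF step can be sketched as follows. This is a hedged illustration, not the resource's actual code: the file names are hypothetical, the rioxarray call is shown only in a comment, and a small synthetic raster stands in for an opened GeoTIFF so the example is self-contained.

```python
import numpy as np
import xarray as xr

# With rioxarray installed, a GeoTIFF would be opened as a DataArray, e.g.:
#   import rioxarray
#   da = rioxarray.open_rasterio("state_les.tif")   # hypothetical file name
# Here a small synthetic raster stands in for the opened GeoTIFF.
da = xr.DataArray(
    np.zeros((1, 4, 5), dtype="float32"),
    dims=("band", "y", "x"),
    name="les",
)

# xarray is used to attach metadata before saving as NetCDF
da.attrs["long_name"] = "large extent spatial dataset"
ds = da.to_dataset()
# ds.to_netcdf("state_les.nc")  # writing requires a NetCDF backend
```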

  8. Zebrafish with lyz:EGFP expressing neutrophils: Mesh Well Inserts Z-stack 1

    • zenodo.org
    nc
    Updated Aug 17, 2023
    Cite
    John Efromson (2023). Zebrafish with lyz:EGFP expressing neutrophils: Mesh Well Inserts Z-stack 1 [Dataset]. http://doi.org/10.5281/zenodo.8035205
    Explore at:
    Available download formats: nc
    Dataset updated
    Aug 17, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    John Efromson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    96-well plate z-stack of zebrafish with lyz:EGFP expressing neutrophils acquired with a multi-camera array microscope (MCAM)(Ramona Optics Inc., Durham, NC, USA). Mesh well inserts are used and half of the zebrafish on the plate were injected with csf3r morpholino. The overall z-stack is broken into four files.

    HDF5 files can be opened using open source Python software: https://docs.xarray.dev/

  9. Storage and Transit Time Data and Code

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 12, 2024
    + more versions
    Cite
    Andrew Felton (2024). Storage and Transit Time Data and Code [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8136816
    Explore at:
    Dataset updated
    Jun 12, 2024
    Dataset authored and provided by
    Andrew Felton
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Author: Andrew J. Felton
    Date: 5/5/2024

    This R project contains the primary code and data (following pre-processing in Python) used for data production, manipulation, visualization, analysis, and figure production for the study entitled:

    "Global estimates of the storage and transit time of water through vegetation"

    Please note that 'turnover' and 'transit' are used interchangeably in this project.

    Data information:

    The data folder contains key data sets used for analysis. In particular:

    "data/turnover_from_python/updated/annual/multi_year_average/average_annual_turnover.nc" contains a global array summarizing five-year (2016-2020) averages of annual transit, storage, canopy transpiration, and number of months of data. This is the core dataset for the analysis; however, each folder contains much more data, including a dataset for each year of the analysis. Data are also available in separate .csv files for each land cover type. Other data can be found for the minimum, monthly, and seasonal transit time in their respective folders. These data were produced using the Python code found in the "supporting_code" folder, given the ease of working with .nc files and the EASE grid in the xarray Python module. R was used primarily for data visualization purposes. The remaining files in the "data" and "data/supporting_data" folders primarily contain ground-based estimates of storage and transit found in public databases or through a literature search, but have been extensively processed and filtered here.

    Code information

    Python scripts can be found in the "supporting_code" folder.

    Each R script in this project has a particular function:

    01_start.R: This script loads the R packages used in the analysis, sets the directory, and imports custom functions for the project. You can also load in the main transit time (turnover) datasets here using the source() function.

    02_functions.R: This script contains the custom functions for this analysis, primarily to work with importing the seasonal transit data. Load this using the source() function in the 01_start.R script.

    03_generate_data.R: This script is not necessary to run and is primarily for documentation. The main role of this code was to import and wrangle the data needed to calculate ground-based estimates of aboveground water storage.

    04_annual_turnover_storage_import.R: This script imports the annual turnover and storage data for each landcover type. You load in these data from the 01_start.R script using the source() function.

    05_minimum_turnover_storage_import.R: This script imports the minimum turnover and storage data for each landcover type. Minimum is defined as the lowest monthly estimate. You load in these data from the 01_start.R script using the source() function.

    06_figures_tables.R: This is the main workhorse for figure/table production and supporting analyses. This script generates the key figures and summary statistics used in the study, which are then saved in the manuscript_figures folder. Note that all maps were produced using Python code found in the "supporting_code" folder.

  10. ASTE Test Data

    • figshare.com
    application/x-gzip
    Updated Oct 27, 2020
    Cite
    Timothy Smith (2020). ASTE Test Data [Dataset]. http://doi.org/10.6084/m9.figshare.13150859.v1
    Explore at:
    Available download formats: application/x-gzip
    Dataset updated
    Oct 27, 2020
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Timothy Smith
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Test data for ASTE Release 1 integration with ECCOv4-py.

  11. Zebrafish with lyz:EGFP expressing neutrophils: csf3r_MO injected stack

    • zenodo.org
    nc
    Updated Aug 17, 2023
    Cite
    John Efromson (2023). Zebrafish with lyz:EGFP expressing neutrophils: csf3r_MO injected stack [Dataset]. http://doi.org/10.5281/zenodo.8035102
    Explore at:
    Available download formats: nc
    Dataset updated
    Aug 17, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    John Efromson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    96-well plate z-stack of zebrafish with lyz:EGFP expressing neutrophils acquired with a multi-camera array microscope (MCAM)(Ramona Optics Inc., Durham, NC, USA). Zebrafish larvae have been injected with csf3r morpholino.

    HDF5 files can be opened using open source Python software: https://docs.xarray.dev/

  12. Materials for the CUAHSI's Workshops at the National Water Center Bootcamp...

    • search.dataone.org
    • hydroshare.org
    • +1more
    Updated Jul 13, 2024
    + more versions
    Cite
    Irene Garousi-Nejad; Anthony M. Castronova (2024). Materials for the CUAHSI's Workshops at the National Water Center Bootcamp 2024 [Dataset]. https://search.dataone.org/view/sha256%3A2db498bba4a2d5c65a191c2366ab5a898ceaa7f70ddd45d032270ab7b015e368
    Explore at:
    Dataset updated
    Jul 13, 2024
    Dataset provided by
    Hydroshare
    Authors
    Irene Garousi-Nejad; Anthony M. Castronova
    Time period covered
    Jun 19, 2024 - Jun 21, 2024
    Description

    This resource includes materials for two workshops: (1) FAIR Data Management and (2) Advanced Application of Python for Hydrology and Scientific Storytelling, both prepared for presentation at the NWC Summer Institute BootCamp 2024.

  13. Sentinel-1 RTC imagery processed by ASF over central Himalaya in High...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Oct 28, 2022
    Cite
    Henderson, Scott (2022). Sentinel-1 RTC imagery processed by ASF over central Himalaya in High Mountain Asia [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7126242
    Explore at:
    Dataset updated
    Oct 28, 2022
    Dataset provided by
    Scheick, Jessica
    Marshall, Emma
    Cherian, Deepak
    Henderson, Scott
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    High-mountain Asia, Himalayas
    Description

    This is a dataset of Sentinel-1 radiometric terrain corrected (RTC) imagery processed by the Alaska Satellite Facility covering a region within the Central Himalaya. It accompanies a tutorial demonstrating accessing and working with Sentinel-1 RTC imagery using xarray and other open source python packages.

  14. xesmf netcdf files for testing

    • figshare.com
    application/x-gzip
    Updated Feb 9, 2025
    Cite
    Raphael Dussin (2025). xesmf netcdf files for testing [Dataset]. http://doi.org/10.6084/m9.figshare.28378283.v1
    Explore at:
    Available download formats: application/x-gzip
    Dataset updated
    Feb 9, 2025
    Dataset provided by
    figshare
    Authors
    Raphael Dussin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Testing files for the xesmf remapping package.

  15. Collection of Materials for the CUAHSI's Workshops at the National Water...

    • hydroshare.org
    zip
    Updated Jun 10, 2025
    Cite
    Irene Garousi-Nejad; Anthony M. Castronova (2025). Collection of Materials for the CUAHSI's Workshops at the National Water Center Bootcamp 2025 [Dataset]. https://www.hydroshare.org/resource/bae7841593ec40929a2f6cd6b5871c9c
    Explore at:
    Available download formats: zip (832 bytes)
    Dataset updated
    Jun 10, 2025
    Dataset provided by
    HydroShare
    Authors
    Irene Garousi-Nejad; Anthony M. Castronova
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This resource includes materials for three workshops: (1) FAIR data management and collaborating on simulation data in the cloud, (2) advanced application of Python for working with high value environmental datasets, and (3) configuring and running a NextGen simulation and analyzing model outputs.

  16. GloCE v1.0: Global CO2 Enhancement Dataset 2019-2023

    • zenodo.org
    bin
    Updated Apr 22, 2025
    Cite
    Yulun Zhou; Pingyu Fan; Jiangong Liu; Yuan Xu; Bo Huang; Chris Webster (2025). GloCE v1.0: Global CO2 Enhancement Dataset 2019-2023 [Dataset]. http://doi.org/10.5281/zenodo.15209825
    Explore at:
    Available download formats: bin
    Dataset updated
    Apr 22, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Yulun Zhou; Pingyu Fan; Jiangong Liu; Yuan Xu; Bo Huang; Chris Webster
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We present a globally consistent, satellite-derived dataset of CO_2 enhancement (ΔXCO_2), quantifying the spatially resolved excess in atmospheric CO_2 concentrations as a collective consequence of anthropogenic emissions and terrestrial carbon uptake. This dataset is generated from the deviations of NASA's OCO-3 satellite retrievals comprising 54 million observations across more than 200 countries from 2019 to 2023.

    • The dataset is now encrypted and will be openly accessible once the article review process completes.
    • If you are eager for early access to the dataset, please email

    Dear reviewers, please download the datasets here and access using the password enclosed in the review documents. Many thanks!

    Data Descriptions -----------------------------------------

    • CO2_enhancement_global.nc contains all enhancement data globally.
    • CO2_enhancement_cities.nc contains all enhancement data in global urban areas.
    • Each data row contains the following columns:
    • Datasets are stored in netcdf files and can be accessed using the Python code below:

    # install prerequisites
    ! pip install netcdf4
    ! pip install h5netcdf

    # read the CO2 enhancement data
    import xarray as xr
    fn = './CO2_Enhancements_Global.nc'
    data = xr.open_dataset(fn)
    type(data)   # xarray.core.dataset.Dataset

    Please cite at least one of the following for any use of the CO2E dataset.

    Zhou, Y.*, Fan, P., Liu, J., Xu, Y., Huang, B., Webster, C. (2025). GloCE v1.0: Global CO2 Enhancement Dataset 2019-2023 [Data set]. Zenodo. https://doi.org/10.5281/zenodo.15209825

    Fan, P., Liu, J., Xu, Y., Huang, B., Webster, C., & Zhou, Y*. (Under Review) A global dataset of CO2 enhancements during 2019-2023.

    For any data inquiries, please email Yulun Zhou at yulunzhou@hku.hk.

  17. Training and test datasets used for building graph convolutional deep neural...

    • figshare.com
    hdf
    Updated Sep 5, 2019
    Cite
    Prakash Chandra Rathi; R. Frederick Ludlow; Marcel L. Verdonk (2019). Training and test datasets used for building graph convolutional deep neural network model for prediction molecular electrostatic surfaces [Dataset]. http://doi.org/10.6084/m9.figshare.9768071.v1
    Explore at:
    Available download formats: hdf
    Dataset updated
    Sep 5, 2019
    Dataset provided by
    figshare
    Authors
    Prakash Chandra Rathi; R. Frederick Ludlow; Marcel L. Verdonk
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The model built using these datasets can be found at https://github.com/AstexUK/ESP_DNN/tree/master/esp_dnn. The datasets themselves can be opened using the xarray Python library (http://xarray.pydata.org/en/stable/#).

  18. Deep learning four decades of human migration: datasets

    • zenodo.org
    csv, nc
    Updated Jul 3, 2025
    Cite
    Thomas Gaskin; Guy Abel (2025). Deep learning four decades of human migration: datasets [Dataset]. http://doi.org/10.5281/zenodo.15778301
    Explore at:
    Available download formats: nc, csv
    Dataset updated
    Jul 3, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Thomas Gaskin; Guy Abel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This Zenodo repository contains all migration flow estimates associated with the paper "Deep learning four decades of human migration." Evaluation code, training data, trained neural networks, and smaller flow datasets are available in the main GitHub repository, which also provides detailed instructions on data sourcing. Due to file size limits, the larger datasets are archived here.

    Data is available in both NetCDF (.nc) and CSV (.csv) formats. The NetCDF format is more compact and pre-indexed, making it suitable for large files. In Python, datasets can be opened as xarray.Dataset objects, enabling coordinate-based data selection.

    Each dataset uses the following coordinate conventions:

    • Year: 1990–2023
    • Birth ISO: Country of birth (UN ISO3)
    • Origin ISO: Country of origin (UN ISO3)
    • Destination ISO: Destination country (UN ISO3)
    • Country ISO: Used for net migration data (UN ISO3)

    The following data files are provided:

    • T.nc: Full table of flows disaggregated by country of birth. Dimensions: Year, Birth ISO, Origin ISO, Destination ISO
    • flows.nc: Total origin-destination flows (equivalent to T summed over Birth ISO). Dimensions: Year, Origin ISO, Destination ISO
    • net_migration.nc: Net migration data by country. Dimensions: Year, Country ISO
    • stocks.nc: Stock estimates for each country pair. Dimensions: Year, Origin ISO (corresponding to Birth ISO), Destination ISO
    • test_flows.nc: Flow estimates on a randomly selected set of test edges, used for model validation

    Additionally, two CSV files are provided for convenience:

    • mig_unilateral.csv: Unilateral migration estimates per country, comprising:
      • imm: Total immigration flows
      • emi: Total emigration flows
      • net: Net migration
      • imm_pop: Total immigrant population (non-native-born)
      • emi_pop: Total emigrant population (living abroad)
    • mig_bilateral.csv: Bilateral flow data, comprising:
      • mig_prev: Total origin-destination flows
      • mig_brth: Total birth-destination flows, where Origin ISO reflects place of birth

    Each dataset includes a mean variable (mean estimate) and a std variable (standard deviation of the estimate).

    An ISO3 conversion table is also provided.
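    The coordinate conventions above can be exercised with a minimal sketch. A tiny synthetic dataset stands in for flows.nc here so the example is self-contained; with the real file you would instead call xr.open_dataset("flows.nc"), and the country codes below are illustrative.

```python
import numpy as np
import xarray as xr

# Synthetic stand-in for flows.nc with dims Year, Origin ISO, Destination ISO
isos = ["FRA", "DEU", "POL"]
ds = xr.Dataset(
    {
        "mean": (
            ("Year", "Origin ISO", "Destination ISO"),
            np.arange(18, dtype=float).reshape(2, 3, 3),
        )
    },
    coords={"Year": [1990, 1991], "Origin ISO": isos, "Destination ISO": isos},
)

# Coordinate-based selection: all flows out of Germany in 1991
de_out = ds["mean"].sel({"Year": 1991, "Origin ISO": "DEU"})
```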

  19. next-day-wildfire-spread

    • huggingface.co
    Updated Aug 9, 2024
    Cite
    Andrzej Szablewski (2024). next-day-wildfire-spread [Dataset]. https://huggingface.co/datasets/TheRootOf3/next-day-wildfire-spread
    Explore at:
    Dataset updated
    Aug 9, 2024
    Authors
    Andrzej Szablewski
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Next Day Wildfire Spread Dataset

    This dataset is an xarray version of the original Next Day Wildfire Spread dataset. It comes in three splits: train, eval and test. Note: since the original dataset does not contain spatio-temporal information, the xarray coordinates have been set to arbitrary ranges (0-63 for spatial dimensions and 0-number_of_samples for the temporal dimension).

    Example

    To open a train split of the dataset and show an elevation plot at time=2137:… See the full description on the dataset page: https://huggingface.co/datasets/TheRootOf3/next-day-wildfire-spread.

  20. IMMEC_dMFA_historic: Dataset and code for "Plastics in the German Building...

    • zenodo.org
    zip
    Updated Apr 25, 2025
    Cite
    Sarah Schmidt; Xavier-François Verni; Thomas Gibon; David Laner (2025). IMMEC_dMFA_historic: Dataset and code for "Plastics in the German Building and Infrastructure Sector: A High-Resolution Dataset on Historical Flows, Stocks, and Legacy Substance Contamination" [Dataset]. http://doi.org/10.5281/zenodo.15049210
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 25, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Sarah Schmidt; Xavier-François Verni; Thomas Gibon; David Laner
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    1. Dataset Description

    This dataset provides simulated data on plastic and substance flows and stocks in buildings and infrastructure as described in the data article "Plastics in the German Building and Infrastructure Sector: A High-Resolution Dataset on Historical Flows, Stocks, and Legacy Substance Contamination". Besides simulated data, the repository contains input data and model files used to produce the simulated data.

    Files Included

    Data & Data Visualization: The dataset contains input data and simulated data for the six main plastic applications in buildings and infrastructure in Germany in the period from 1950 to 2023, which are profiles, flooring, pipes, insulation material, cable insulations, and films. For each application the data are provided in a sub-directory (1_ ... 6_) following the structure described below.

    Input Data:
    The input data are stored in an xlsx-file with three sheets: flows, parameters, and data quality assessment. The data sources for all input data are detailed in the Supplementary Material of the linked Data in Brief article.

    Simulated Data:
    Simulated data are stored in a sub-folder, which contains:

    • Data visualization:
      • flows_and_stocks_by_product_type.png: Illustration of consumed products, in-use stocks, and end-of-life flows, aggregated by product type (median values).
      • flows_and_stocks_by_polymer.png: Illustration of consumed products, in-use stocks, and end-of-life flows, aggregated by polymer (median values).
      • flows_and_stocks_with_uncertainty.png: Illustration of consumed products, in-use stocks, and end-of-life flows, aggregated by product (median values and 68% confidence interval).
      • contaminants_in_F3-4.png: Illustration of simulated legacy contaminant concentrations in consumed products (median values and 68% confidence interval).
      • contaminants_in_F4-5.png: Illustration of simulated legacy contaminant concentrations in end-of-life-flows (median values and 68% confidence interval).
    • Data:
      • simulated_data_[product].xlsx – Time series of flow and stock values, aggregated by product, type, polymer, and substance. Each data point includes:
        • Mean
        • Standard deviation
        • Median
        • 2.5%-quantile
        • 16%-quantile
        • 84%-quantile
        • 97.5%-quantile
      • MFA_model.pkl.gz – Model structure and input parameters, including:
        • Model classification – A dictionary summarizing the model structure {model_dimension: [items per model dimension]}
        • param_df – A dataframe containing input parameter values for each Monte Carlo run
      • outputmatrix.pkl.gz – Matrix of deterministic values
      • openlooprecycling.pkl – Xarray DataArray containing flow values of flow E7.1 for open-loop recycling (only available for sub-models that generate recycled plastics for open-loop recycling)
      • full_arrays-folder (contains non-aggregated data for all Monte Carlo runs):
        • flow_[flow_ID].pkl / stock_[stock_ID].pkl – Complete simulated flow and stock data.

    Note: All files in the [product]/simulated_data folder are automatically replaced with updated model results upon execution of immec_dmfa_calculate_submodels.py.

    To reduce storage requirements, data are stored in gzipped pickle files (.pkl.gz), while smaller files are provided as pickle files (.pkl). To open the files, users can use Python with the following code snippet:

    import gzip
    import pickle

    # Load a gzipped pickle file
    with gzip.open("filename.pkl.gz", "rb") as f:
        data = pickle.load(f)

    # Load a regular pickle file
    with open("filename.pkl", "rb") as f:
        data = pickle.load(f)

    Please note that opening pickle files requires compatible versions of numpy and pandas, as the files may have been created using version-specific data structures. If you encounter errors, ensure your package versions match those used during file creation (pandas: 2.2.3, numpy: 2.2.4).
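    A quick way to compare your environment against those versions is to print what is installed:

```python
# Print installed versions to compare against those used during file
# creation (pandas 2.2.3, numpy 2.2.4, as noted above).
import numpy as np
import pandas as pd

print("pandas:", pd.__version__)
print("numpy:", np.__version__)
```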

    Simulated data are provided as Xarray datasets, a data structure designed for efficient handling, analysis, and visualization of multi-dimensional labeled data. For more details on using Xarray, please refer to the official documentation: https://docs.xarray.dev/en/stable/
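    As a minimal sketch of working with such an object, the snippet below builds a synthetic DataArray with a Monte Carlo "run" dimension and computes the median and 68% confidence interval across runs, mirroring the aggregation used for the simulated_data xlsx files. The dimension names and values here are illustrative, not taken from the dataset:

```python
# Illustrative only: a synthetic stand-in for a flow array with one value
# per Monte Carlo run and year (dimension names are assumptions).
import numpy as np
import xarray as xr

rng = np.random.default_rng(0)
flow = xr.DataArray(
    rng.lognormal(mean=0.0, sigma=0.3, size=(1000, 74)),
    dims=("run", "time"),
    coords={"time": np.arange(1950, 2024)},
    name="flow",
)

# Median and 68% confidence interval across Monte Carlo runs
median = flow.median(dim="run")
ci_lower, ci_upper = flow.quantile([0.16, 0.84], dim="run")

print(dict(median.sizes))
```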

    Core Model Files:

    • immec_dmfa_calculate_submodels.py – The primary model file, orchestrating the execution by calling functions from other files, running simulations, and storing results.
    • immec_dmfa_setup.py – Sets up the material flow model, imports all input data in the required format, and stores simulated data.
    • immec_dmfa_calculations.py – Implements mass balance equations and stock modeling equations to solve the model.
    • immec_dmfa_visualization.py – Provides functions to visualize simulated flows, stocks, and substance concentrations.
    • requirements.txt – Lists the required Python packages for running the model.

    Computational Considerations:
    During model execution, large arrays are generated, requiring significant memory. To enable computation on standard computers, Monte Carlo simulations are split into multiple chunks:

    • The number of runs per chunk is specified for each submodel in model_aspects.xlsx.
    • The number of chunks is set in immec_dmfa_calculate_submodels.py.
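    The chunking idea can be sketched as follows. This is a simplified illustration with made-up function and variable names, not the actual implementation; in the real model, each chunk's results are aggregated and written to disk so that only one chunk's arrays are held in memory at a time:

```python
# Illustrative sketch of chunked Monte Carlo execution (names and numbers
# are made up; actual chunk sizes come from model_aspects.xlsx).
import numpy as np

def run_chunk(n_runs, seed):
    """Stand-in for one sub-model simulation returning per-run results."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=100.0, scale=10.0, size=n_runs)

def run_in_chunks(total_runs, runs_per_chunk):
    chunks = []
    for i, start in enumerate(range(0, total_runs, runs_per_chunk)):
        n = min(runs_per_chunk, total_runs - start)
        # Each chunk is simulated independently; collected here only for
        # the demo, whereas the real model stores per-chunk results.
        chunks.append(run_chunk(n, seed=i))
    return np.concatenate(chunks)

out = run_in_chunks(total_runs=1000, runs_per_chunk=256)
print(out.shape)  # (1000,)
```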

    Dependencies
    The model relies on the ODYM framework. To run the model, ODYM must be downloaded from https://github.com/IndEcol/ODYM (S. Pauliuk, N. Heeren, ODYM — An open software framework for studying dynamic material systems: Principles, implementation, and data structures, Journal of Industrial Ecology 24 (2020) 446–458. https://doi.org/10.1111/jiec.12952.)

    7_Model_Structure:

    • model_aspects.xlsx: Overview of model items in each dimension of each sub-model
    • parameters.xlsx: Overview of model parameters
    • processes.xlsx: Overview of processes
    • flows.xlsx: Overview of flows (P_Start and P_End mark the process-ID of the source and target of each flow)
    • stocks.xlsx: Overview of stocks

    8_Additional_Data: This folder contains supplementary data used in the model, including substance concentrations, data quality assessment scores, open-loop recycling distributions, and lifetime distributions.

    • concentrations.xlsx – Substance concentrations in plastic products, provided as average, minimum, and maximum values.
    • pedigree.xlsx – Pedigree scores for data quality assessment, following the methodology described in: D. Laner, J. Feketitsch, H. Rechberger, J. Fellner (2016). A Novel Approach to Characterize Data Uncertainty in Material Flow Analysis and its Application to Plastics Flows in Austria. Journal of Industrial Ecology, 20, 1050–1063. https://doi.org/10.1111/jiec.12326.
    • open_loop_recycling.xlsx – Distribution of open-loop recycled plastics into other plastic applications in buildings and infrastructure.
    • Lifetime_Distributions
      • hibernation.xlsx – Assumed retention time of products in hibernating stocks.
      • lifetime_dict.pkl – Dictionary containing Weibull functions, used to determine the best fits for LifetimeInputs.xlsx.
      • LifetimeInputs.xlsx – Input data for identifying lifetime functions.
      • LifetimeParameters.xlsx – Derived lifetime parameters, used in dynamic stock modeling.
      • Lifetimes.ipynb – Jupyter Notebook containing code for identifying suitable lifetime distribution parameters
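    As a rough illustration of how Weibull lifetime functions feed into dynamic stock modeling, the following sketch computes an in-use stock from synthetic inflows via a Weibull survival function. All parameter values and names are made up, not taken from LifetimeParameters.xlsx:

```python
# Illustrative dynamic stock model: stock in year y is the sum over past
# cohorts of inflow times the Weibull survival of the cohort's age.
import numpy as np

def weibull_survival(t, shape, scale):
    """Fraction of a cohort still in use t years after installation."""
    return np.exp(-((t / scale) ** shape))

years = np.arange(1950, 2024)
inflow = np.full(years.size, 100.0)  # synthetic constant consumption

ages = years[:, None] - years[None, :]  # age of each cohort in each year
surv = np.where(
    ages >= 0,
    weibull_survival(np.clip(ages, 0, None), 2.5, 40.0),  # assumed parameters
    0.0,
)
stock = surv @ inflow

print(years.size, round(stock[-1]))
```

    End-of-life flows then follow from the year-on-year decline in each cohort's survival, which is how lifetime parameters propagate into the simulated flow data.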

    2. Methodology

    The dataset was generated using a dynamic material flow analysis (dMFA) model. For a complete methodology description, refer to the Data in Brief article (add DOI).

    3. How to Cite This Dataset

    If you use this dataset, please cite: Schmidt, S., Verni, X.-F., Gibon, T., Laner, D. (2025). Dataset for: Plastics in the German Building and Infrastructure Sector: A High-Resolution Dataset on Historical Flows, Stocks, and Legacy Substance Contamination, Zenodo. DOI: 10.5281/zenodo.15049210

    4. License & Access

    This dataset is licensed under CC BY-NC 4.0, permitting use, modification, and distribution for non-commercial purposes, provided that proper attribution is given.

    5. Contact Information

    For questions or further details, please contact:
    Sarah Schmidt
    Center for Resource Management and Solid Waste Engineering
    University of Kassel
    Email: sarah.schmidt@uni-kassel.de
