18 datasets found
  1. TNO DGM5/VELMOD31 UTM31 xarray datasets

    • zenodo.org
    bin
    Updated Dec 22, 2023
    Cite
    Dirk Kraaijpoel (2023). TNO DGM5/VELMOD31 UTM31 xarray datasets [Dataset]. http://doi.org/10.5281/zenodo.10425411
    Explore at:
    bin (available download formats)
    Dataset updated
    Dec 22, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Dirk Kraaijpoel
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Contains the DGM5 geological model and the VELMOD 3.1 velocity model as xarray datasets in UTM31 coordinates.

    Original data:

    Details DGM-diep V5 | NLOG

    Velmod-3.1 | NLOG

    Format:

    Xarray documentation

  2. Dataset for the article: Robotic Feet Modeled After Ungulates Improve Locomotion on Soft Wet Grounds

    • data.niaid.nih.gov
    • zenodo.org
    Updated Oct 29, 2024
    Cite
    Godon, S; Ristolainen, A; Kruusmaa, M (2024). Dataset for the article: Robotic Feet Modeled After Ungulates Improve Locomotion on Soft Wet Grounds [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_12673096
    Explore at:
    Dataset updated
    Oct 29, 2024
    Authors
    Godon, S; Ristolainen, A; Kruusmaa, M
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains data for three different experiments presented in the paper:

    (1) moose_feet (40 files): The moose leg experiments are labeled as ax_y.nc, where 'a' indicates attached digits and 'f' indicates free digits. The number 'x' is either 1 (front leg) or 2 (hind leg), and the number 'y' is an increment from 0 to 9 representing the 10 samples of each set.

    (2) synthetic_feet (120 files): The synthetic feet experiments are labeled as lw_a_y.nc, where 'lw' (Low Water content) can be replaced by 'mw' (Medium Water content) or 'vw' (Vast Water content). The 'a' can be 'o' (Original Go1 foot), 'r' (Rigid extended foot), 'f' (Free digits anisotropic foot), or 'a' (Attached digits). Similar to (1), the last number is an increment from 0 to 9.

    (3) Go1 (15 files): The locomotion experiments of the quadruped robot on the track are labeled as condition_y.nc, where 'condition' is either 'hard_ground' for experiments on hard ground, 'bioinspired_feet' for the locomotion of the quadruped on mud using bio-inspired anisotropic feet, or 'original_feet' for experiments where the robot used the original Go1 feet. The 'y' is an increment from 0 to 4.

    The files for moose_feet and synthetic_feet contain timestamp (s), position (m), and force (N) data.

    The files for Go1 contain timestamp (s), position (rad), velocity (rad/s), torque (Nm) data for all 12 motors, and the distance traveled by the robot (m).

    All files can be read using xarray datasets (https://docs.xarray.dev/en/stable/generated/xarray.Dataset.html).
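    The naming scheme above can be turned into a small parser. A minimal sketch in Python; the helper `classify` and its label mappings are illustrative, not part of the dataset:

```python
import re

# Patterns mirror the naming scheme in the description:
# moose files look like 'a1_3.nc' or 'f2_9.nc';
# synthetic-feet files look like 'lw_o_0.nc' or 'vw_f_9.nc'.
MOOSE = re.compile(r"^([af])([12])_([0-9])\.nc$")
SYNTH = re.compile(r"^(lw|mw|vw)_([orfa])_([0-9])\.nc$")

def classify(name):
    """Return the experiment metadata encoded in a file name."""
    m = MOOSE.match(name)
    if m:
        digits, leg, sample = m.groups()
        return {"experiment": "moose_feet",
                "digits": "attached" if digits == "a" else "free",
                "leg": "front" if leg == "1" else "hind",
                "sample": int(sample)}
    m = SYNTH.match(name)
    if m:
        water, foot, sample = m.groups()
        return {"experiment": "synthetic_feet",
                "water": {"lw": "low", "mw": "medium", "vw": "vast"}[water],
                "foot": {"o": "original", "r": "rigid extended",
                         "f": "free digits", "a": "attached digits"}[foot],
                "sample": int(sample)}
    raise ValueError("unrecognised file name: " + name)

print(classify("a1_3.nc"))
```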

  3. RibonanzaNet-Drop Train, Val, and Test Data

    • kaggle.com
    zip
    Updated Feb 19, 2024
    Cite
    Hamish Blair (2024). RibonanzaNet-Drop Train, Val, and Test Data [Dataset]. https://www.kaggle.com/datasets/hmblair/ribonanzanet-drop-train-val-and-test-data
    Explore at:
    zip (402567233 bytes); available download formats
    Dataset updated
    Feb 19, 2024
    Authors
    Hamish Blair
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    onemil1_1.nc is the train dataset. onemil1_2.nc is the validation dataset. onemil2.nc, p240.nc, and p390.nc are the test datasets.

    These files are in .nc format; use xarray with Python to interface with them.

  4. Zebrafish with lyz:EGFP expressing neutrophils: Mesh Well Inserts Z-stack 1

    • zenodo.org
    nc
    Updated Aug 17, 2023
    Cite
    John Efromson (2023). Zebrafish with lyz:EGFP expressing neutrophils: Mesh Well Inserts Z-stack 1 [Dataset]. http://doi.org/10.5281/zenodo.8035205
    Explore at:
    nc (available download formats)
    Dataset updated
    Aug 17, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    John Efromson
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    96-well plate z-stack of zebrafish with lyz:EGFP expressing neutrophils acquired with a multi-camera array microscope (MCAM)(Ramona Optics Inc., Durham, NC, USA). Mesh well inserts are used and half of the zebrafish on the plate were injected with csf3r morpholino. The overall z-stack is broken into four files.

    HDF5 files can be opened using open source Python software: https://docs.xarray.dev/

  5. LFX Insights metrics for xarray

    • insights.linuxfoundation.org
    Updated May 23, 2025
    Cite
    LFX Insights (2025). LFX Insights metrics for xarray [Dataset]. https://insights.linuxfoundation.org/project/pydata-xarray
    Explore at:
    Dataset updated
    May 23, 2025
    Dataset authored and provided by
    LFX Insights
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    active_days, github_forks, github_stars, issues_closed, issues_opened, press_mentions, github_mentions, merge_lead_time, social_mentions, package_downloads, and 25 more
    Measurement technique
    Contributor activity over rolling windows, OpenSSF Criticality reference, Repository event aggregation, Controls assessment based on documented standards
    Description

    Comprehensive open source project metrics including contributor activity, popularity trends, development velocity, and security assessments for xarray.

  6. SUPERSEDED - CARDAMOM driving data and C-cycle model outputs to accompany 'Resolving scale-variance in the carbon dynamics of fragmented, mixed-use landscapes estimated using Model-Data Fusion'

    • find.data.gov.scot
    • dtechtive.com
    pdf, txt, zip
    Updated Aug 23, 2022
    Cite
    Global Change Institute, School of GeoSciences, University of Edinburgh (2022). SUPERSEDED - CARDAMOM driving data and C-cycle model outputs to accompany 'Resolving scale-variance in the carbon dynamics of fragmented, mixed-use landscapes estimated using Model-Data Fusion' [Dataset]. http://doi.org/10.7488/ds/3509
    Explore at:
    zip (378.1 MB), pdf (0.496 MB), txt (0.0166 MB), zip (277 MB); available download formats
    Dataset updated
    Aug 23, 2022
    Dataset provided by
    Global Change Institute, School of GeoSciences, University of Edinburgh
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    UNITED KINGDOM
    Description

    ## This item has been replaced by the one which can be found at https://datashare.ed.ac.uk/handle/10283/4849 - https://doi.org/10.7488/ds/3843 ##

    This archive contains the driving data and selected model outputs to accompany the manuscript 'Resolving scale-variance in the carbon dynamics of fragmented, mixed-use landscapes estimated using Model-Data Fusion', submitted to Biogeosciences Discussions. The archive contains two zip files containing: (i) the observations and driving data assimilated into CARDAMOM; and (ii) a selection of model output, including the carbon (C) stocks for each DALEC pool and a compilation of key C fluxes. Data and model output are stored as netCDF files. The xarray package (https://docs.xarray.dev/en/stable/index.html) provides a convenient starting point for using netCDF files within Python environments. More details are provided in the document 'Milodowski_etal_dataset_description.pdf'.

  7. Pydata/Xarray: V0.9.1

    • eprints.soton.ac.uk
    Updated Sep 24, 2019
    Cite
    Hoyer, Stephan; Fitzgerald, Clark; Hamman, Joe; Akleeman; Kluyver, Thomas; Maussion, Fabien; Roos, Maximilian; Markel; Helmus, Jonathan J.; Cable, Pete; Wolfram, Phillip; Bovy, Benoit; Abernathey, Ryan; Noel, Vincent; Kanmae, Takeshi; Miles, Alistair; Hill, Spencer; Crusaderky; Sinclair, Scott; Filipe; Guedes, Rafael; Ebrevdo; Chunweiyuan; Delley, Yves; Wilson, Robin; Signell, Julia; Laliberte, Frederic; Malevich, Brewster; Hilboll, Andreas (2019). Pydata/Xarray: V0.9.1 [Dataset]. http://doi.org/10.5281/zenodo.264282
    Explore at:
    Dataset updated
    Sep 24, 2019
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Hoyer, Stephan; Fitzgerald, Clark; Hamman, Joe; Akleeman; Kluyver, Thomas; Maussion, Fabien; Roos, Maximilian; Markel; Helmus, Jonathan J.; Cable, Pete; Wolfram, Phillip; Bovy, Benoit; Abernathey, Ryan; Noel, Vincent; Kanmae, Takeshi; Miles, Alistair; Hill, Spencer; Crusaderky; Sinclair, Scott; Filipe; Guedes, Rafael; Ebrevdo; Chunweiyuan; Delley, Yves; Wilson, Robin; Signell, Julia; Laliberte, Frederic; Malevich, Brewster; Hilboll, Andreas
    Description

    Renamed the "Unindexed dimensions" section in the Dataset and DataArray repr (added in v0.9.0) to "Dimensions without coordinates".

  8. Data from: Deep learning four decades of human migration: datasets

    • zenodo.org
    csv, nc
    Updated Oct 13, 2025
    Cite
    Thomas Gaskin; Guy Abel (2025). Deep learning four decades of human migration: datasets [Dataset]. http://doi.org/10.5281/zenodo.17344747
    Explore at:
    csv, nc (available download formats)
    Dataset updated
    Oct 13, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Thomas Gaskin; Guy Abel
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This Zenodo repository contains all migration flow estimates associated with the paper "Deep learning four decades of human migration." Evaluation code, training data, trained neural networks, and smaller flow datasets are available in the main GitHub repository, which also provides detailed instructions on data sourcing. Due to file size limits, the larger datasets are archived here.

    Data is available in both NetCDF (.nc) and CSV (.csv) formats. The NetCDF format is more compact and pre-indexed, making it suitable for large files. In Python, datasets can be opened as xarray.Dataset objects, enabling coordinate-based data selection.

    Each dataset uses the following coordinate conventions:

    • Year: 1990–2023
    • Birth ISO: Country of birth (UN ISO3)
    • Origin ISO: Country of origin (UN ISO3)
    • Destination ISO: Destination country (UN ISO3)
    • Country ISO: Used for net migration data (UN ISO3)

    The following data files are provided:

    • T.nc: Full table of flows disaggregated by country of birth. Dimensions: Year, Birth ISO, Origin ISO, Destination ISO
    • flows.nc: Total origin-destination flows (equivalent to T summed over Birth ISO). Dimensions: Year, Origin ISO, Destination ISO
    • net_migration.nc: Net migration data by country. Dimensions: Year, Country ISO
    • stocks.nc: Stock estimates for each country pair. Dimensions: Year, Origin ISO (corresponding to Birth ISO), Destination ISO
    • test_flows.nc: Flow estimates on a randomly selected set of test edges, used for model validation

    Additionally, two CSV files are provided for convenience:

    • mig_unilateral.csv: Unilateral migration estimates per country, comprising:
      • imm: Total immigration flows
      • emi: Total emigration flows
      • net: Net migration
      • imm_pop: Total immigrant population (non-native-born)
      • emi_pop: Total emigrant population (living abroad)
    • mig_bilateral.csv: Bilateral flow data, comprising:
      • mig_prev: Total origin-destination flows
      • mig_brth: Total birth-destination flows, where Origin ISO reflects place of birth

    Each dataset includes a mean variable (mean estimate) and a std variable (standard deviation of the estimate).

    An ISO3 conversion table is also provided.
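    The coordinate conventions above make coordinate-based selection straightforward once a file is opened as an xarray.Dataset. A minimal sketch, assuming xarray and numpy are installed; the toy Dataset below stands in for flows.nc and its values are invented:

```python
import numpy as np
import xarray as xr

# Toy stand-in for flows.nc, using the coordinate names listed above;
# the real files carry a 'mean' and a 'std' variable per estimate.
flows = xr.Dataset(
    {
        "mean": (("Year", "Origin ISO", "Destination ISO"),
                 np.arange(8.0).reshape(2, 2, 2)),
        "std": (("Year", "Origin ISO", "Destination ISO"),
                np.full((2, 2, 2), 0.5)),
    },
    coords={"Year": [1990, 1991],
            "Origin ISO": ["MEX", "POL"],
            "Destination ISO": ["USA", "DEU"]},
)

# Coordinate-based selection: one origin-destination corridor over time.
# Dimension names contain spaces, so .sel takes a dict here.
corridor = flows["mean"].sel({"Origin ISO": "MEX", "Destination ISO": "USA"})
print(float(corridor.sel(Year=1991)))  # mean estimate for 1991
```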

  9. Geospatial Analysis with Xarray

    • kaggle.com
    zip
    Updated Jul 8, 2023
    Cite
    TAG (2023). Geospatial Analysis with Xarray [Dataset]. https://www.kaggle.com/datasets/tagg27/geospatial-analysis-with-xarray
    Explore at:
    zip (33082857 bytes); available download formats
    Dataset updated
    Jul 8, 2023
    Authors
    TAG
    Description

    Dataset

    This dataset was created by TAG

    Contents

  10. (HS 2) Automate Workflows using Jupyter notebook to create Large Extent Spatial Datasets

    • search.dataone.org
    • hydroshare.org
    Updated Oct 19, 2024
    + more versions
    Cite
    Young-Don Choi (2024). (HS 2) Automate Workflows using Jupyter notebook to create Large Extent Spatial Datasets [Dataset]. http://doi.org/10.4211/hs.a52df87347ef47c388d9633925cde9ad
    Explore at:
    Dataset updated
    Oct 19, 2024
    Dataset provided by
    Hydroshare
    Authors
    Young-Don Choi
    Description

    We implemented automated workflows using Jupyter notebooks for each state. The GIS processing, crucial for merging, extracting, and projecting GeoTIFF data, was performed using ArcPy, a Python package for geographic data analysis, conversion, and management within ArcGIS (Toms, 2015). After generating state-scale LES (large extent spatial) datasets in GeoTIFF format, we utilized the xarray and rioxarray Python packages to convert GeoTIFF to NetCDF. Xarray is a Python package for working with multi-dimensional arrays, and rioxarray is an xarray extension built on rasterio, a Python library for reading and writing GeoTIFF and other raster formats. Xarray facilitated data manipulation and metadata addition in the NetCDF file, while rioxarray was used to save GeoTIFF as NetCDF. These procedures resulted in the creation of three HydroShare resources (HS 3, HS 4 and HS 5) for sharing state-scale LES datasets. Notably, due to licensing constraints with ArcGIS Pro, a commercial GIS software, the Jupyter notebook development was undertaken on a Windows OS.

  11. ASTE Test Data

    • figshare.com
    application/x-gzip
    Updated Oct 27, 2020
    Cite
    Timothy Smith (2020). ASTE Test Data [Dataset]. http://doi.org/10.6084/m9.figshare.13150859.v1
    Explore at:
    application/x-gzip (available download formats)
    Dataset updated
    Oct 27, 2020
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Timothy Smith
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Test data for ASTE Release 1 integration with ECCOv4-py.

  12. IAGOS-CARIBIC whole air sampler data (v2024.07.17)

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Oct 28, 2024
    + more versions
    Cite
    Tanja Schuck; Florian Obersteiner (2024). IAGOS-CARIBIC whole air sampler data (v2024.07.17) [Dataset]. http://doi.org/10.5281/zenodo.12755525
    Explore at:
    zip (available download formats)
    Dataset updated
    Oct 28, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Tanja Schuck; Florian Obersteiner
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    IAGOS-CARIBIC WSM files collection (v2024.07.17)

    Content

    IAGOS-CARIBIC_WSM_files_collection_20240717.zip contains merged IAGOS-CARIBIC whole air sampler data (CARIBIC-1 and CARIBIC-2; <https://www.caribic-atmospheric.com/>). There is one netCDF file per IAGOS-CARIBIC flight. Files were generated from NASA Ames 1001 source files. For detailed content information, see global and variable attributes. The global attribute `na_file_header_[x]` contains the original NASA Ames file header as an array of strings, with [x] identifying one of the source files.

    Data Coverage

    The data set covers 22 years of CARIBIC data from 1997 to 2020, flight numbers 8 to 591. There is no data available after 2020. Also, note that data isn't available for all flight numbers within the [1, 591] range.

    Special note on CARIBIC-1 data

    CARIBIC-1 data only contains a subset of the variables found in CARIBIC-2 data files. To distinguish those two campaigns, use the global attribute 'mission'.

    File format

    netCDF v4, created with xarray, <https://docs.xarray.dev/en/stable/>. Default variable encoding was used (no compression etc.).
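    For example, the CARIBIC-1 vs CARIBIC-2 distinction via the 'mission' attribute looks like this once a file is open. A sketch assuming xarray is installed; the Dataset here is a toy with invented attribute values, standing in for one merged WSM file:

```python
import xarray as xr

# Toy stand-in for one netCDF file; the attribute names follow the
# description above, the attribute values are invented.
ds = xr.Dataset(attrs={
    "mission": "CARIBIC-2",
    "na_file_header_0": ["original NASA Ames 1001 header line 1", "..."],
})

# CARIBIC-1 files carry only a subset of the CARIBIC-2 variables,
# so branch on the 'mission' global attribute.
if ds.attrs["mission"] == "CARIBIC-1":
    variable_set = "subset"
else:
    variable_set = "full"
print(variable_set)
```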

    Data availability

    This dataset is also available via our THREDDS server at KIT, <https://thredds.atmohub.kit.edu/dataset/iagos-caribic-whole-air-sampler-data>.

    Contact

    Tanja Schuck, whole air sampling system PI,

    Changelog

    • `2024.07.17`: revise ozone data for flights 294 to 591
    • `2024.01.22`: editorial changes, add Schuck et al. publications, data unchanged
    • `2024.01.12`: initial upload

  13. Sentinel-1 RTC imagery processed by ASF over central Himalaya in High Mountain Asia

    • data.niaid.nih.gov
    • zenodo.org
    Updated Oct 28, 2022
    Cite
    Marshall, Emma; Henderson, Scott; Cherian, Deepak; Scheick, Jessica (2022). Sentinel-1 RTC imagery processed by ASF over central Himalaya in High Mountain Asia [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7126242
    Explore at:
    Dataset updated
    Oct 28, 2022
    Dataset provided by
    University of Washington
    National Center for Atmospheric Research
    University of Utah
    University of New Hampshire
    Authors
    Marshall, Emma; Henderson, Scott; Cherian, Deepak; Scheick, Jessica
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Himalayas, High-mountain Asia
    Description

    This is a dataset of Sentinel-1 radiometric terrain corrected (RTC) imagery processed by the Alaska Satellite Facility covering a region within the Central Himalaya. It accompanies a tutorial demonstrating accessing and working with Sentinel-1 RTC imagery using xarray and other open source python packages.

  14. xmitgcm test datasets

    • figshare.com
    application/gzip
    Updated Jun 1, 2023
    Cite
    Ryan Abernathey (2023). xmitgcm test datasets [Dataset]. http://doi.org/10.6084/m9.figshare.4033530.v1
    Explore at:
    application/gzip (available download formats)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    figshare
    Authors
    Ryan Abernathey
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Test datasets for use with xmitgcm. These data were generated by running MITgcm in different configurations. Each tar archive contains a folder full of mds *.data / *.meta files.

  15. xesmf netcdf files for testing

    • figshare.com
    application/x-gzip
    Updated Feb 9, 2025
    Cite
    Raphael Dussin (2025). xesmf netcdf files for testing [Dataset]. http://doi.org/10.6084/m9.figshare.28378283.v1
    Explore at:
    application/x-gzip (available download formats)
    Dataset updated
    Feb 9, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Raphael Dussin
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Testing files for the xesmf remapping package.

  16. QLKNN11D training set

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    bin, text/x-python
    Updated Jun 8, 2023
    Cite
    Karel Lucas van de Plassche; Jonathan Citrin (2023). QLKNN11D training set [Dataset]. http://doi.org/10.5281/zenodo.8017522
    Explore at:
    bin, text/x-python (available download formats)
    Dataset updated
    Jun 8, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Karel Lucas van de Plassche; Jonathan Citrin
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    QLKNN11D training set

    This dataset contains a large-scale run of ~1 billion flux calculations of the quasilinear gyrokinetic transport model QuaLiKiz. QuaLiKiz is applied in numerous tokamak integrated modelling suites, and is openly available at https://gitlab.com/qualikiz-group/QuaLiKiz/. This dataset was generated with the 'QLKNN11D-hyper' tag of QuaLiKiz, equivalent to 2.8.1 apart from the negative magnetic shear filter being disabled. See https://gitlab.com/qualikiz-group/QuaLiKiz/-/tags/QLKNN11D-hyper for the in-repository tag.

    The dataset is appropriate for the training of learned surrogates of QuaLiKiz, e.g. with neural networks. See https://doi.org/10.1063/1.5134126 for a Physics of Plasmas publication illustrating the development of a learned surrogate (QLKNN10D-hyper) of an older version of QuaLiKiz (2.4.0) with a 300 million point 10D dataset. The paper is also available on arXiv https://arxiv.org/abs/1911.05617 and the older dataset on Zenodo https://doi.org/10.5281/zenodo.3497066. For an application example, see Van Mulders et al 2021 https://doi.org/10.1088/1741-4326/ac0d12, where QLKNN10D-hyper was applied for ITER hybrid scenario optimization. For any learned surrogates developed for QLKNN11D, the effective addition of the alphaMHD input dimension through rescaling the input magnetic shear (s) by s = s - alpha_MHD/2, as carried out in Van Mulders et al., is recommended.

    Related repositories:

    Data exploration

    The data is provided in 43 netCDF files. We advise opening single datasets using xarray, or multiple datasets out-of-core using dask. For reference, we give below the load times and sizes of a single variable that depends only on the scan size `dimx`. This was tested single-core on an Intel Xeon 8160 CPU at 2.1 GHz with 192 GB of DDR4 RAM. Note that during loading, more memory is needed than the final in-RAM figure.
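    A minimal sketch of the two access patterns, assuming xarray is installed; the file names, the variable name `efe_GB`, and the concat dimension are assumptions for illustration, and the in-memory datasets below stand in for the real files:

```python
import numpy as np
import xarray as xr

# Single file:   ds = xr.open_dataset("qlk_part_00.nc")            (assumed name)
# Out-of-core:   ds = xr.open_mfdataset("qlk_part_*.nc",
#                                       combine="nested", concat_dim="dimx")
# open_mfdataset loads lazily via dask. The concatenation it performs
# along the scan dimension `dimx` is shown here with in-memory parts:
parts = [
    xr.Dataset({"efe_GB": ("dimx", np.zeros(3))}, coords={"dimx": [0, 1, 2]}),
    xr.Dataset({"efe_GB": ("dimx", np.ones(3))}, coords={"dimx": [3, 4, 5]}),
]
combined = xr.concat(parts, dim="dimx")
print(combined.sizes["dimx"])
```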

    Timing of dataset loading
    Amount of datasets | Final in-RAM memory (GiB) | Loading time, single var (M:SS)
    1                  | 10.3                      | 0:09
    5                  | 43.9                      | 1:00
    10                 | 63.2                      | 2:01
    16                 | 98.0                      | 3:25
    17                 | Out Of Memory             | x:xx

    Full dataset

    The full dataset of QuaLiKiz in-and-output data is available on request. Note that this is 2.2 TiB of netCDF files!

  17. Part 2 of real-time testing data for: "Identifying data sources and physical strategies used by neural networks to predict TC rapid intensification"

    • zenodo.org
    application/gzip
    Updated Aug 8, 2024
    Cite
    Zenodo (2024). Part 2 of real-time testing data for: "Identifying data sources and physical strategies used by neural networks to predict TC rapid intensification" [Dataset]. http://doi.org/10.5281/zenodo.13272877
    Explore at:
    application/gzip (available download formats)
    Dataset updated
    Aug 8, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Each file in the dataset contains machine-learning-ready data for one unique tropical cyclone (TC) from the real-time testing dataset. "Machine-learning-ready" means that all data-processing methods described in the journal paper have already been applied. This includes cropping satellite images to make them TC-centered; rotating satellite images to align them with TC motion (TC motion is always towards the +x-direction, or in the direction of increasing column number); flipping satellite images in the southern hemisphere upside-down; and normalizing data via the two-step procedure.

    The file name gives you the unique identifier of the TC -- e.g., "learning_examples_2010AL01.nc.gz" contains data for storm 2010AL01, or the first North Atlantic storm of the 2010 season. Each file can be read with the method `example_io.read_file` in the ml4tc Python library (https://zenodo.org/doi/10.5281/zenodo.10268620). However, since `example_io.read_file` is a lightweight wrapper for `xarray.open_dataset`, you can equivalently just use `xarray.open_dataset`. Variables in the table are listed below (the same printout produced by `print(xarray_table)`):

    Dimensions: (
    satellite_valid_time_unix_sec: 289,
    satellite_grid_row: 380,
    satellite_grid_column: 540,
    satellite_predictor_name_gridded: 1,
    satellite_predictor_name_ungridded: 16,
    ships_valid_time_unix_sec: 19,
    ships_storm_object_index: 19,
    ships_forecast_hour: 23,
    ships_intensity_threshold_m_s01: 21,
    ships_lag_time_hours: 5,
    ships_predictor_name_lagged: 17,
    ships_predictor_name_forecast: 129)
    Coordinates:
    * satellite_grid_row (satellite_grid_row) int32 2kB ...
    * satellite_grid_column (satellite_grid_column) int32 2kB ...
    * satellite_valid_time_unix_sec (satellite_valid_time_unix_sec) int32 1kB ...
    * ships_lag_time_hours (ships_lag_time_hours) float64 40B ...
    * ships_intensity_threshold_m_s01 (ships_intensity_threshold_m_s01) float64 168B ...
    * ships_forecast_hour (ships_forecast_hour) int32 92B ...
    * satellite_predictor_name_gridded (satellite_predictor_name_gridded) object 8B ...
    * satellite_predictor_name_ungridded (satellite_predictor_name_ungridded) object 128B ...
    * ships_valid_time_unix_sec (ships_valid_time_unix_sec) int32 76B ...
    * ships_predictor_name_lagged (ships_predictor_name_lagged) object 136B ...
    * ships_predictor_name_forecast (ships_predictor_name_forecast) object 1kB ...
    Dimensions without coordinates: ships_storm_object_index
    Data variables:
    satellite_number (satellite_valid_time_unix_sec) int32 1kB ...
    satellite_band_number (satellite_valid_time_unix_sec) int32 1kB ...
    satellite_band_wavelength_micrometres (satellite_valid_time_unix_sec) float64 2kB ...
    satellite_longitude_deg_e (satellite_valid_time_unix_sec) float64 2kB ...
    satellite_cyclone_id_string (satellite_valid_time_unix_sec) |S8 2kB ...
    satellite_storm_type_string (satellite_valid_time_unix_sec) |S2 578B ...
    satellite_storm_name (satellite_valid_time_unix_sec) |S10 3kB ...
    satellite_storm_latitude_deg_n (satellite_valid_time_unix_sec) float64 2kB ...
    satellite_storm_longitude_deg_e (satellite_valid_time_unix_sec) float64 2kB ...
    satellite_storm_intensity_number (satellite_valid_time_unix_sec) float64 2kB ...
    satellite_storm_u_motion_m_s01 (satellite_valid_time_unix_sec) float64 2kB ...
    satellite_storm_v_motion_m_s01 (satellite_valid_time_unix_sec) float64 2kB ...
    satellite_predictors_gridded (satellite_valid_time_unix_sec, satellite_grid_row, satellite_grid_column, satellite_predictor_name_gridded) float64 474MB ...
    satellite_grid_latitude_deg_n (satellite_valid_time_unix_sec, satellite_grid_row, satellite_grid_column) float64 474MB ...
    satellite_grid_longitude_deg_e (satellite_valid_time_unix_sec, satellite_grid_row, satellite_grid_column) float64 474MB ...
    satellite_predictors_ungridded (satellite_valid_time_unix_sec, satellite_predictor_name_ungridded) float64 37kB ...
    ships_storm_intensity_m_s01 (ships_valid_time_unix_sec) float64 152B ...
    ships_storm_type_enum (ships_storm_object_index, ships_forecast_hour) int32 2kB ...
    ships_forecast_latitude_deg_n (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_forecast_longitude_deg_e (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_v_wind_200mb_0to500km_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_vorticity_850mb_0to1000km_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_vortex_latitude_deg_n (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_vortex_longitude_deg_e (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_mean_tangential_wind_850mb_0to600km_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_max_tangential_wind_850mb_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_mean_tangential_wind_1000mb_at500km_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_mean_tangential_wind_850mb_at500km_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_mean_tangential_wind_500mb_at500km_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_mean_tangential_wind_300mb_at500km_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_srh_1000to700mb_200to800km_j_kg01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_srh_1000to500mb_200to800km_j_kg01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
    ships_threshold_exceedance_num_6hour_periods (ships_storm_object_index, ships_intensity_threshold_m_s01) int32 2kB ...
    ships_v_motion_observed_m_s01 (ships_storm_object_index) float64 152B ...
    ships_v_motion_1000to100mb_flow_m_s01 (ships_storm_object_index) float64 152B ...
    ships_v_motion_optimal_flow_m_s01 (ships_storm_object_index) float64 152B ...
    ships_cyclone_id_string (ships_storm_object_index) object 152B ...
    ships_storm_latitude_deg_n (ships_storm_object_index) float64 152B ...
    ships_storm_longitude_deg_e (ships_storm_object_index) float64 152B ...
    ships_predictors_lagged (ships_valid_time_unix_sec, ships_lag_time_hours, ships_predictor_name_lagged) float64 13kB ...
    ships_predictors_forecast (ships_valid_time_unix_sec, ships_forecast_hour, ships_predictor_name_forecast) float64 451kB ...

    Variable names are meant to be as self-explanatory as possible. Potentially confusing ones are listed below.

    • The dimension ships_storm_object_index is redundant with the dimension ships_valid_time_unix_sec and can be ignored.
    • ships_forecast_hour ranges up to values that we do not actually use in the paper. Keep in mind that our max forecast hour used in machine learning is 24.
    • The dimension ships_intensity_threshold_m_s01 (and any variable including this dimension) can be ignored.
    • ships_lag_time_hours corresponds to lag times for the SHIPS satellite-based predictors. The only lag time we use in machine learning is "NaN", which is a stand-in for the best available of all lag times. See the discussion of the "priority list" in the paper for more details.
    • Most of the data variables can be ignored, unless you're doing a deep dive into storm properties. The important variables are satellite_predictors_gridded (full satellite images), ships_predictors_lagged (satellite-based SHIPS predictors), and ships_predictors_forecast (environmental and storm-history-based SHIPS predictors). These variables are all discussed in the paper.
    • Every variable name (including elements of the coordinate lists ships_predictor_name_lagged and ships_predictor_name_forecast) includes units at the end. For example, "m_s01" = metres per second; "deg_n" = degrees north; "deg_e" = degrees east; "j_kg01" = Joules per kilogram; ...; etc.
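    The cyclone ID embedded in each file name can be split mechanically. A small illustrative parser (not part of the ml4tc library); remember that the files are gzipped, so decompress before calling `xarray.open_dataset`:

```python
import re

# File names look like "learning_examples_2010AL01.nc.gz":
# 4-digit season, 2-letter basin (e.g. AL = North Atlantic), 2-digit number.
ID_PATTERN = re.compile(r"learning_examples_(\d{4})([A-Z]{2})(\d{2})\.nc\.gz$")

def parse_storm_id(file_name):
    """Split the TC identifier encoded in a file name into its parts."""
    season, basin, number = ID_PATTERN.search(file_name).groups()
    return {"season": int(season), "basin": basin, "number": int(number)}

print(parse_storm_id("learning_examples_2010AL01.nc.gz"))
```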

  18. Data from: Supporting data for "Satellite derived SO2 emissions from the relatively low-intensity, effusive 2021 eruption of Fagradalsfjall, Iceland"

    • figshare.com
    • produccioncientifica.ugr.es
    hdf
    Updated May 2, 2023
    Cite
    Ben Esse; Mike Burton; Catherine Hayer; Melissa Pfeffer; Sara Barsotti; Nicolas Theys; Talfan Barnie; Manuel Titos (2023). Supporting data for "Satellite derived SO2 emissions from the relatively low-intensity, effusive 2021 eruption of Fagradalsfjall, Iceland" [Dataset]. http://doi.org/10.6084/m9.figshare.22303435.v1
    Explore at:
    hdf (available download formats)
    Dataset updated
    May 2, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Ben Esse; Mike Burton; Catherine Hayer; Melissa Pfeffer; Sara Barsotti; Nicolas Theys; Talfan Barnie; Manuel Titos
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Fagradalsfjall
    Description

    Supporting data for the paper "Satellite derived SO2 emissions from the relatively low-intensity, effusive 2021 eruption of Fagradalsfjall, Iceland" by Esse et al. The data files are in netCDF4 format, created using the Python xarray library. Each is a separate xarray Dataset.

    2021-05-02_18403_Fagradalsfjall_results.nc contains the analysis results for TROPOMI orbit 18403 shown in Figure 2.

    Fagradalsfjall_2021_emission_intensity.nc contains the SO2 emission intensity data shown in Figures 3, 4 and 5.

    cloud_effective_altitude_difference.nc contains the daily cloud effective altitude difference shown in Figure 6.
