44 datasets found
  1. Pydata/Xarray: V0.9.1

    • eprints.soton.ac.uk
    Updated Sep 24, 2019
    Cite
    Hoyer, Stephan; Fitzgerald, Clark; Hamman, Joe; Akleeman; Kluyver, Thomas; Maussion, Fabien; Roos, Maximilian; Markel; Helmus, Jonathan J.; Cable, Pete; Wolfram, Phillip; Bovy, Benoit; Abernathey, Ryan; Noel, Vincent; Kanmae, Takeshi; Miles, Alistair; Hill, Spencer; Crusaderky; Sinclair, Scott; Filipe; Guedes, Rafael; Ebrevdo; Chunweiyuan; Delley, Yves; Wilson, Robin; Signell, Julia; Laliberte, Frederic; Malevich, Brewster; Hilboll, Andreas (2019). Pydata/Xarray: V0.9.1 [Dataset]. http://doi.org/10.5281/zenodo.264282
    Explore at:
    Dataset updated
    Sep 24, 2019
    Dataset provided by
    Zenodo
    Authors
    Hoyer, Stephan; Fitzgerald, Clark; Hamman, Joe; Akleeman; Kluyver, Thomas; Maussion, Fabien; Roos, Maximilian; Markel; Helmus, Jonathan J.; Cable, Pete; Wolfram, Phillip; Bovy, Benoit; Abernathey, Ryan; Noel, Vincent; Kanmae, Takeshi; Miles, Alistair; Hill, Spencer; Crusaderky; Sinclair, Scott; Filipe; Guedes, Rafael; Ebrevdo; Chunweiyuan; Delley, Yves; Wilson, Robin; Signell, Julia; Laliberte, Frederic; Malevich, Brewster; Hilboll, Andreas
    Description

    Renamed the "Unindexed dimensions" section in the Dataset and DataArray repr (added in v0.9.0) to "Dimensions without coordinates".

  2. Data from: Community Earth System Model v2 Large Ensemble (CESM2 LENS)

    • data.ucar.edu
    • oidc.rda.ucar.edu
    zarr
    Updated Nov 11, 2024
    Cite
    Danabasoglu, Gokhan; Deser, Clara; Rodgers, Keith; Timmermann, Axel (2024). Community Earth System Model v2 Large Ensemble (CESM2 LENS) [Dataset]. https://data.ucar.edu/dataset/community-earth-system-model-v2-large-ensemble-cesm2-lens
    Explore at:
    Available download formats: zarr
    Dataset updated
    Nov 11, 2024
    Dataset provided by
    Research Data Archive at the National Center for Atmospheric Research, Computational and Information Systems Laboratory
    Authors
    Danabasoglu, Gokhan; Deser, Clara; Rodgers, Keith; Timmermann, Axel
    Time period covered
    Jan 1, 1850 - Dec 31, 2014
    Description

    The US National Center for Atmospheric Research partnered with the IBS Center for Climate Physics in South Korea to generate the CESM2 Large Ensemble, which consists of 100 ensemble members at 1 degree spatial resolution covering the period 1850-2100 under CMIP6 historical and SSP370 future radiative forcing scenarios. Data sets from this ensemble were made downloadable via the Climate Data Gateway on June 14, 2021. NCAR has copied a subset (currently ~500 TB) of CESM2 LENS data to Amazon S3 as part of the AWS Public Datasets Program. To optimize for large-scale analytics, we have represented the data as ~275 Zarr stores accessible through the Python Xarray library. Each Zarr store contains a single physical variable for a given model run type and temporal frequency (monthly, daily).
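    A minimal sketch of opening one of these Zarr stores with xarray and s3fs; the bucket path below is illustrative, not an actual object key:

    import s3fs
    import xarray as xr

    # Anonymous access to the AWS Public Datasets bucket (path illustrative)
    fs = s3fs.S3FileSystem(anon=True)
    store = s3fs.S3Map("ncar-cesm2-lens/atm/monthly/some-variable.zarr", s3=fs)

    # Lazily open the store; each store holds one variable at one frequency
    ds = xr.open_zarr(store)
    print(ds)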

  3. (HS 2) Automate Workflows using Jupyter notebook to create Large Extent...

    • search.dataone.org
    • hydroshare.org
    Updated Oct 19, 2024
    Cite
    Young-Don Choi (2024). (HS 2) Automate Workflows using Jupyter notebook to create Large Extent Spatial Datasets [Dataset]. http://doi.org/10.4211/hs.a52df87347ef47c388d9633925cde9ad
    Explore at:
    Dataset updated
    Oct 19, 2024
    Dataset provided by
    Hydroshare
    Authors
    Young-Don Choi
    Description

    We implemented automated workflows using Jupyter notebooks for each state. The GIS processing, crucial for merging, extracting, and projecting GeoTIFF data, was performed using ArcPy, a Python package for geographic data analysis, conversion, and management within ArcGIS (Toms, 2015). After generating state-scale LES (large extent spatial) datasets in GeoTIFF format, we utilized the xarray and rioxarray Python packages to convert GeoTIFF to NetCDF. Xarray is a Python package for working with multi-dimensional arrays; rioxarray is the rasterio extension for xarray, and rasterio is a Python library for reading and writing GeoTIFF and other raster formats. Xarray facilitated data manipulation and metadata addition in the NetCDF file, while rioxarray was used to save the GeoTIFF data as NetCDF. These procedures resulted in the creation of three HydroShare resources (HS 3, HS 4, and HS 5) for sharing state-scale LES datasets. Notably, due to licensing constraints with ArcGIS Pro, a commercial GIS software, the Jupyter notebook development was undertaken on a Windows OS.
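    A minimal sketch of the GeoTIFF-to-NetCDF conversion step described above, assuming a hypothetical input file state_les.tif:

    import rioxarray

    # Open the state-scale GeoTIFF as an xarray DataArray (file name hypothetical)
    da = rioxarray.open_rasterio("state_les.tif")

    # Name the variable so it can be written to NetCDF, then save
    da = da.rename("les")
    da.to_netcdf("state_les.nc")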

  4. TNO DGM5/VELMOD31 UTM31 xarray datasets

    • zenodo.org
    bin
    Updated Dec 22, 2023
    Cite
    Dirk Kraaijpoel (2023). TNO DGM5/VELMOD31 UTM31 xarray datasets [Dataset]. http://doi.org/10.5281/zenodo.10425411
    Explore at:
    Available download formats: bin
    Dataset updated
    Dec 22, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Dirk Kraaijpoel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Contains the DGM5 geological model and the VELMOD 3.1 velocity model as xarray datasets in UTM31 coordinates.

    Original data:

    Details DGM-diep V5 | NLOG

    Velmod-3.1 | NLOG

    Format:

    Xarray documentation

  5. Sentinel-1 RTC imagery processed by ASF over central Himalaya in High...

    • data.niaid.nih.gov
    • explore.openaire.eu
    Updated Oct 28, 2022
    Cite
    Henderson, Scott (2022). Sentinel-1 RTC imagery processed by ASF over central Himalaya in High Mountain Asia [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7126242
    Explore at:
    Dataset updated
    Oct 28, 2022
    Dataset provided by
    Scheick, Jessica
    Marshall, Emma
    Henderson, Scott
    Cherian, Deepak
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Himalayas, High-mountain Asia
    Description

    This is a dataset of Sentinel-1 radiometric terrain corrected (RTC) imagery processed by the Alaska Satellite Facility covering a region within the Central Himalaya. It accompanies a tutorial demonstrating how to access and work with Sentinel-1 RTC imagery using xarray and other open-source Python packages.

  6. ASTE Test Data

    • figshare.com
    application/x-gzip
    Updated Oct 27, 2020
    Cite
    Timothy Smith (2020). ASTE Test Data [Dataset]. http://doi.org/10.6084/m9.figshare.13150859.v1
    Explore at:
    Available download formats: application/x-gzip
    Dataset updated
    Oct 27, 2020
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Timothy Smith
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Test data for ASTE Release 1 integration with ECCOv4-py.

  7. Materials for the CUAHSI's Workshops at the National Water Center Bootcamp...

    • search.dataone.org
    • hydroshare.org
    Updated Jul 13, 2024
    Cite
    Irene Garousi-Nejad; Anthony M. Castronova (2024). Materials for the CUAHSI's Workshops at the National Water Center Bootcamp 2024 [Dataset]. https://search.dataone.org/view/sha256%3A2db498bba4a2d5c65a191c2366ab5a898ceaa7f70ddd45d032270ab7b015e368
    Explore at:
    Dataset updated
    Jul 13, 2024
    Dataset provided by
    Hydroshare
    Authors
    Irene Garousi-Nejad; Anthony M. Castronova
    Time period covered
    Jun 19, 2024 - Jun 21, 2024
    Description

    This resource includes materials for two workshops: (1) FAIR Data Management and (2) Advanced Application of Python for Hydrology and Scientific Storytelling, both prepared for presentation at the NWC Summer Institute BootCamp 2024.

  8. EDR-test

    • polytope-edr.ecmwf.int
    csv, html, json, jsonld
    Updated Dec 3, 2024
    Cite
    (2024). EDR-test [Dataset]. https://polytope-edr.ecmwf.int/collections/edr
    Explore at:
    Available download formats: json, html, jsonld, csv
    Dataset updated
    Dec 3, 2024
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Oct 30, 2000 - Oct 30, 2007
    Area covered
    Description

    Test of EDR data with xarray

  9. next-day-wildfire-spread

    • huggingface.co
    Updated Aug 9, 2024
    Cite
    Andrzej Szablewski (2024). next-day-wildfire-spread [Dataset]. https://huggingface.co/datasets/TheRootOf3/next-day-wildfire-spread
    Explore at:
    Dataset updated
    Aug 9, 2024
    Authors
    Andrzej Szablewski
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Next Day Wildfire Spread Dataset

    This dataset is an xarray version of the original Next Day Wildfire Spread dataset. It comes in three splits: train, eval, and test. Note: since the original dataset does not contain spatio-temporal information, the xarray coordinates have been set to arbitrary ranges (0-63 for spatial dimensions and 0-number_of_samples for the temporal dimension).

      Example
    

    To open a train split of the dataset and show an elevation plot at time=2137:… See the full description on the dataset page: https://huggingface.co/datasets/TheRootOf3/next-day-wildfire-spread.

  10. Dataset for the article: Robotic Feet Modeled After Ungulates Improve...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Oct 29, 2024
    Cite
    Godon, S (2024). Dataset for the article: Robotic Feet Modeled After Ungulates Improve Locomotion on Soft Wet Grounds [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_12673096
    Explore at:
    Dataset updated
    Oct 29, 2024
    Dataset provided by
    Ristolainen, A
    Godon, S
    Kruusmaa, M
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains data for three different experiments presented in the paper:

    (1) moose_feet (40 files): The moose leg experiments are labeled as ax_y.nc, where 'a' indicates attached digits and 'f' indicates free digits. The number 'x' is either 1 (front leg) or 2 (hind leg), and the number 'y' is an increment from 0 to 9 representing the 10 samples of each set.

    (2) synthetic_feet (120 files): The synthetic feet experiments are labeled as lw_a_y.nc, where 'lw' (Low Water content) can be replaced by 'mw' (Medium Water content) or 'vw' (Vast Water content). The 'a' can be 'o' (Original Go1 foot), 'r' (Rigid extended foot), 'f' (Free digits anisotropic foot), or 'a' (Attached digits). Similar to (1), the last number is an increment from 0 to 9.

    (3) Go1 (15 files): The locomotion experiments of the quadruped robot on the track are labeled as condition_y.nc, where 'condition' is either 'hard_ground' for experiments on hard ground, 'bioinspired_feet' for the locomotion of the quadruped on mud using bio-inspired anisotropic feet, or 'original_feet' for experiments where the robot used the original Go1 feet. The 'y' is an increment from 0 to 4.

    The files for moose_feet and synthetic_feet contain timestamp (s), position (m), and force (N) data.

    The files for Go1 contain timestamp (s), position (rad), velocity (rad/s), torque (Nm) data for all 12 motors, and the distance traveled by the robot (m).

    All files can be read as xarray datasets (https://docs.xarray.dev/en/stable/generated/xarray.Dataset.html).
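    For example, a single trial can be opened with xarray; the file name below follows the convention above (attached digits, front leg, first sample):

    import xarray as xr

    # Open one moose-leg trial; variables are timestamp (s), position (m), force (N)
    ds = xr.open_dataset("a1_0.nc")
    print(ds)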

  11. Collection of Materials for the CUAHSI's Workshops at the National Water...

    • hydroshare.org
    zip
    Updated Jun 10, 2025
    Cite
    Irene Garousi-Nejad; Anthony M. Castronova (2025). Collection of Materials for the CUAHSI's Workshops at the National Water Center Bootcamp 2025 [Dataset]. https://www.hydroshare.org/resource/bae7841593ec40929a2f6cd6b5871c9c
    Explore at:
    Available download formats: zip (832 bytes)
    Dataset updated
    Jun 10, 2025
    Dataset provided by
    HydroShare
    Authors
    Irene Garousi-Nejad; Anthony M. Castronova
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This resource includes materials for three workshops: (1) FAIR data management and collaborating on simulation data in the cloud; (2) advanced application of Python for working with high-value environmental datasets; and (3) configuring and running a NextGen simulation and analyzing model outputs.

  12. xesmf netcdf files for testing

    • figshare.com
    application/x-gzip
    Updated Feb 9, 2025
    Cite
    Raphael Dussin (2025). xesmf netcdf files for testing [Dataset]. http://doi.org/10.6084/m9.figshare.28378283.v1
    Explore at:
    Available download formats: application/x-gzip
    Dataset updated
    Feb 9, 2025
    Dataset provided by
    figshare
    Authors
    Raphael Dussin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Testing files for the xesmf remapping package.

  13. Replication Data for: Beating the spectroscopic Rayleigh limit via...

    • dataverse.harvard.edu
    Updated Nov 16, 2023
    Cite
    Wiktor Krokosz; Mateusz Mazelanik; Michał Lipka; Marcin Jarzyna; Wojciech Wasilewski; Konrad Banaszek; Michał Parniak (2023). Replication Data for: Beating the spectroscopic Rayleigh limit via post-processed heterodyne detection [Dataset]. http://doi.org/10.7910/DVN/F4LRZR
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Nov 16, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Wiktor Krokosz; Mateusz Mazelanik; Michał Lipka; Marcin Jarzyna; Wojciech Wasilewski; Konrad Banaszek; Michał Parniak
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Normalized variances calculated using the method described in the article, based on experimental data. Data is stored using Xarray, specifically in the NetCDF format, and can be easily accessed using the Xarray Python library, specifically by calling xarray.open_dataset(). The dataset is structured as follows:

    • two N-dimensional DataArrays, one for calculations with time displacements (labeled as time) and one for calculations with phase displacements with the time centroid already picked (labeled as final)
    • each DataArray has 5 dimensions: snr (SNR), eps (separation), ph_disp/disp (displacement), sample/sample_time (bootstrapped sample), supersample (ensemble of bootstrapped samples)
    • coordinates label the parameters along each dimension

    Usage examples

    Opening the dataset:

    import numpy as np
    import xarray as xr

    variances = xr.open_dataset("coherent.nc")

    Obtaining parameter estimates:

    def get_centroid_indices(variances):
        return np.bincount(
            variances.argmin(
                dim="disp" if "disp" in variances.dims else "ph_disp"
            ).values.flatten()
        )

    def get_centroid_index(variances):
        return np.argmax(get_centroid_indices(variances))

    def epsilon_estimator(var):
        # estimate the separation from the (clipped) normalized variance
        return 4 * np.sqrt(np.clip(var, 0, None))

    time_centroid_estimates = variances["time"].idxmin(dim="disp")
    phase_centroid_estimates = variances["final"].idxmin(dim="ph_disp")
    epsilon_estimates = epsilon_estimator(
        variances["final"].isel(ph_disp=get_centroid_index(variances["final"]))
    )

    Calculating and plotting precision:

    def plot(estimates):
        estimator_variances = estimates.var(
            dim="sample" if "sample" in estimates.dims else "sample_time"
        )
        precision = (
            1.0
            / estimator_variances.snr
            / variances.attrs["SAMPLE_SIZE"]
            / estimator_variances
        )
        precision = precision.where(xr.apply_ufunc(np.isfinite, precision), other=0)
        mean_precision = precision.mean(dim="supersample")
        mean_precision = mean_precision.where(np.isfinite(mean_precision), 0)
        precision_error = 2 * precision.std(dim="supersample").fillna(0)
        g = mean_precision.plot.scatter(
            x="eps", col="snr", col_wrap=2, sharex=True, sharey=True,
        )
        snrs = precision.snr.values  # one panel per SNR value
        for ax, snr in zip(g.axs.flat, snrs):
            ax.errorbar(
                precision.eps.values,
                mean_precision.sel(snr=snr),
                yerr=precision_error.sel(snr=snr),
                fmt="o",
            )

    plot(time_centroid_estimates)
    plot(phase_centroid_estimates)
    plot(epsilon_estimates)

  14. Training and test datasets used for building graph convolutional deep neural...

    • figshare.com
    hdf
    Updated Sep 5, 2019
    Cite
    Prakash Chandra Rathi; R. Frederick Ludlow; Marcel L. Verdonk (2019). Training and test datasets used for building graph convolutional deep neural network model for prediction molecular electrostatic surfaces [Dataset]. http://doi.org/10.6084/m9.figshare.9768071.v1
    Explore at:
    Available download formats: hdf
    Dataset updated
    Sep 5, 2019
    Dataset provided by
    figshare
    Authors
    Prakash Chandra Rathi; R. Frederick Ludlow; Marcel L. Verdonk
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The model built using these datasets can be found at https://github.com/AstexUK/ESP_DNN/tree/master/esp_dnn. The datasets themselves can be opened using the xarray Python library (http://xarray.pydata.org/en/stable/).
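    A minimal sketch of loading one of these HDF files with xarray, assuming the files are netCDF4/HDF5-compatible (file name and engine choice are assumptions):

    import xarray as xr

    # h5netcdf reads HDF5-based netCDF files; file name hypothetical
    ds = xr.open_dataset("training_data.h5", engine="h5netcdf")
    print(ds)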

  15. GloCE v1.0: Global CO2 Enhancement Dataset 2019-2023

    • zenodo.org
    bin
    Updated Apr 22, 2025
    Cite
    Yulun Zhou; Pingyu Fan; Jiangong Liu; Yuan Xu; Bo Huang; Chris Webster (2025). GloCE v1.0: Global CO2 Enhancement Dataset 2019-2023 [Dataset]. http://doi.org/10.5281/zenodo.15209825
    Explore at:
    Available download formats: bin
    Dataset updated
    Apr 22, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Yulun Zhou; Pingyu Fan; Jiangong Liu; Yuan Xu; Bo Huang; Chris Webster
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We present a globally consistent, satellite-derived dataset of CO2 enhancement (ΔXCO2), quantifying the spatially resolved excess in atmospheric CO2 concentrations as a collective consequence of anthropogenic emissions and terrestrial carbon uptake. This dataset is generated from the deviations of NASA's OCO-3 satellite retrievals, comprising 54 million observations across more than 200 countries from 2019 to 2023.

    • The dataset is currently encrypted and will be openly accessible once the article review process completes.
    • If you are eager for early access to the dataset, please email Yulun Zhou at yulunzhou@hku.hk.

    Dear reviewers, please download the datasets here and access them using the password enclosed in the review documents. Many thanks!

    Data Descriptions

    • CO2_enhancement_global.nc contains all enhancement data globally.
    • CO2_enhancement_cities.nc contains all enhancement data in global urban areas.
    • Each data row contains the following columns:
    • Datasets are stored in netcdf files and can be accessed using the Python code below:

    # install prerequisites

    ! pip install netcdf4
    ! pip install h5netcdf

    # read CO2 enhancement data
    import xarray as xr
    fn = './CO2_Enhancements_Global.nc'
    data = xr.open_dataset(fn)
    type(data)

    Please cite at least one of the following for any use of the CO2E dataset.

    Zhou, Y.*, Fan, P., Liu, J., Xu, Y., Huang, B., Webster, C. (2025). GloCE v1.0: Global CO2 Enhancement Dataset 2019-2023 [Data set]. Zenodo. https://doi.org/10.5281/zenodo.15209825

    Fan, P., Liu, J., Xu, Y., Huang, B., Webster, C., & Zhou, Y*. (Under Review) A global dataset of CO2 enhancements during 2019-2023.

    For any data inquiries, please email Yulun Zhou at yulunzhou@hku.hk.

  16. Replication Data for: Rydberg-atom-based system for benchmarking...

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Nov 19, 2024
    Cite
    Sebastian Borówka; Wiktor Krokosz; Mateusz Mazelanik; Wojciech Wasilewski; Michał Parniak (2024). Replication Data for: Rydberg-atom-based system for benchmarking millimeter-wave automotive radar chips [Dataset]. http://doi.org/10.7910/DVN/OYUNJ1
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Nov 19, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Sebastian Borówka; Wiktor Krokosz; Mateusz Mazelanik; Wojciech Wasilewski; Michał Parniak
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Simulation Data

    The waveplate.hdf5 file stores the results of the FDTD simulation that are visualized in Fig. 3 b)-d). The simulation was performed using the Tidy3D Python library, and its methods are also used for data visualization. The following snippet can be used to visualize the data:

    import tidy3d as td
    import matplotlib.pyplot as plt

    sim_data: td.SimulationData = td.SimulationData.from_file("waveplate.hdf5")
    fig, axs = plt.subplots(1, 2, tight_layout=True, figsize=(12, 5))
    for fn, ax in zip(("Ex", "Ey"), axs):
        sim_data.plot_field("field_xz", field_name=fn, val="abs^2", ax=ax).set_aspect(1 / 10)
        ax.set_xlabel("x [$\mu$m]")
        ax.set_ylabel("z [$\mu$m]")
    fig.show()

    Measurement Data

    Signal data used for plotting Fig. 4-6. The data is stored in NetCDF, a self-describing data format that is easy to manipulate using the Xarray Python library, specifically by calling xarray.open_dataset(). Three datasets are provided and structured as follows:

    • The electric_fields.nc dataset contains data displayed in Fig. 4. It has 3 data variables, corresponding to the signals themselves as well as estimated Rabi frequencies and electric fields. The freq dimension is the x-axis and contains coordinates for the probe field detuning in MHz. The n dimension labels different configurations of the applied electric field, with the 0th one having no EHF field.
    • The detune.nc dataset contains data displayed in Fig. 6. It has 2 data variables, corresponding to the signals themselves as well as estimated peak separations, multiplied by the coupling factor. The freq dimension is the same, while the detune dimension labels different EHF field detunings, from -100 to 100 MHz with a step of 10.
    • The waveplates.nc dataset contains data displayed in Fig. 5. It contains estimated Rabi frequencies calculated for different waveplate positions. The angles are stored in radians. There is the quarter- and half-waveplate to choose from.

    Usage examples

    Opening the datasets:

    import matplotlib.pyplot as plt
    import numpy as np
    import xarray as xr

    electric_fields_ds = xr.open_dataset("data/electric_fields.nc")
    detuned_ds = xr.open_dataset("data/detune.nc")
    waveplates_ds = xr.open_dataset("data/waveplates.nc")
    sigmas_da = xr.open_dataarray("data/sigmas.nc")
    peak_heights_da = xr.open_dataarray("data/peak_heights.nc")

    Plotting the Fig. 4 signals and printing params:

    fig, ax = plt.subplots()
    electric_fields_ds["signals"].plot.line(x="freq", hue="n", ax=ax)
    print(f"Rabi frequencies [Hz]: {electric_fields_ds['rabi_freqs'].values}")
    print(f"Electric fields [V/m]: {electric_fields_ds['electric_fields'].values}")
    fig.show()

    Plotting the Fig. 5 data:

    (waveplates_ds["rabi_freqs"] ** 2).plot.scatter(x="angle", col="waveplate")

    Plotting the Fig. 6 signals for chosen detunes:

    fig, ax = plt.subplots()
    detuned_ds["signals"].sel(detune=[-100, -70, -40, 40, 70, 100]).plot.line(x="freq", hue="detune", ax=ax)
    fig.show()

    Plotting the Fig. 6 inset plot:

    fig, ax = plt.subplots()
    detuned_ds["separations"].plot.scatter(x="detune", ax=ax)
    ax.plot(
        detuned_ds.detune,
        np.sqrt(detuned_ds.detune**2 + detuned_ds["separations"].sel(detune=0) ** 2),
    )
    fig.show()

    Plotting the Fig. 7 calculated peak widths:

    sigmas_da.plot.scatter()

    Plotting the Fig. 8 calculated detuned smaller peak heights:

    peak_heights_da.plot.scatter()

  17. ABS spin

    • explore.openaire.eu
    Updated Jan 27, 2023
    Cite
    David van Driel (2023). ABS spin [Dataset]. http://doi.org/10.5281/zenodo.7220681
    Explore at:
    Dataset updated
    Jan 27, 2023
    Authors
    David van Driel
    Description

    Data and code for "Spin-filtered measurements of Andreev Bound States" by van Driel, David; Wang, Guanzhong; Dvir, Tom. This folder contains the raw data and code used to generate the plots for the paper (arXiv: ??). To run the Jupyter notebook, install Anaconda and execute conda env create -f environment.yml, followed by conda activate spinABS. Finally, run jupyter notebook to launch the notebook called 'zenodo_notebook.ipynb'. Raw data are stored in netCDF (.nc) format. The files are exported by the data acquisition package QCoDeS and can be read as an xarray Dataset.
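    For example, one of the raw data files can be opened directly (file name hypothetical):

    import xarray as xr

    # QCoDeS exports netCDF files that xarray reads directly
    ds = xr.open_dataset("measurement.nc")
    print(ds)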

  18. IMMEC_dMFA_historic: Dataset and code for "Plastics in the German Building...

    • zenodo.org
    zip
    Updated Apr 25, 2025
    Cite
    Sarah Schmidt; Xavier-François Verni; Thomas Gibon; David Laner (2025). IMMEC_dMFA_historic: Dataset and code for "Plastics in the German Building and Infrastructure Sector: A High-Resolution Dataset on Historical Flows, Stocks, and Legacy Substance Contamination" [Dataset]. http://doi.org/10.5281/zenodo.15049210
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 25, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Sarah Schmidt; Xavier-François Verni; Thomas Gibon; David Laner
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    1. Dataset Description

    This dataset provides simulated data on plastic and substance flows and stocks in buildings and infrastructure as described in the data article "Plastics in the German Building and Infrastructure Sector: A High-Resolution Dataset on Historical Flows, Stocks, and Legacy Substance Contamination". Besides simulated data, the repository contains input data and model files used to produce the simulated data.

    Files Included

    Data & Data Visualization: The dataset contains input data and simulated data for the six main plastic applications in buildings and infrastructure in Germany in the period from 1950 to 2023, which are profiles, flooring, pipes, insulation material, cable insulations, and films. For each application the data are provided in a sub-directory (1_ ... 6_) following the structure described below.

    Input Data:
    The input data are stored in an xlsx-file with three sheets: flows, parameters, and data quality assessment. The data sources for all input data are detailed in the Supplementary Material of the linked Data in Brief article.

    Simulated Data:
    Simulated data are stored in a sub-folder, which contains:

    • Data visualization:
      • flows_and_stocks_by_product_type.png: Illustration of consumed products, in-use-stocks, and end-of-life flows, aggregated by product type (median values).
      • flows_and_stocks_by_polymer.png: Illustration of consumed products, in-use-stocks, and end-of-life flows, aggregated by polymer (median values).
      • flows_and_stocks_with_uncertainty.png: Illustration of consumed products, in-use-stocks, and end-of-life flows, aggregated by product (median values and 68% confidence interval).
      • contaminants_in_F3-4.png: Illustration of simulated legacy contaminant concentrations in consumed products (median values and 68% confidence interval).
      • contaminants_in_F4-5.png: Illustration of simulated legacy contaminant concentrations in end-of-life-flows (median values and 68% confidence interval).
    • Data:
      • simulated_data_[product].xlsx – Time series of flow and stock values, aggregated by product, type, polymer, and substance. Each data point includes:
        • Mean
        • Standard deviation
        • Median
        • 2.5%-quantile
        • 16%-quantile
        • 84%-quantile
        • 97.5%-quantile
      • MFA_model.pkl.gz – Model structure and input parameters, including:
      • Model classification – A dictionary summarizing the model structure {model_dimension: [items per model dimension]}
      • param_df – A dataframe containing input parameter values for each Monte Carlo run
      • outputmatrix.pkl.gz – Matrix of deterministic values
      • openlooprecycling.pkl – Xarray DataArray containing flow values of flow E7.1 for open-loop recycling (only available for sub-models that generate recycled plastics for open-loop recycling)
      • full_arrays-folder (contains non-aggregated data for all Monte Carlo runs):
        • flow_[flow_ID].pkl / stock_[stock_ID].pkl – Complete simulated flow and stock data.

    Note: All files in the [product]/simulated_data folder are automatically replaced with updated model results upon execution of immec_dmfa_calculate_submodels.py.

    To reduce storage requirements, data are stored in gzipped pickle files (.pkl.gz), while smaller files are provided as pickle files (.pkl). To open the files, users can use Python with the following code snippet:

    import gzip
    import pickle
    
    # Load a gzipped pickle file
    with gzip.open("filename.pkl.gz", "rb") as f:
        data = pickle.load(f)
    
    # Load a regular pickle file
    with open("filename.pkl", "rb") as f:
        data = pickle.load(f)

    Please note that opening pickle files requires compatible versions of numpy and pandas, as the files may have been created using version-specific data structures. If you encounter errors, ensure your package versions match those used during file creation (pandas: 2.2.3, numpy: 2.2.4).

    Simulated data are provided as Xarray datasets, a data structure designed for efficient handling, analysis, and visualization of multi-dimensional labeled data. For more details on using Xarray, please refer to the official documentation: https://docs.xarray.dev/en/stable/

    Core Model Files:

    • immec_dmfa_calculate_submodels.py – The primary model file, orchestrating the execution by calling functions from other files, running simulations, and storing results.
    • immec_dmfa_setup.py – Sets up the material flow model, imports all input data in the required format, and stores simulated data.
    • immec_dmfa_calculations.py – Implements mass balance equations and stock modeling equations to solve the model.
    • immec_dmfa_visualization.py – Provides functions to visualize simulated flows, stocks, and substance concentrations.
    • requirements.txt – Lists the required Python packages for running the model.

    Computational Considerations:
    During model execution, large arrays are generated, requiring significant memory. To enable computation on standard computers, Monte Carlo simulations are split into multiple chunks:

    • The number of runs per chunk is specified for each submodel in model_aspects.xlsx.
    • The number of chunks is set in immec_dmfa_calculate_submodels.py.

    Dependencies
    The model relies on the ODYM framework. To run the model, ODYM must be downloaded from https://github.com/IndEcol/ODYM (S. Pauliuk, N. Heeren, ODYM — An open software framework for studying dynamic material systems: Principles, implementation, and data structures, Journal of Industrial Ecology 24 (2020) 446–458. https://doi.org/10.1111/jiec.12952.)

    7_Model_Structure:

    • model_aspects.xlsx: Overview of model items in each dimension of each sub-model
    • parameters.xlsx: Overview of model parameters
    • processes.xlsx: Overview of processes
    • flows.xlsx: Overview of flows (P_Start and P_End mark the process-ID of the source and target of each flow)
    • stocks.xlsx: Overview of stocks

    8_Additional_Data: This folder contains supplementary data used in the model, including substance concentrations, data quality assessment scores, open-loop recycling distributions, and lifetime distributions.

    • concentrations.xlsx – Substance concentrations in plastic products, provided as average, minimum, and maximum values.
    • pedigree.xlsx – Pedigree scores for data quality assessment, following the methodology described in: D. Laner, J. Feketitsch, H. Rechberger, J. Fellner (2016). A Novel Approach to Characterize Data Uncertainty in Material Flow Analysis and its Application to Plastics Flows in Austria. Journal of Industrial Ecology, 20, 1050–1063. https://doi.org/10.1111/jiec.12326.
    • open_loop_recycling.xlsx – Distribution of open-loop recycled plastics into other plastic applications in buildings and infrastructure.
    • Lifetime_Distributions
      • hibernation.xlsx – Assumed retention time of products in hibernating stocks.
      • lifetime_dict.pkl – Dictionary containing Weibull functions, used to determine the best fits for LifetimeInputs.xlsx.
      • LifetimeInputs.xlsx – Input data for identifying lifetime functions.
      • LifetimeParameters.xlsx – Derived lifetime parameters, used in dynamic stock modeling.
      • Lifetimes.ipynb – Jupyter Notebook containing code for identifying suitable lifetime distribution parameters

    2. Methodology

    The dataset was generated using a dynamic material flow analysis (dMFA) model. For a complete methodology description, refer to the Data in Brief article (add DOI).

    3. How to Cite This Dataset

    If you use this dataset, please cite: Schmidt, S., Verni, X.-F., Gibon, T., Laner, D. (2025). Dataset for: Plastics in the German Building and Infrastructure Sector: A High-Resolution Dataset on Historical Flows, Stocks, and Legacy Substance Contamination, Zenodo. DOI: 10.5281/zenodo.15049210

    4. License & Access

    This dataset is licensed under CC BY-NC 4.0, permitting use, modification, and distribution for non-commercial purposes, provided that proper attribution is given.

    5. Contact Information

    For questions or further details, please contact:
    Sarah Schmidt
    Center for Resource Management and Solid Waste Engineering
    University of Kassel
    Email: sarah.schmidt@uni-kassel.de

  19. ESA CCI SM GAPFILLED Long-term Climate Data Record of Surface Soil Moisture...

    • researchdata.tuwien.ac.at
    zip
    Updated Jun 6, 2025
    Cite
    Wolfgang Preimesberger; Pietro Stradiotti; Wouter Arnoud Dorigo (2025). ESA CCI SM GAPFILLED Long-term Climate Data Record of Surface Soil Moisture from merged multi-satellite observations [Dataset]. http://doi.org/10.48436/3fcxr-cde10
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 6, 2025
    Dataset provided by
    TU Wien
    Authors
    Wolfgang Preimesberger; Pietro Stradiotti; Wouter Arnoud Dorigo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description
    This dataset was produced with funding from the European Space Agency (ESA) Climate Change Initiative (CCI) Plus Soil Moisture Project (CCN 3 to ESRIN Contract No: 4000126684/19/I-NB "ESA CCI+ Phase 1 New R&D on CCI ECVS Soil Moisture"). Project website: https://climate.esa.int/en/projects/soil-moisture/

    This dataset contains information on the Surface Soil Moisture (SM) content derived from satellite observations in the microwave domain.

    Dataset paper (public preprint)

    A description of this dataset, including the methodology and validation results, is available at:

    Preimesberger, W., Stradiotti, P., and Dorigo, W.: ESA CCI Soil Moisture GAPFILLED: An independent global gap-free satellite climate data record with uncertainty estimates, Earth Syst. Sci. Data Discuss. [preprint], https://doi.org/10.5194/essd-2024-610, in review, 2025.

    Abstract

    ESA CCI Soil Moisture is a multi-satellite climate data record that consists of harmonized, daily observations coming from 19 satellites (as of v09.1) operating in the microwave domain. The wealth of satellite information, particularly over the last decade, facilitates the creation of a data record with the highest possible data consistency and coverage.
    However, data gaps are still found in the record. This is particularly notable in earlier periods when a limited number of satellites were in operation, but can also arise from various retrieval issues, such as frozen soils, dense vegetation, and radio frequency interference (RFI). These data gaps present a challenge for many users, as they have the potential to obscure relevant events within a study area or are incompatible with (machine learning) software that often relies on gap-free inputs.
    Since the requirement of a gap-free ESA CCI SM product was identified, various studies have demonstrated the suitability of different statistical methods to achieve this goal. A fundamental feature of such a gap-filling method is that it relies only on the original observational record, without the need for ancillary variables or model-based information. Due to the intrinsic challenge, no global, long-term, univariate gap-filled product was available until now. In this version of the record, data gaps due to missing satellite overpasses and invalid measurements are filled using the Discrete Cosine Transform (DCT) Penalized Least Squares (PLS) algorithm (Garcia, 2010). A linear interpolation is applied over periods of (potentially) frozen soils with little to no variability in (frozen) soil moisture content. Uncertainty estimates are based on models calibrated in experiments to fill satellite-like gaps introduced to GLDAS Noah reanalysis soil moisture (Rodell et al., 2004), and consider the gap size and local vegetation conditions as parameters that affect the gap-filling performance.

    Summary

    • Gap-filled global estimates of volumetric surface soil moisture from 1991-2023 at 0.25° sampling
    • Fields of application (partial): climate variability and change, land-atmosphere interactions, global biogeochemical cycles and ecology, hydrological and land surface modelling, drought applications, and meteorology
    • Method: Modified version of DCT-PLS (Garcia, 2010) interpolation/smoothing algorithm, linear interpolation over periods of frozen soils. Uncertainty estimates are provided for all data points.
    • More information: See Preimesberger et al. (2025) and the ESA CCI SM Algorithm Theoretical Baseline Document [Chapter 7.2.9] (Dorigo et al., 2023), https://doi.org/10.5281/zenodo.8320869

    Programmatic Download

    You can use command line tools such as wget or curl to download (and extract) data for multiple years. The following script will download and extract the complete data set to the local directory ~/Downloads on Linux or macOS systems.

    #!/bin/bash

    # Set download directory
    DOWNLOAD_DIR=~/Downloads

    base_url="https://researchdata.tuwien.at/records/3fcxr-cde10/files"

    # Loop through years 1991 to 2023 and download & extract data
    for year in {1991..2023}; do
        echo "Downloading $year.zip..."
        wget -q -P "$DOWNLOAD_DIR" "$base_url/$year.zip"
        unzip -o "$DOWNLOAD_DIR/$year.zip" -d "$DOWNLOAD_DIR"
        rm "$DOWNLOAD_DIR/$year.zip"
    done

    Data details

    The dataset provides global daily estimates for the 1991-2023 period at 0.25° (~25 km) horizontal grid resolution. Daily images are grouped by year (YYYY), each subdirectory containing one netCDF image file for a specific day (DD), month (MM) in a 2-dimensional (longitude, latitude) grid system (CRS: WGS84). The file name has the following convention:

    ESACCI-SOILMOISTURE-L3S-SSMV-COMBINED_GAPFILLED-YYYYMMDD000000-fv09.1r1.nc

    Data Variables

    Each netCDF file contains 3 coordinate variables (WGS84 longitude, latitude and time stamp), as well as the following data variables:

    • sm: (float) The Soil Moisture variable reflects estimates of daily average volumetric soil moisture content (m3/m3) in the soil surface layer (~0-5 cm) over a whole grid cell (0.25 degree).
    • sm_uncertainty: (float) The Soil Moisture Uncertainty variable reflects the uncertainty (random error) of the original satellite observations and of the predictions used to fill observation data gaps.
    • sm_anomaly: Soil moisture anomalies (reference period 1991-2020) derived from the gap-filled values (`sm`)
    • sm_smoothed: Contains DCT-PLS predictions used to fill data gaps in the original soil moisture field. These values are also provided where an observation was initially available (compare `gapmask`); in that case they provide a smoothed version of the original data.
    • gapmask: (0 | 1) Indicates grid cells where a satellite observation is available (1), and where the interpolated (smoothed) values are used instead (0) in the 'sm' field.
    • frozenmask: (0 | 1) Indicates grid cells where ERA5 soil temperature is <0 °C. In this case, a linear interpolation over time is applied.

    Additional information for each variable is given in the netCDF attributes.
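    As a minimal sketch, a single daily file can be inspected with xarray (the date in the file name is illustrative):

    import xarray as xr

    # Open one daily image following the naming convention above
    ds = xr.open_dataset("ESACCI-SOILMOISTURE-L3S-SSMV-COMBINED_GAPFILLED-20200101000000-fv09.1r1.nc")

    # Gap-filled soil moisture and the mask separating observed from interpolated cells
    print(ds["sm"])
    print(ds["gapmask"])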

    Version Changelog

    Changes in v9.1r1 (previous version was v09.1):

    • This version uses a novel uncertainty estimation scheme as described in Preimesberger et al. (2025).

    Software to open netCDF files

    These data can be read by any software that supports Climate and Forecast (CF) conform metadata standards for netCDF files, such as the xarray or netCDF4 Python libraries.

    References

    • Preimesberger, W., Stradiotti, P., and Dorigo, W.: ESA CCI Soil Moisture GAPFILLED: An independent global gap-free satellite climate data record with uncertainty estimates, Earth Syst. Sci. Data Discuss. [preprint], https://doi.org/10.5194/essd-2024-610, in review, 2025.
    • Dorigo, W., Preimesberger, W., Stradiotti, P., Kidd, R., van der Schalie, R., van der Vliet, M., Rodriguez-Fernandez, N., Madelon, R., & Baghdadi, N. (2023). ESA Climate Change Initiative Plus - Soil Moisture Algorithm Theoretical Baseline Document (ATBD) Supporting Product Version 08.1 (version 1.1). Zenodo. https://doi.org/10.5281/zenodo.8320869
    • Garcia, D., 2010. Robust smoothing of gridded data in one and higher dimensions with missing values. Computational Statistics & Data Analysis, 54(4), pp.1167-1178. Available at: https://doi.org/10.1016/j.csda.2009.09.020
    • Rodell, M., Houser, P. R., Jambor, U., Gottschalck, J., Mitchell, K., Meng, C.-J., Arsenault, K., Cosgrove, B., Radakovich, J., Bosilovich, M., Entin, J. K., Walker, J. P., Lohmann, D., and Toll, D.: The Global Land Data Assimilation System, Bulletin of the American Meteorological Society, 85, 381 – 394, https://doi.org/10.1175/BAMS-85-3-381, 2004.

    Related Records

    The following records are all part of the Soil Moisture Climate Data Records from satellites community:

    • ESA CCI SM MODELFREE Surface Soil Moisture Record: https://doi.org/10.48436/svr1r-27j77
  20. dwd

    • huggingface.co
    Updated Feb 11, 2024
    Cite
    Jacob (2024). dwd [Dataset]. https://huggingface.co/datasets/jacobbieker/dwd
    Explore at:
    Dataset updated
    Feb 11, 2024
    Authors
    Jacob
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Dataset Card for DWD Observations

    This dataset is a collection of historical German Weather Service (DWD) weather station observations at 10-minute and hourly resolutions for various parameters. The data has been converted to Zarr for use with Xarray, and was gathered using the wonderful wetterdienst package.

    See the full description on the dataset page: https://huggingface.co/datasets/jacobbieker/dwd.
