Renamed the "Unindexed dimensions" section in the Dataset and DataArray repr (added in v0.9.0) to "Dimensions without coordinates".
cmomy is a Python package for calculating central moments and co-moments in a numerically stable and direct way. Behind the scenes, cmomy uses Numba to rapidly calculate moments. cmomy provides utilities to calculate central moments from individual samples, from precomputed central moments, and from precomputed raw moments. It also provides routines to perform bootstrap resampling based on raw data or precomputed moments. cmomy has both NumPy array and xarray DataArray interfaces.
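The kind of numerically stable accumulation such a package relies on can be illustrated with Welford's online algorithm, which avoids the catastrophic cancellation of the naive "sum of squares minus squared sum" formula. This is a plain-Python sketch of the underlying idea, not cmomy's actual API:

```python
def welford_moments(samples):
    """Numerically stable mean and second central moment (variance),
    computed in a single pass via Welford's update.
    Illustrative only; this is not cmomy's API."""
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the current mean
    for n, x in enumerate(samples, start=1):
        delta = x - mean
        mean += delta / n          # update mean with the new sample
        m2 += delta * (x - mean)   # update using deltas from old AND new mean
    return mean, m2 / len(samples)

mean, var = welford_moments([1.0, 2.0, 3.0, 4.0])
```

Each update touches only the running mean and the accumulated squared deviation, so moments can be merged or extended sample-by-sample without revisiting the raw data.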
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the data cube (an xarray DataArray) used in Jiang et al. 2023, "Revisiting ε Eridani with NEID: Identifying New Activity-Sensitive Lines in a Young K Dwarf Star" (in press). The cube contains all line parameters (centroid, depth, FWHM, and integrated flux) for each line in the compiled line list over 32 NEID observations of ε Eridani spanning a six-month period from September 2021 to February 2022, as well as the measured RV and activity indices for each observation. For information on how the line parameters are measured, see the paper.
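As a toy illustration of working with a cube like this in xarray, the snippet below builds a small DataArray whose dimension and coordinate names are assumptions based on the description above (they are not the repository's actual layout) and selects one line parameter by label:

```python
import numpy as np
import xarray as xr

# Toy cube: 32 observations x a small line list x 4 line parameters.
# Dimension/coordinate names here are illustrative assumptions.
params = ["centroid", "depth", "fwhm", "integrated_flux"]
cube = xr.DataArray(
    np.random.default_rng(0).normal(size=(32, 5, 4)),
    dims=("observation", "line", "parameter"),
    coords={"parameter": params},
    name="line_parameters",
)

# Label-based selection: the depth of every line in the first observation.
depths = cube.sel(parameter="depth").isel(observation=0)
```

Selecting by coordinate label (`sel`) rather than integer position keeps analysis code readable regardless of the order in which parameters are stored.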
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This dataset provides simulated data on plastic and substance flows and stocks in buildings and infrastructure as described in the data article "Plastics in the German Building and Infrastructure Sector: A High-Resolution Dataset on Historical Flows, Stocks, and Legacy Substance Contamination". Besides simulated data, the repository contains input data and model files used to produce the simulated data.
Data & Data Visualization: The dataset contains input data and simulated data for the six main plastic applications in buildings and infrastructure in Germany over the period from 1950 to 2023: profiles, flooring, pipes, insulation material, cable insulation, and films. For each application, the data are provided in a sub-directory (1_ ... 6_) following the structure described below.
Input Data:
The input data are stored in an xlsx-file with three sheets: flows, parameters, and data quality assessment. The data sources for all input data are detailed in the Supplementary Material of the linked Data in Brief article.
Simulated Data:
Simulated data are stored in a sub-folder, which contains:
Note: All files in the [product]/simulated_data folder are automatically replaced with updated model results upon execution of immec_dmfa_calculate_submodels.py.
To reduce storage requirements, data are stored in gzipped pickle files (.pkl.gz), while smaller files are provided as pickle files (.pkl). To open the files, users can use Python with the following code snippet:
import gzip
import pickle

# Load a gzipped pickle file
with gzip.open("filename.pkl.gz", "rb") as f:
    data = pickle.load(f)

# Load a regular pickle file
with open("filename.pkl", "rb") as f:
    data = pickle.load(f)
Please note that opening pickle files requires compatible versions of numpy and pandas, as the files may have been created using version-specific data structures. If you encounter errors, ensure your package versions match those used during file creation (pandas 2.2.3, numpy 2.2.4).
Simulated data are provided as Xarray datasets, a data structure designed for efficient handling, analysis, and visualization of multi-dimensional labeled data. For more details on using Xarray, please refer to the official documentation: https://docs.xarray.dev/en/stable/
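A minimal sketch of label-based access on an xarray Dataset follows; the variable, dimension, and coordinate names below are invented for illustration and are not the dataset's actual layout:

```python
import numpy as np
import xarray as xr

# Hypothetical Dataset with one labeled variable over two dimensions.
ds = xr.Dataset(
    {"flow": (("year", "application"), np.ones((3, 2)))},
    coords={"year": [2021, 2022, 2023], "application": ["pipes", "profiles"]},
)

# Index by coordinate value rather than integer position.
pipes_2022 = ds["flow"].sel(year=2022, application="pipes")
```

Once loaded from the pickle files, the real Datasets can be inspected the same way: printing the Dataset shows its dimensions, coordinates, and variables, and `.sel()` slices them by label.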
Core Model Files:
Computational Considerations:
During model execution, large arrays are generated, requiring significant memory. To enable computation on standard computers, Monte Carlo simulations are split into multiple chunks:
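The chunking idea can be sketched generically as follows; this is a plain numpy illustration under assumed names, not the model's actual implementation. Only one chunk of draws is held in memory at a time, while running sums allow the mean and variance over all draws to be recovered at the end:

```python
import numpy as np

def run_mc_in_chunks(n_total, chunk_size, simulate, seed=0):
    """Run `n_total` Monte Carlo draws in chunks of `chunk_size`,
    accumulating sums so peak memory stays bounded by one chunk.
    `simulate(rng, n)` must return an array of n draws (hypothetical API)."""
    rng = np.random.default_rng(seed)
    sums = 0.0
    sq_sums = 0.0
    done = 0
    while done < n_total:
        n = min(chunk_size, n_total - done)
        draws = simulate(rng, n)              # shape (n, ...), freed next loop
        sums = sums + draws.sum(axis=0)
        sq_sums = sq_sums + (draws ** 2).sum(axis=0)
        done += n
    mean = sums / n_total
    var = sq_sums / n_total - mean ** 2
    return mean, var

# Example: 10,000 normal draws for 3 quantities, 1,000 at a time.
mean, var = run_mc_in_chunks(
    10_000, 1_000, lambda rng, n: rng.normal(10.0, 2.0, size=(n, 3))
)
```

The trade-off is a modest loss of numerical robustness versus a two-pass variance; for tighter accuracy a Welford-style merge per chunk would be used instead.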
Dependencies
The model relies on the ODYM framework. To run the model, ODYM must be downloaded from https://github.com/IndEcol/ODYM (S. Pauliuk, N. Heeren, ODYM — An open software framework for studying dynamic material systems: Principles, implementation, and data structures, Journal of Industrial Ecology 24 (2020) 446–458. https://doi.org/10.1111/jiec.12952.)
7_Model_Structure:
8_Additional_Data: This folder contains supplementary data used in the model, including substance concentrations, data quality assessment scores, open-loop recycling distributions, and lifetime distributions.
The dataset was generated using a dynamic material flow analysis (dMFA) model. For a complete methodology description, refer to the Data in Brief article (add DOI).
If you use this dataset, please cite: Schmidt, S., Verni, X.-F., Gibon, T., Laner, D. (2025). Dataset for: Plastics in the German Building and Infrastructure Sector: A High-Resolution Dataset on Historical Flows, Stocks, and Legacy Substance Contamination, Zenodo. DOI: 10.5281/zenodo.15049210
This dataset is licensed under CC BY-NC 4.0, permitting use, modification, and distribution for non-commercial purposes, provided that proper attribution is given.
For questions or further details, please contact:
Sarah Schmidt
Center for Resource Management and Solid Waste Engineering
University of Kassel
Email: sarah.schmidt@uni-kassel.de