100+ datasets found
  1. Slivisu: A visual analytics tool to validate simulation models against...

    • dataservices.gfz-potsdam.de
    Updated 2018
    Cite
    Andrea Unger; Daniela Rabe; Volker Klemann; Daniel Eggert; Doris Dransch (2018). Slivisu: A visual analytics tool to validate simulation models against collected data [Dataset]. http://doi.org/10.5880/gfz.1.5.2018.007
    Explore at:
    Dataset updated
    2018
    Dataset provided by
    DataCite (https://www.datacite.org/)
    GFZ Data Services
    Authors
    Andrea Unger; Daniela Rabe; Volker Klemann; Daniel Eggert; Doris Dransch
    License

    https://www.gnu.org/licenses/gpl-3.0.html

    Description

    The validation of a simulation model is a crucial task in model development. It involves the comparison of simulation data to observation data and the identification of suitable model parameters. SLIVISU is a Visual Analytics framework that enables geoscientists to perform these tasks for observation data that are sparse and uncertain. Primarily, SLIVISU was designed to evaluate sea level indicators, which are geological or archaeological samples supporting the reconstruction of former sea level over the last ten thousand years and are compiled in a PostgreSQL database system. At the same time, the software aims at supporting the validation of numerical sea-level reconstructions against these data by means of visual analytics.

  2. Monthly mean climate data from a transient simulation with the Whole...

    • catalogue.ceda.ac.uk
    • data-search.nerc.ac.uk
    Updated Jul 29, 2022
    + more versions
    Cite
    Ingrid Cnossen (2022). Monthly mean climate data from a transient simulation with the Whole Atmosphere Community Climate Model eXtension (WACCM-X) from 2015 to 2070 [Dataset]. https://catalogue.ceda.ac.uk/uuid/45283390b97c4a27861d74b3d915b0bd
    Explore at:
    Dataset updated
    Jul 29, 2022
    Dataset provided by
    Centre for Environmental Data Analysis (http://www.ceda.ac.uk/)
    Authors
    Ingrid Cnossen
    License

    Open Government Licence 3.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
    License information was derived automatically

    Time period covered
    Jan 1, 2015 - Dec 31, 2070
    Area covered
    Earth
    Variables measured
    atmosphere_hybrid_sigma_pressure_coordinate
    Description

    This dataset comprises monthly mean data from a global, transient simulation with the Whole Atmosphere Community Climate Model eXtension (WACCM-X) from 2015 to 2070. WACCM-X is a global atmosphere model covering altitudes from the surface up to ~500 km, i.e., including the troposphere, stratosphere, mesosphere and thermosphere. WACCM-X version 2.0 (Liu et al., 2018) was used, part of the Community Earth System Model (CESM) release 2.1.0 (http://www.cesm.ucar.edu/models/cesm2) made available by the National Center for Atmospheric Research. The model was run in free-running mode with a horizontal resolution of 1.9 degrees latitude and 2.5 degrees longitude (giving 96 latitude points and 144 longitude points) and 126 vertical levels. Further description of the model and simulation setup is provided by Cnossen (2022) and references therein. A large number of variables are included on standard monthly mean output files on the model grid, while selected variables are also offered interpolated to a constant height grid or vertically integrated in height (details below). Zonal mean and global mean output files are included as well.

    The data are provided in NetCDF format and file names have the following structure:

    f.e210.FXHIST.f19_f19.h1a.cam.h0.[YYYY]-[MM][DFT].nc

    where [YYYY] gives the year with 4 digits, [MM] gives the month (2 digits) and [DFT] specifies the data file type. The following data file types are included:

    1) Monthly mean output on the full grid for the full set of variables; [DFT] = ''
    2) Zonal mean monthly mean output for the full set of variables; [DFT] = _zm
    3) Global mean monthly mean output for the full set of variables; [DFT] = _gm
    4) Height-interpolated/-integrated output on the full grid for selected variables; [DFT] = _ht
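
    For illustration, such file names can be parsed programmatically. The following Python sketch is hypothetical (the regular expression and helper name are ours, not part of the dataset), assuming only the naming scheme given above:

    import re

    # Hypothetical parser for: f.e210.FXHIST.f19_f19.h1a.cam.h0.[YYYY]-[MM][DFT].nc
    FNAME_RE = re.compile(
        r"^f\.e210\.FXHIST\.f19_f19\.h1a\.cam\.h0\."
        r"(?P<year>\d{4})-(?P<month>\d{2})(?P<dft>_zm|_gm|_ht)?\.nc$"
    )

    def parse_waccmx_name(fname):
        """Return (year, month, data file type suffix) or None if the name does not match."""
        m = FNAME_RE.match(fname)
        if m is None:
            return None
        dft = m.group("dft") or ""  # empty suffix = full-grid file (type 1)
        return int(m.group("year")), int(m.group("month")), dft

    print(parse_waccmx_name("f.e210.FXHIST.f19_f19.h1a.cam.h0.2015-01_gm.nc"))
    # -> (2015, 1, '_gm')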

    A cos(latitude) weighting was used when calculating the global means.
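
    Such a weighted global mean can be reproduced along the following lines (a minimal numpy sketch with made-up array names, not the original processing code):

    import numpy as np

    def global_mean(field, lat_deg):
        """Global mean of field(lat, lon) using cos(latitude) area weights."""
        w = np.cos(np.deg2rad(lat_deg))  # one weight per latitude row
        zonal = field.mean(axis=1)       # average over longitude first
        return np.sum(w * zonal) / np.sum(w)

    lat = np.linspace(-90.0, 90.0, 96)   # 96 latitude points, as on the model grid
    field = np.random.rand(96, 144)      # toy stand-in for a model variable
    print(global_mean(field, lat))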

    Data were interpolated to a set of constant heights (61 levels in total) using the Z3GM variable (for variables output on midpoints, with 'lev' as the vertical coordinate) or the Z3GMI variable (for variables output on interfaces, with 'ilev' as the vertical coordinate) stored on the original output files (type 1 above). Interpolation was done separately for each longitude, latitude and time.
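
    Conceptually, this is a one-dimensional interpolation per model column, as in the following schematic sketch (our illustration, not the code used to produce the files):

    import numpy as np

    def to_constant_heights(var, z3gm, target_heights):
        """Interpolate var(lev, lat, lon) onto fixed heights using z3gm(lev, lat, lon),
        separately for each latitude/longitude column, as described above."""
        nlev, nlat, nlon = var.shape
        out = np.empty((len(target_heights), nlat, nlon))
        for j in range(nlat):
            for i in range(nlon):
                z = z3gm[:, j, i]
                order = np.argsort(z)  # np.interp requires ascending heights
                out[:, j, i] = np.interp(target_heights, z[order], var[order, j, i])
        return out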

    Mass density (DEN [g/cm3]) was calculated from the M_dens, N2_vmr, O2, and O variables on the original data files before interpolation to constant height levels.

    The Joule heating power QJ [W/m3] was calculated as Q_J = sigma_P * B^2 * ((u_i - u_n)^2 + (v_i - v_n)^2 + (w_i - w_n)^2), with sigma_P = Pedersen conductivity [S], B = geomagnetic field strength [T], u_i, v_i, and w_i = zonal, meridional, and vertical ion velocities [m/s], and u_n, v_n, and w_n = the corresponding neutral wind velocities [m/s]. QJ was integrated vertically in height (using a 2.5 km height grid spacing rather than the 61 levels on output file type 4) to give the JHH variable on the type 4 data files. The QJOULE variable, also provided, is the Joule heating rate [K/s] at each of the 61 height levels.
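
    Written out point-wise, the formula amounts to the following (an illustrative sketch; the function and argument names are ours):

    def joule_heating_power(sigma_p, b, ui, vi, wi, un, vn, wn):
        """Q_J = sigma_P * B^2 * ((u_i-u_n)^2 + (v_i-v_n)^2 + (w_i-w_n)^2),
        with units as listed above."""
        return sigma_p * b**2 * ((ui - un)**2 + (vi - vn)**2 + (wi - wn)**2)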

    All data are provided as monthly mean files with one time record per file, giving 672 files for each data file type for the period 2015-2070 (56 years).

    References:

    Cnossen, I. (2022), A realistic projection of climate change in the upper atmosphere into the 21st century, in preparation.

    Liu, H.-L., C.G. Bardeen, B.T. Foster, et al. (2018), Development and validation of the Whole Atmosphere Community Climate Model with thermosphere and ionosphere extension (WACCM-X 2.0), Journal of Advances in Modeling Earth Systems, 10(2), 381-402, doi:10.1002/2017ms001232.

  3. Laurel and Hardy 2 mean data and simulations

    • fairdomhub.org
    xlsx
    Updated Apr 8, 2022
    + more versions
    Cite
    Yin Hoon Chew; Daniel Seaton; Virginie Mengin (2022). Laurel and Hardy 2 mean data and simulations [Dataset]. https://fairdomhub.org/data_files/5003?version=1
    Explore at:
    xlsx (1.08 MB)
    Dataset updated
    Apr 8, 2022
    Authors
    Yin Hoon Chew; Daniel Seaton; Virginie Mengin
    License

    CC0 1.0: https://spdx.org/licenses/CC0-1.0

    Description

    Excel spreadsheet with the data and simulations used to prepare figures for publication; see the Metadata sheet for conditions. Data: fresh (not dry) rosette leaf biomass, measured in samples of 5 plants each on multiple days, as mean and SD. Simulations: outputs from FMv2 for Col wild-type plants, lsf1, and two simulations for prr7prr9 in which the mutation affects either starch degradation only or both starch degradation and malate/fumarate store mobilisation.

    Starch levels in carbon units (not C6), measured on days 27-28, mean and SD, with simulations as above. Malate and fumarate levels in carbon units (not C4), measured on days 27-28, mean and SD, with simulations as above. Many simulation outputs from FMv2 runs under the conditions above, from the Matlab output file.

  4. Monthly mean climate data from a transient simulation with the Whole...

    • data-search.nerc.ac.uk
    • catalogue.ceda.ac.uk
    Updated Jul 23, 2021
    Cite
    (2021). Monthly mean climate data from a transient simulation with the Whole Atmosphere Community Climate Model eXtension (WACCM-X) from 1950 to 2015 [Dataset]. https://data-search.nerc.ac.uk/geonetwork/srv/search?keyword=model
    Explore at:
    Dataset updated
    Jul 23, 2021
    Description

    This dataset comprises monthly mean data from a global, transient simulation with the Whole Atmosphere Community Climate Model eXtension (WACCM-X) from 1950 to 2015. WACCM-X is a global atmosphere model covering altitudes from the surface up to ~500 km, i.e. including the troposphere, stratosphere, mesosphere and thermosphere. WACCM-X version 2.0 (Liu et al., 2018) was used, part of the Community Earth System Model (CESM) release 2.1.0 made available by the US National Center for Atmospheric Research. The model was run in free-running mode with a horizontal resolution of 1.9° latitude by 2.5° longitude (giving 96 latitude points and 144 longitude points) and 126 vertical levels. Further description of the model and simulation setup is provided by Cnossen (2020) and references therein. A large number of variables are included on standard monthly mean output files on the model grid, while selected variables are also offered interpolated to a constant height grid or vertically integrated in height. Zonal mean and global mean output files are included as well.

    The following data file types are included:

    1) Monthly mean output on the full grid for the full set of variables; [DFT] = ''
    2) Zonal mean monthly mean output for the full set of variables; [DFT] = _zm
    3) Global mean monthly mean output for the full set of variables; [DFT] = _gm
    4) Height-interpolated/-integrated output on the full grid for selected variables; [DFT] = _ht

    A cos(latitude) weighting was used when calculating the global means. Data were interpolated to a set of constant heights (61 levels in total) using the Z3GM variable (for variables output on midpoints, with 'lev' as the vertical coordinate) or the Z3GMI variable (for variables output on interfaces, with 'ilev' as the vertical coordinate) stored on the original output files (type 1 above). Interpolation was done separately for each longitude, latitude and time. Mass density (DEN [g/cm3]) was calculated from the M_dens, N2_vmr, O2, and O variables on the original data files before interpolation to constant height levels.

    The Joule heating power QJ [W/m3] was calculated as Q_J = sigma_P * B^2 * ((u_i - u_n)^2 + (v_i - v_n)^2 + (w_i - w_n)^2), with sigma_P = Pedersen conductivity [S], B = geomagnetic field strength [T], u_i, v_i, and w_i = zonal, meridional, and vertical ion velocities [m/s], and u_n, v_n, and w_n = the corresponding neutral wind velocities [m/s]. QJ was integrated vertically in height (using a 2.5 km height grid spacing rather than the 61 levels on output file type 4) to give the JHH variable on the type 4 data files. The QJOULE variable, also provided, is the Joule heating rate [K/s] at each of the 61 height levels.

    All data are provided as monthly mean files with one time record per file, giving 792 files for each data file type for the period 1950-2015 (66 years).

  5. Data from: Gradient Boosted Machine Learning Model to Predict H2, CH4, and...

    • acs.figshare.com
    • figshare.com
    xlsx
    Updated Jul 18, 2023
    Cite
    Tom Bailey; Adam Jackson; Razvan-Antonio Berbece; Kejun Wu; Nicole Hondow; Elaine Martin (2023). Gradient Boosted Machine Learning Model to Predict H2, CH4, and CO2 Uptake in Metal–Organic Frameworks Using Experimental Data [Dataset]. http://doi.org/10.1021/acs.jcim.3c00135.s003
    Explore at:
    xlsx
    Dataset updated
    Jul 18, 2023
    Dataset provided by
    ACS Publications
    Authors
    Tom Bailey; Adam Jackson; Razvan-Antonio Berbece; Kejun Wu; Nicole Hondow; Elaine Martin
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Predictive screening of metal–organic framework (MOF) materials for their gas uptake properties has previously been limited by using data from a range of simulated sources, meaning the final predictions are dependent on the performance of these original models. In this work, experimental gas uptake data have been used to create a Gradient Boosted Tree model for the prediction of H2, CH4, and CO2 uptake over a range of temperatures and pressures in MOF materials. The descriptors used in this database were obtained from the literature, with no computational modeling needed. This model was repeated 10 times, showing an average R2 of 0.86 and a mean absolute error (MAE) of ±2.88 wt % across the runs. This model will provide gas uptake predictions for a range of gases, temperatures, and pressures as a one-stop solution, with the underlying data based on previous experimental observations in the literature rather than on simulations, which may differ from real-world results. The objective of this work is to create a machine learning model for the inference of gas uptake in MOFs. The basis of model development is experimental as opposed to simulated data, to realize its applications by practitioners. The real-world nature of this research materializes in a focus on the application of algorithms as opposed to their detailed assessment.

  6. Data from: A generic gust definition and detection method based on...

    • data.uni-hannover.de
    • search.datacite.org
    zip
    Updated Jan 20, 2022
    + more versions
    Cite
    AG PALM (2022). A generic gust definition and detection method based on wavelet-analysis [Dataset]. https://data.uni-hannover.de/sl/dataset/c76adf5a-9aa8-4ebe-bbbe-0e61436cf033
    Explore at:
    zip (96112)
    Dataset updated
    Jan 20, 2022
    Dataset authored and provided by
    AG PALM
    License

    Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    This dataset is associated with the paper Knoop et al. (2019) titled "A generic gust definition and detection method based on wavelet-analysis" published in "Advances in Science and Research (ASR)" within the Special Issue: 18th EMS Annual Meeting: European Conference for Applied Meteorology and Climatology 2018. It contains the data and analysis software required to recreate all figures in the publication.

  7. Bias-corrected d4PDF historical and non-warming counterfactual climate...

    • search.diasjp.net
    Updated Sep 6, 2018
    Cite
    Toshichika Iizumi (2018). Bias-corrected d4PDF historical and non-warming counterfactual climate simulation data [Dataset]. http://doi.org/10.20783/DIAS.544
    Explore at:
    Dataset updated
    Sep 6, 2018
    Dataset provided by
    Institute for Agro-Environmental Sciences, National Agriculture and Food Research Organization
    Authors
    Toshichika Iizumi
    Description

    The bias-corrected d4PDF dataset offers daily data of 10 climatic variables over the globe from 1951 to 2010. Data from the historical experiment and the non-warming counterfactual simulation are available (at this moment, there is no plan to conduct bias correction of data from the +4 degC experiment). See Shiogama et al. (2016), Mizuta et al. (2017) and Imada et al. (2017) for details on the original d4PDF database. For each simulation, data for a 100-member ensemble are available. The data over the sea and Antarctica are not bias-corrected (i.e., the raw data of the MRI-AGCM3.2 (Mizuta et al., 2012) were used), whereas those over the land are bias-corrected using the S14FD meteorological forcing dataset (doi:10.20783/DIAS.523) as the baseline. Variables include daily mean 2m air temperature (tave2m, °C), daily maximum 2m air temperature (tmax2m, °C), daily minimum 2m air temperature (tmin2m, °C), daily total precipitation (precsfc, mm d-1), daily mean downward shortwave radiation flux (dswrfsfc, W m-2), daily mean downward longwave radiation flux (dlwrfsfc, W m-2), daily mean 2m relative humidity (rh2m, %), daily mean 2m specific humidity (spfh2m, kg kg-1), daily mean 10m wind speed (wind10m, m s-1) and daily mean surface pressure (pressfc, hPa).

  8. Compute System Simulator: Simulation data and script for plotting Figure 6 -...

    • zenodo.org
    bin, png +1
    Updated Apr 24, 2025
    Cite
    Jarrod Leddy (2025). Compute System Simulator: Simulation data and script for plotting Figure 6 - reliability scan [Dataset]. http://doi.org/10.5281/zenodo.15270270
    Explore at:
    bin, png, text/x-python
    Dataset updated
    Apr 24, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jarrod Leddy
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Python script has an option to either generate the data or just plot it. Because the software is still closed source, it should only be used for plotting. Therefore, the data needed to generate the plot are also included, and the script has run_simulations set to false.

    This is a scan of mean time to failure, mean time to repair, and requested job size (# of GPUs) for a GB200 system. For these, the values are:

    allocation_sizes = [72, 68, 64, 60, 56, 52, 48, 44, 40] # gpus
    mttf = [100, 200, 400, 800, 1600, 3200] # days
    mttr = [1, 2, 4, 8, 14] # days

    resulting in 270 simulations.
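
    The count follows from the Cartesian product of the three lists (9 x 6 x 5 = 270). A minimal sketch of such a scan loop, with run_simulation standing in for the closed-source tool (a hypothetical name):

    from itertools import product

    allocation_sizes = [72, 68, 64, 60, 56, 52, 48, 44, 40]  # gpus
    mttf = [100, 200, 400, 800, 1600, 3200]                  # days
    mttr = [1, 2, 4, 8, 14]                                  # days

    cases = list(product(allocation_sizes, mttf, mttr))
    print(len(cases))  # 270

    # for size, f, r in cases:
    #     run_simulation(size, f, r)  # hypothetical helper; the actual tool is closed source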

    Requirements to run the plotting script:

    • python3
    • matplotlib
    • numpy
  9. The estimation of utility-consistent labor supply models by means of...

    • journaldata.zbw.eu
    • jda-test.zbw.eu
    txt
    Updated Dec 8, 2022
    Cite
    Hans Bloemen; Arie Kapteyn (2022). The estimation of utility-consistent labor supply models by means of simulated scores (replication data) [Dataset]. http://doi.org/10.15456/jae.2022319.0719493228
    Explore at:
    txt (614)
    Dataset updated
    Dec 8, 2022
    Dataset provided by
    ZBW - Leibniz Informationszentrum Wirtschaft
    Authors
    Hans Bloemen; Arie Kapteyn
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We consider a utility-consistent static labor supply model with flexible preferences and a nonlinear and possibly non-convex budget set. Stochastic error terms are introduced to represent optimization and reporting errors, stochastic preferences, and heterogeneity in wages. Coherency conditions on the parameters and the support of error distributions are imposed for all observations. The complexity of the model makes it impossible to write down the probability of participation. Hence we use simulation techniques in the estimation. We compare our approach with various simpler alternatives proposed in the literature. Both in Monte Carlo experiments and for real data the various estimation methods yield very different results.

  10. Initialization files and data supporting https://arxiv.org/abs/1506.09008

    • zenodo.org
    Updated Jan 24, 2020
    Cite
    Ryan M. Harrison; Flavio Romano; Thomas E. Ouldridge; Ard A. Louis; Jonathan Doye (2020). Initialization files and data supporting https://arxiv.org/abs/1506.09008 [Dataset]. http://doi.org/10.5281/zenodo.1753767
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Ryan M. Harrison; Flavio Romano; Thomas E. Ouldridge; Ard A. Louis; Jonathan Doye
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ###Preamble###

    This upload contains initialization files and data for simulations reported in:
    https://arxiv.org/abs/1506.09008: Coarse-grained modelling of strong DNA bending II: Cyclization

    The initialization files allow a user to repeat the reported simulations using the oxDNA model. oxDNA is available for download from:
    https://dna.physics.ox.ac.uk/index.php/Main_Page.
    The use and meaning of the input and output files are documented extensively on this wiki.


    ###Organisation###

    A pdf copy of the main text and supplementary material of the relevant paper are provided as main.pdf and SI.pdf in the head directory. Simulations are organised by system type within subdirectories.

    #################

    The "Basic cyclization" folder contains the files for sequence-independent simulations of cyclization, for varying duplex and single-stranded overhang lengths. Folder DXXCYY corresponds to simulation of a cyclization system with Nd = XX and Nbp=YY, with the meaning of these symbols given in the paper. These simulations underlie:
    - The black data points in Fig. 3, 7 and S4 (alongside data from the simulations in the "Dimerization" folder),
    - The data in Fig. 4 and S5.
    - The data points connected by solid lines in Fig. 5.
    - The black data points in Fig. 6.
    - The black data points in Fig S2(a), and the data in Fig. S2(b)
    - The black data points in Fig. S3(a)

    #################

    The "Dimerization" folder contains the files for the simulations of dimerization, at the two reference concentrations 336nM and 2690nM. For simulations at 336nM, the folder bimolecularXX contains files for the sequence-independent simulation with XX = Nd1+Ns, and bimolecularXX_seq contains the sequence-dependent variants. For simulations at 2690nM, the folder bimolecularXX contains files for the sequence-independent simulation with XX = Nd1+Nd2+Ns, and bimolecularXX_seq contains the sequence-dependent variants. Note that the folders bimolecular73 and bimolecular101_seq were accidentally deleted, and the output data is missing. Input files for bimolecular73 have been recreated (the bimolecular101_seq has not been recreated because these simulations were of very limited importance for the paper).

    These simulations provide:
    - The data points in Fig. 3, 7 and S4 (alongside data from the simulations in the "Basic cyclization" folder and the "Perturbations" folder),
    - The data points connected by dashed lines in Fig. 5.
    - The grey data points in Fig. 6.
    - The data in Table S2.

    #################

    The "Perturbations" folder contains the files for cyclization simulations that do not correspond to sequence-averaged, defect-free systems.
    - Folders DXXCYY_seq and DXXCYY_* contain files relevant to simulations of cyclization with Nd = XX and Nbp=YY, incorporating sequence-dependence. These simulations underlie the brown/blue/mauve data points in Fig. 3, 7 and S2b (alongside data from the simulations in the "Dimerization" folder).
    - Folder D5969mm contains files relevant to the simulation of a mismatch-containing system, with data reported in Table S3 and Figure S3 (intact simulations can be found in the "Basic cyclization" folder).
    - Folders D87C97, D87C97n1, D87C97n2 and D87C97nn contain files relevant to the comparison of a system with no nicks, a nick in position 1, a nick in position 2, and a double nick, respectively. Data are reported in Table S3 and Fig. S3(b).


    ###Content###

    For each system, a "closed1" and an "open1" folder are present. These correspond to the two windows of umbrella sampling that were performed separately. Within each folder are the necessary initialization files to run the simulations exactly as reported in the paper, simply by calling oxDNA from within the folder, using "inputVMMC" as the input file. Also included are output files for a single realisation of the simulation. The meaning of these files is outlined at https://dna.physics.ox.ac.uk/index.php/Main_Page.

    Note that the results in the paper were all obtained from 5 independent replicas, using different initial conditions and different seeds. These can be (statistically) recreated simply by drawing random starting configurations from the single available traj_hist file.


    ###A note on topology###

    Many of these simulations were performed with "unique" topology, which prevents non-native base pairing. In the topology files, instead of indicating the base type with a letter (A, C, G or T) in the second column, an integer n < 0 or n > 10 is used instead; the pairing rule is illustrated in the sketch after this list.
    - If n modulo 4 =0, the base is treated as possessing the interaction strengths of A but will only bind to a base with a type m = 3-n.
    - If n modulo 4 =1, the base is treated as possessing the interaction strengths of C but will only bind to a base with a type m = 3-n.
    - If n modulo 4 =2, the base is treated as possessing the interaction strengths of G but will only bind to a base with a type m = 3-n.
    - If n modulo 4 =3, the base is treated as possessing the interaction strengths of T but will only bind to a base with a type m = 3-n.
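
    A small Python sketch of this mapping (our own illustration, not part of oxDNA) makes the pairing rule concrete:

    def interaction_letter(n):
        """Interaction strengths follow n modulo 4: 0 -> A, 1 -> C, 2 -> G, 3 -> T."""
        return "ACGT"[n % 4]

    def complement_type(n):
        """A base of type n binds only to a base of type 3 - n."""
        return 3 - n

    n = 13
    print(interaction_letter(n))                   # 'C' (13 mod 4 == 1)
    print(interaction_letter(complement_type(n)))  # 'G' (3 - 13 = -10; -10 mod 4 == 2)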

    In addition, please note that the topology files for the dimerization simulations at 336nM were set out slightly strangely, in that base IDs are not assigned contiguously to contiguous sequences of bases in a strand at some points. Nonetheless, the connectivity specified by these topology files is correct.

  11. Data from: ENSEMBLES CNRM-CM3 20C3M run5, daily values

    • wdc-climate.de
    Updated Aug 20, 2008
    + more versions
    Cite
    Royer, Jean-Francois (2008). ENSEMBLES CNRM-CM3 20C3M run5, daily values [Dataset]. https://www.wdc-climate.de/ui/entry?acronym=ENSEMBLES_CNCM3_20C3M_5_D
    Explore at:
    Dataset updated
    Aug 20, 2008
    Dataset provided by
    World Data Center for Climate (WDCC) at DKRZ
    Authors
    Royer, Jean-Francois
    License

    http://ensembles-eu.metoffice.com/docs/Ensembles_Data_Policy_261108.pdf

    Time period covered
    Jan 1, 1860 - Dec 31, 1999
    Area covered
    Description

    These data represent daily values (daily mean, instantaneous daily output, diurnal cycle) of selected variables for ENSEMBLES (http://www.ensembles-eu.org). The list of output variables can be found in: http://ensembles.wdc-climate.de/output-variables

    The ocean and sea ice initial states were those of year 40 of a control simulation (CT4) with the same model. Solar, volcanic variability and land use changes are taken into account. Forcing agents included: CO2,CH4,N2O,O3,CFC11(including other CFCs and HFCs),CFC12; sulfate(Boucher),BC,sea salt,desert dust aerosols.

    These datasets are available in netCDF format. The dataset names are composed of:
    - centre/model acronym (e.g. CNCM3: CNRM/CM3)
    - scenario acronym (e.g. SRA2: SRES A2)
    - run number (e.g. 1: run 1)
    - time interval (MM: monthly mean, DM: daily mean, DC: diurnal cycle, 6H: 6-hourly, 12h: 12-hourly)
    - variable acronym with level value

    Example: CNCM3_SRA2_1_MM_hur850
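
    For example, a dataset name can be split back into these components (an illustrative sketch; the field names are ours):

    def parse_ensembles_name(name):
        """Split e.g. 'CNCM3_SRA2_1_MM_hur850' into its documented components."""
        model, scenario, run, interval, variable = name.split("_", 4)
        return {"model": model, "scenario": scenario, "run": int(run),
                "interval": interval, "variable": variable}

    print(parse_ensembles_name("CNCM3_SRA2_1_MM_hur850"))
    # {'model': 'CNCM3', 'scenario': 'SRA2', 'run': 1, 'interval': 'MM', 'variable': 'hur850'}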

    For this experiment 2 ensemble runs and 4 additional runs (run numbers 3 to 6) were carried out.

    Technical data to this experiment: CNRM-CM3 (2004): atmosphere: Arpege-Climat v3 (T42L45, cy 22b+); ocean: OPA8.1; sea ice: Gelato 3.10; river routing: TRIP

  12. NICAM simulation data used in a manuscript entitled "Impact of Aggregational...

    • zenodo.org
    application/gzip
    Updated Mar 25, 2025
    Cite
    Yutaro Nirasawa (2025). NICAM simulation data used in a manuscript entitled "Impact of Aggregational Growth Modeling on the Intensification of Extremely Intense Tropical Cyclones" [Dataset]. http://doi.org/10.5281/zenodo.15081386
    Explore at:
    application/gzip
    Dataset updated
    Mar 25, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Yutaro Nirasawa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset includes outputs simulated with NICAM, targeting extremely intense tropical cyclones (TCs): BOLAVEN (2023), MAWAR (2023), NANMADOL (2022), RAI (2021), and SURIGAE (2021). Files in "TCcenter" include the center location, center sea level pressure, and its 12-hourly mean value at each time step, for the observation data and each simulation. Directories in "NEW" and "OLD" include the output variables of each simulation. These outputs are azimuthally averaged around the TC center.

  13. Wind2Loads 6D load simulation database statistics

    • zenodo.org
    bin, txt
    Updated May 4, 2023
    Cite
    Nikolay Dimitrov (2023). Wind2Loads 6D load simulation database statistics [Dataset]. http://doi.org/10.5281/zenodo.7893722
    Explore at:
    txt, bin
    Dataset updated
    May 4, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Nikolay Dimitrov
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    1. An Excel file providing a list of the environmental conditions used to generate the load simulations. Column names correspond to the variables used as load simulation inputs. An "F" prefix in a variable name means the column holds the cumulative distribution function (CDF) value of the distribution used to generate the random sample; if there is no F prefix, the column corresponds to the physical value of the variable.

    2. Files containing the 10-minute statistics of load simulations. Files are delimited with semicolon ";"

    - Filenames specify details about the simulation: "PointNo" corresponds to the sample number listed in the "conditions list" Excel file. "SetNo" corresponds to the random realization (there are 5-8 random realizations at each sample point). Each realization corresponds to a 1 h simulation, but the simulation data are split into parts of 10 min (so there are 6 "parts" in each realization); the "part" in the filename indicates which part of the realization this is.

    - The columns of each stats file are organized as follows (first column considered to have index 1; the index arithmetic is illustrated in the sketch after this list):

    Column 1: File name

    For load channel number j, j = 1: n_channels:

    Column 7*(j-1) + 2: mean value of channel j

    Column 7*(j-1) + 3: standard deviation of channel j

    Column 7*(j-1) + 4: minimum of channel j

    Column 7*(j-1) + 5: maximum of channel j

    Column 7*(j-1) + 6: DEL4 - fatigue damage-equivalent load (DEL) with S-N curve slope m = 4, for channel j

    Column 7*(j-1) + 7: DEL8 - fatigue damage-equivalent load (DEL) with S-N curve slope m = 8, for channel j

    Column 7*(j-1) + 8: DEL12 - fatigue damage-equivalent load (DEL) with S-N curve slope m = 12, for channel j
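
    In code, the 7-columns-per-channel layout gives the following index arithmetic (a sketch using the 1-based indices from the description above; subtract 1 for 0-based tools):

    STATS = ["mean", "std", "min", "max", "DEL4", "DEL8", "DEL12"]

    def channel_columns(j):
        """1-based column indices for load channel j; column 1 is the file name."""
        start = 7 * (j - 1) + 2
        return {name: start + k for k, name in enumerate(STATS)}

    print(channel_columns(1))
    # {'mean': 2, 'std': 3, 'min': 4, 'max': 5, 'DEL4': 6, 'DEL8': 7, 'DEL12': 8}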

  14. Data from: JSON Dataset of Simulated Building Heat Control for System of...

    • researchdata.se
    • gimi9.com
    Updated Mar 21, 2025
    Cite
    Jacob Nilsson (2025). JSON Dataset of Simulated Building Heat Control for System of Systems Interoperability [Dataset]. http://doi.org/10.5878/e5hb-ne80
    Explore at:
    (438755370), (110041420), (156812), (5417)
    Dataset updated
    Mar 21, 2025
    Dataset provided by
    Luleå University of Technology
    Authors
    Jacob Nilsson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Luleå Municipality
    Description

    Interoperability in systems-of-systems is a difficult problem due to the abundance of data standards and formats. Current approaches to interoperability rely on hand-made adapters or methods using ontological metadata. This dataset was created to facilitate research on data-driven interoperability solutions. The data comes from a simulation of a building heating system, and the messages sent within control systems-of-systems. For more information see attached data documentation.

    The data comes in two semicolon-separated (;) csv files, training.csv and test.csv. The train/test split is not random; training data comes from the first 80% of simulated timesteps, and the test data is the last 20%. There is no specific validation dataset, the validation data should instead be randomly selected from the training data. The simulation runs for as many time steps as there are outside temperature values available. The original SMHI data only samples once every hour, which we linearly interpolate to get one temperature sample every ten seconds. The data saved at each time step consists of 34 JSON messages (four per room and two temperature readings from the outside), 9 temperature values (one per room and outside), 8 setpoint values, and 8 actuator outputs. The data associated with each of those 34 JSON-messages is stored as a single row in the tables. This means that much data is duplicated, a choice made to make it easier to use the data.

    The simulation data is not meant to be opened and analyzed in spreadsheet software; it is meant for training machine learning models. It is recommended to open the data with the pandas library for Python, available at https://pypi.org/project/pandas/.
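
    For instance, loading the two splits with pandas might look like this (a minimal sketch; the file names are those given above, and the validation split follows the recommendation):

    import pandas as pd

    # Both files are semicolon-delimited, as described above
    train = pd.read_csv("training.csv", sep=";")
    test = pd.read_csv("test.csv", sep=";")

    # Carve a random validation set out of the training data
    val = train.sample(frac=0.1, random_state=0)
    train = train.drop(val.index)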

    The data file with temperatures (smhi-july-23-29-2018.csv) acts as input for the thermodynamic building simulation found on Github, where it is used to get the outside temperature and corresponding timestamps. Temperature data for Luleå Summer 2018 were downloaded from SMHI.

  15. MODFLOW-NWT datasets for the simulation of the drainage infrastructure and...

    • catalog.data.gov
    • data.usgs.gov
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). MODFLOW-NWT datasets for the simulation of the drainage infrastructure and groundwater system response to changes in sea level and precipitation, Broward County, Florida [Dataset]. https://catalog.data.gov/dataset/modflow-nwt-datasets-for-the-simulation-of-the-drainage-infrastructure-and-groundwater-sys
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Broward County
    Description

    The U.S. Geological Survey, in cooperation with Broward County Environmental Planning and Resilience Division, has developed a groundwater/surface-water model to evaluate the response of the drainage infrastructure and groundwater system in Broward County to increases in sea level and potential changes in precipitation. The model was constructed using a modified version of MODFLOW-NWT, with the surface-water system represented using the Surface-Water Routing process and the Urban Runoff Process. The surface-water drainage system within this newly developed model actively simulates the extensive canal network using level-pool routing and active structures representing gates, weirs, culverts, and pumps. Steady-state and transient simulation results represented historical conditions (2013-17). Simulation results incorporating increased sea level and precipitation were used to evaluate the effects on the surface-water drainage system and wet season groundwater levels. Four future sea-level scenarios were simulated by modifying the historical inputs for both the steady-state and the transient models to represent mean sea levels of 0.5, 2.0, 2.5, and 3.0 ft above the North American Vertical Datum of 1988. This USGS data release contains all of the input and output files for the simulations described in the associated model documentation report. (https://doi.org/10.3133/sir20225074)

  16. Data from: Predicting the Glass Transition Temperature of Biopolymers via...

    • acs.figshare.com
    txt
    Updated Apr 11, 2024
    Cite
    Didac Martí; Rémi Pétuya; Emanuele Bosoni; Anne-Claude Dublanchet; Stephan Mohr; Fabien Léonforte (2024). Predicting the Glass Transition Temperature of Biopolymers via High-Throughput Molecular Dynamics Simulations and Machine Learning [Dataset]. http://doi.org/10.1021/acsapm.3c03040.s003
    Explore at:
    txt
    Dataset updated
    Apr 11, 2024
    Dataset provided by
    ACS Publications
    Authors
    Didac Martí; Rémi Pétuya; Emanuele Bosoni; Anne-Claude Dublanchet; Stephan Mohr; Fabien Léonforte
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Nature has only provided us with a limited number of biobased and biodegradable building blocks. Therefore, the fine-tuning of the sustainable polymer properties is expected to be achieved through the control of the composition of biobased copolymers for targeted applications such as cosmetics. Until now, the main approaches to alleviate the experimental efforts and accelerate the discovery of polymers have relied on machine learning models trained on experimental data, which implies enormous and difficult work in the compilation of data from heterogeneous sources. On the other hand, molecular dynamics simulations of polymers have shown that they can accurately capture the experimental trends for a series of properties. However, the combination of different ratios of monomers in copolymers can rapidly lead to a combinatorial explosion, preventing investigation of all possibilities via molecular dynamics simulations. In this work, we show that the combination of machine learning approaches and high-throughput molecular dynamics simulations permits quick and efficient sampling and characterization of the relevant chemical design space for specific applications. Reliable simulation protocols have been implemented to evaluate the glass transition temperature of a series of 58 homopolymers, which exhibit good agreement with experiments, and 488 copolymers. Overall, 2,184 simulations (four replicas per polymer) were performed, for a total simulation time of 143.052 μs. These results, constituting a data set of 546 polymers, have been used to train a machine learning model for the prediction of the MD-calculated glass transition temperature with a mean absolute error of 19.34 K and an R2 score of 0.83. Overall, within its applicability domain, this machine learning model provides an impressive acceleration over molecular dynamics simulations: the glass transition temperature of thousands of polymers can be obtained within seconds, whereas it would have taken node-years to simulate them. This type of approach can be tuned to address different design spaces or different polymer properties and thus has the potential to accelerate the discovery of polymers.

  17. Simulation data for rolling circle amplification shows a sinusoidal template...

    • ora.ox.ac.uk
    zip
    Updated Jan 1, 2017
    Cite
    Presern, D (2017). Simulation data for rolling circle amplification shows a sinusoidal template length-dependent amplification bias [Dataset]. http://doi.org/10.5287/bodleian:VJJYJXOrg
    Explore at:
    zip (460940)
    Dataset updated
    Jan 1, 2017
    Dataset provided by
    University of Oxford
    Authors
    Presern, D
    License

    https://ora.ox.ac.uk/terms_of_use

    Description

    Data were created using oxDNA2 on CentOS Linux (available at https://dna.physics.ox.ac.uk/index.php/Main_Page and https://sourceforge.net/projects/oxdna/; it also runs on other UNIX-based operating systems, including MacOSX) and manipulated primarily using awk, bash and python. All data are plain text (suffixes other than .txt only convey meaning to the human user, not to computers). Further details are in the README file included in the .zip file, so everything is in one place; the README is also plain text.

  18. Benchmarking dataset for multiskilled workforce planning with uncertain...

    • zenodo.org
    • data.niaid.nih.gov
    txt
    Updated Jan 26, 2024
    + more versions
    Cite
    César Augusto Henao; Andrés Felipe Porto; Virginia I. González (2024). Benchmarking dataset for multiskilled workforce planning with uncertain demand [Dataset]. http://doi.org/10.5281/zenodo.10570229
    Explore at:
    txt
    Dataset updated
    Jan 26, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    César Augusto Henao; Andrés Felipe Porto; Virginia I. González
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These datasets are related to the Data Article entitled: “A benchmark dataset for the retail multiskilled personnel planning under uncertain demand”, submitted to the Data Science Journal. This data article describes datasets from a home improvement retailer located in Santiago, Chile. The datasets were developed to solve a multiskilled personnel assignment problem (MPAP) under uncertain demand. Notably, these datasets were used in the published article "Multiskilled personnel assignment problem under uncertain demand: A benchmarking analysis" authored by Henao et al. (2022). Moreover, the datasets were also used in the published articles authored by Henao et al. (2016) and Henao et al. (2019) to solve MPAPs.

    The datasets include real and simulated data. Regarding the real dataset, it includes information about the store size, number of employees, employment-contract characteristics, mean value of weekly hours demand in each department, and cost parameters. Regarding the simulated datasets, they include information about the random parameter of weekly hours demand in each store department. The simulated data are presented in 18 text files classified by: (i) Sample type (in-sample or out-of-sample). (ii) Truncation-type method (zero-truncated or percentile-truncated). (iii) Coefficient of variation (5, 10, 20, 30, 40, 50%).

  19. Southern Ocean Eddies simulations

    • rda.ucar.edu
    Cite
    Southern Ocean Eddies simulations [Dataset]. https://rda.ucar.edu/lookfordata/datasets/?nb=y&b=topic&v=Atmosphere
    Explore at:
    Description

    This data includes select model output from a global, eddy-resolving numerical simulation integrated with the ocean (Smith et al. 2010), sea-ice (Hunke and Lipscomb 2008) and marine biogeochemistry (Moore et al. 2013) components of the Community Earth System Model (CESM1) (Hurrell et al. 2013), forced with atmospheric data from the Coordinated Ocean-ice Reference Experiment (CORE I) "normal year" (Large and Yeager 2004). This simulation was run for 5 years after initialization (see Harrison et al. (2018) for details on initialization), and model output was saved as 5-day means. Selected data streams include simulated physical and biogeochemical oceanographic data used in Rohr et al. (under review, a) and Rohr et al. (under review, b) to study the mechanisms by which Southern Ocean eddies modify biogeochemistry. See Rohr et al. (under review, a) for methods, results, and directions to publicly available analysis tools.

  20. Data from: ENSEMBLES EGMAM SRESA1B run3, monthly mean values

    • wdc-climate.de
    Updated Jun 15, 2006
    + more versions
    Cite
    Niehörster, Falk (2006). ENSEMBLES EGMAM SRESA1B run3, monthly mean values [Dataset]. https://www.wdc-climate.de/ui/entry?acronym=ENSEMBLES_FUBEMA_SRA1B_3_MM
    Explore at:
    Dataset updated
    Jun 15, 2006
    Dataset provided by
    World Data Center for Climate (WDCC) at DKRZ
    Authors
    Niehörster, Falk
    License

    http://ensembles-eu.metoffice.com/docs/Ensembles_Data_Policy_261108.pdf

    Time period covered
    Jan 1, 2000 - Dec 30, 2100
    Area covered
    Description

    These data represent monthly averaged values of selected variables for ENSEMBLES (http://www.ensembles-eu.org). The list of output variables can be found in: http://ensembles.wdc-climate.de/output-variables. The model output corresponds to the IPCC AR4 "720 ppm stabilization experiment (SRES A1B)". The A1B scenario is the part of the A1 family which describes a balance across all energy sources. The experiment was initialized in year 2000 of the 20C3M_3 run and continues until year 2100 with greenhouse gas forcing (CO2, CH4, N2O, CFC-11*, CFC-12) according to the A1B scenario. The stabilization experiment was not carried out for run 3.

    These datasets are available in netCDF format. The dataset names are composed of:
    - centre/model acronym (FUBEMA: Freie Universitaet Berlin / EGMAM (= ECHO-G with Middle Atmosphere and Messy))
    - scenario acronym (SRA1B: SRES A1B)
    - run number (3: run 3)
    - time interval (MM: monthly mean, DM: daily mean, DC: diurnal cycle, 6H: 6-hourly, 12h: 12-hourly)
    - variable acronym with level value

    Example: FUBEMA_SRA1B_3_MM_hur850

    For this experiment 3 ensemble runs were carried out. For model output data in higher temporal resolution and more variables, contact Falk Niehoerster.

    Technical data for this experiment: the experiment used AGCM MA/ECHAM4 (T30L39) coupled to OGCM HOPE-G (T42 with equator refinement, L20) and was run on a NEC SX-6 (hurrikan.dkrz.de).
