25 datasets found
  1.

    Data from: Gradient Boosted Machine Learning Model to Predict H2, CH4, and...

    • figshare.com
    • acs.figshare.com
    zip
    Updated Jul 18, 2023
    Cite
    Tom Bailey; Adam Jackson; Razvan-Antonio Berbece; Kejun Wu; Nicole Hondow; Elaine Martin (2023). Gradient Boosted Machine Learning Model to Predict H2, CH4, and CO2 Uptake in Metal–Organic Frameworks Using Experimental Data [Dataset]. http://doi.org/10.1021/acs.jcim.3c00135.s002
    Available download formats: zip
    Dataset updated
    Jul 18, 2023
    Dataset provided by
    ACS Publications
    Authors
    Tom Bailey; Adam Jackson; Razvan-Antonio Berbece; Kejun Wu; Nicole Hondow; Elaine Martin
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Predictive screening of metal–organic framework (MOF) materials for their gas uptake properties has been previously limited by using data from a range of simulated sources, meaning the final predictions are dependent on the performance of these original models. In this work, experimental gas uptake data has been used to create a Gradient Boosted Tree model for the prediction of H2, CH4, and CO2 uptake over a range of temperatures and pressures in MOF materials. The descriptors used in this database were obtained from the literature, with no computational modeling needed. This model was repeated 10 times, showing an average R2 of 0.86 and a mean absolute error (MAE) of ±2.88 wt % across the runs. This model will provide gas uptake predictions for a range of gases, temperatures, and pressures as a one-stop solution, with the data provided being based on previous experimental observations in the literature, rather than simulations, which may differ from their real-world results. The objective of this work is to create a machine learning model for the inference of gas uptake in MOFs. The basis of model development is experimental as opposed to simulated data to realize its applications by practitioners. The real-world nature of this research materializes in a focus on the application of algorithms as opposed to the detailed assessment of the algorithms.
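    As a rough illustration of the modelling approach described above (a gradient boosted tree regressor scored with R2 and MAE), here is a minimal sketch using scikit-learn on synthetic data; the descriptors, hyperparameters, and data are invented for the example and are not the authors' code or dataset:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic stand-ins for tabulated descriptors (e.g. surface area, pore
# volume, temperature, pressure) -- purely illustrative.
X = rng.uniform(size=(500, 4))
# Synthetic "uptake" target with a nonlinear dependence plus noise.
y = 10 * X[:, 0] + 5 * X[:, 1] * X[:, 3] + rng.normal(0, 0.5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, MAE = {mean_absolute_error(y_te, pred):.2f}")
```

    In the paper this training-and-scoring loop was repeated 10 times and the R2 and MAE averaged across runs.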

  2.

    CALCUL SUCCESSION RIGHTS Simulator

    • gimi9.com
    • data.europa.eu
    Updated Mar 19, 2023
    Cite
    (2023). CALCUL SUCCESSION RIGHTS Simulator [Dataset]. https://gimi9.com/dataset/eu_592ee34588ee38048d74eeb4
    Dataset updated
    Mar 19, 2023
    License

    Licence Ouverte / Open Licence 1.0: https://www.etalab.gouv.fr/wp-content/uploads/2014/05/Open_Licence.pdf
    License information was derived automatically

    Description

    The simulators developed by DILA use the G6K simulation engine, whose sources can be accessed via the link: https://github.com/eureka2/G6K. In addition to the sources, the following are available via this link: the definition (steps, rules, ...), in XSD format, valid for all simulators developed with the G6K engine; and the procedure for making the engine available and installing it. The simulator data made available by DILA consist of: an XML file defining the simulator; a data schema in JSON format; and data in JSON format. Resources: XML 988492b9-71ab-4ace-9550-17e89b4e808d (XML download): Definition of the inheritance tax simulator; PDF f041e664-174e-4940-9d85-0101104c8c2c (PDF download): Presentation of the Inheritance Tax Simulator.

  3.

    Orbitally forced simulated surface air temperature of the last 1,000,000...

    • b2find.dkrz.de
    Updated Mar 25, 2020
    + more versions
    Cite
    (2020). Orbitally forced simulated surface air temperature of the last 1,000,000 years - Dataset - B2FIND [Dataset]. https://b2find.dkrz.de/dataset/b4f1928b-f1cd-56cc-a0b3-6186bdc361ce
    Dataset updated
    Mar 25, 2020
    Description

    We provide a climatic data set of the history of annual mean surface air temperature (SAT) over the last 1,000,000 years. The SAT (i.e. the temperature at 2 m height above the ground) has been simulated with the Community Earth System Models (COSMOS, consisting of ECHAM5, JSBACH, and MPIOM), forced with the solution for the elements of the Earth's orbit around the sun (eccentricity, obliquity, longitude of the perihelion) by Laskar et al. (2004) for the last 1,000,000 years. To make the climate simulation practically feasible, the orbital forcing has been accelerated by a factor of 100, following the method described by Lorenz and Lohmann (2004). A detailed description of the application of COSMOS in a paleoclimate framework can be found, for example, in the publication by Stepanek and Lohmann (2012). Description of the time axis: time in the data set runs backward. This means that year 0000 in the data set refers to the year 1,000,000 before present; year 0001 refers to year 999,900 before present; and so on, until year 10,000, which refers to the year 0 before present. "Present" refers to the astronomical standard epoch J2000.
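    The time-axis convention described above reduces to a simple linear mapping, since with the 100x orbital acceleration each simulated year spans 100 calendar years, running backward. A small helper makes the conversion explicit:

```python
def years_before_present(sim_year: int) -> int:
    """Map a simulation year (0..10,000) to years before present (J2000)."""
    if not 0 <= sim_year <= 10_000:
        raise ValueError("simulation years run from 0 to 10,000")
    # Year 0 is 1,000,000 BP; each accelerated year covers 100 calendar years.
    return 1_000_000 - 100 * sim_year

print(years_before_present(0))       # 1000000
print(years_before_present(1))       # 999900
print(years_before_present(10_000))  # 0
```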

  4.

    Dataset for "Skyrmion states in thin confined polygonal nanostructures"

    • eprints.soton.ac.uk
    • data.niaid.nih.gov
    • +1more
    Updated Nov 27, 2017
    Cite
    Dataset for "Skyrmion states in thin confined polygonal nanostructures" [Dataset]. https://eprints.soton.ac.uk/473355/
    Dataset updated
    Nov 27, 2017
    Dataset provided by
    Zenodo
    Authors
    Hovorka, Ondrej; Albert, Maximilian; Wang, Weiwei; Kluyver, Thomas; Carey, Rebecca; Fangohr, Hans; Pepper, Ryan Alexander; Vousden, Mark; Beg, Marijan; Cortes-Ortuno, David; Bisotti, Marc-Antonio
    Description

    This dataset provides micromagnetic simulation data collected from a series of computational experiments on the effects of polygonal system shape on the energy of different magnetic states in FeGe. The data here form the results of the study 'Skyrmion states in thin confined polygonal nanostructures.' The dataset is split into several directories.

    Data

    square-samples and triangle-samples: These directories contain final-state 'relaxed' magnetization fields for square and triangle samples respectively. The files within are organised into directories such that a sample of side length d = 40nm which was subjected to an applied field of 500mT is labelled d40b500. Within each directory are twelve VTK unstructured grid format files (file extension '.vtu'). These can be viewed in a variety of programmes; at the time of writing we recommend either ParaView or MayaVi. The twelve files correspond to twelve simulations per sample, one for each of the twelve initial states from which the sample was relaxed. These states are described in the paper which this dataset accompanies; the labels are '0', '1', '2', '3', '4', 'h', 'u', 'r1', 'r2', 'r3', 'h2', 'h3', where 0-4 are incomplete to overcomplete skyrmions, h, h2 and h3 are helical states with different periodicities, r1-r3 are different random states, and u is the uniform magnetisation. The vtu files are labelled according to the parameters used in the simulation. For example, a file labelled '160_10_3_0_u_wd000000.vtu' encodes that: the simulation was of a sample with side length 160nm; the sample thickness was 10nm; the maximum length of an edge in the finite element mesh was 3nm; the system was relaxed from the 'u' (uniform) state; and 'wd' encodes that the simulation was performed with a full demagnetizing calculation.

    square-npys and triangle-npys: These directories contain computed information about each of the final states stored in square-samples and triangle-samples. This information is stored in NumPy npz files, and can be read in Python straightforwardly using the function numpy.load. Within each npz file there are 8 arrays, each with 12 elements: 'E' (total energy of the relaxed state), 'E_exchange' (exchange energy), 'E_demag' (demagnetizing energy), 'E_dmi' (Dzyaloshinskii-Moriya energy), 'E_zeeman' (Zeeman energy), 'S' (calculated skyrmion number), 'S_abs' (calculated absolute skyrmion number; see the paper for calculation details), and 'm_av' (computed normalised average magnetisation in the x, y, and z directions). The twelve elements correspond to the aforementioned twelve initial states, in the order given above.

    square-classified and triangle-classified: These directories contain a labelled dataset giving details of the final state in each simulation. The files are stored as plain text and are labelled with the following structure (the meanings of which are defined in the paper which this dataset accompanies): iSk, an incomplete skyrmion; Sk, or a number n followed by Sk, n skyrmions in the state; He, a helical state; Target, a target state. The files contain the names of png files which are generated from the vtu files, in the format 'd_165b_350_2.png'. This example, if found in the 'Sk.txt' file, means that the sample which was 165nm in side length and which was relaxed under a field of 350mT from initial state 2 was found at equilibrium in a skyrmion state.

    Figures

    square-pngs and triangle-pngs: These directories contain pngs generated from the vtu files. They are included for convenience as they take several hours to generate. Each directory contains three subdirectories: all-states, containing the simulation results from all samples, in the format 'd_165b_350_2.png' (the image of the 165nm side-length sample relaxed under a 350mT field from initial state 2); ground-state, containing the images which correspond to the lowest energy state found from all of the initial states, labelled as 'd_180b_50.png' (the lowest energy state found from all twelve simulations of the 180nm side-length sample under a 50mT field); and uniform-state, containing the images which correspond to the states relaxed only from the uniform state, labelled such that an image labelled 'd_55b_100.png' is the state found from relaxing a 55nm sample under a 100mT applied field.

    phase-diagrams: The generated phase diagrams which are found in the paper.

    scripts: This folder contains Python scripts which generate the png files mentioned above, and also the phase diagram figures for the paper this dataset accompanies. The scripts are labelled descriptively with what they do, e.g. 'triangle-generate-png-all-states.py' contains the script which loads vtu files and generates the png files. The exception is 'render.py', which provides functions used across multiple scripts. These scripts can be modified; for example, the function 'export_vector_field' has many options which can be adjusted to, for example, plot different components of the magnetization.

    In order to run the scripts reproducibly, we have provided a Makefile in the root directory which builds each component. To reproduce the figures yourself, on a Linux system, ParaView must be installed; the Makefile has been tested on Ubuntu 16.04 with ParaView 5.0.1. In addition, a number of Python dependencies must also be installed: scipy >= 0.19.1, numpy >= 1.11.0, matplotlib == 1.5.2, pillow >= 3.1.2. We have included a requirements.txt file which specifies these dependencies; they can be installed by running 'pip install -r requirements.txt' from the directory. Once all dependencies are installed, simply run the command 'make' from the shell to build the Docker image and generate the figures. Note the scripts will take a long time to run: at the time of writing, on the order of several hours on a high-specification desktop machine. For convenience, we have therefore included the generated figures within the repository (as noted above). It should be noted that for the versions used in the paper, adjustments were made after the generation of the figures (e.g. to add images of states within the metastability figure, and to overlay boundaries in the phase diagrams). If you want to reproduce only the phase diagrams, and not the pngs, the command 'make phase-diagrams' will do so. This is the smallest part of the figure reproduction, and takes around 5 minutes on a high-specification desktop.
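    As the description notes, the npz files can be read with numpy.load. The sketch below builds a synthetic file with the documented 8-array layout and picks out the lowest-energy relaxed state; the file name and values are invented, and only the array names and the twelve-state ordering follow the description above:

```python
import numpy as np

# The twelve initial-state labels described above (order as given in the text).
STATES = ['0', '1', '2', '3', '4', 'h', 'u', 'r1', 'r2', 'r3', 'h2', 'h3']

def lowest_energy_state(npz_path):
    """Return (label, energy) of the lowest-energy relaxed state in one npz file."""
    data = np.load(npz_path)
    energies = data['E']          # total energy, one entry per initial state
    idx = int(np.argmin(energies))
    return STATES[idx], float(energies[idx])

# Synthetic stand-in file (a real file would live in e.g. square-npys/):
rng = np.random.default_rng(0)
np.savez('demo.npz',
         E=rng.normal(size=12), E_exchange=rng.normal(size=12),
         E_demag=rng.normal(size=12), E_dmi=rng.normal(size=12),
         E_zeeman=rng.normal(size=12), S=rng.normal(size=12),
         S_abs=np.abs(rng.normal(size=12)), m_av=rng.normal(size=(12, 3)))

label, energy = lowest_energy_state('demo.npz')
print(label, energy)
```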

  5.

    Gaussian Process kernels comparison - Datasets and python code

    • figshare.unimelb.edu.au
    bin
    Updated Jun 24, 2024
    Cite
    Jiabo Lu; Niels Fraehr; QJ Wang; Xiaohua Xiang; Xiaoling Wu (2024). Gaussian Process kernels comparison - Datasets and python code [Dataset]. http://doi.org/10.26188/26087719.v1
    Available download formats: bin
    Dataset updated
    Jun 24, 2024
    Dataset provided by
    The University of Melbourne
    Authors
    Jiabo Lu; Niels Fraehr; QJ Wang; Xiaohua Xiang; Xiaoling Wu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview: Data used for the publication "Comparing Gaussian Process Kernels Used in LSG Models for Flood Inundation Predictions". We investigate the impact of 13 Gaussian Process (GP) kernels, consisting of five single kernels and eight composite kernels, on the prediction accuracy and computational efficiency of the Low-fidelity, Spatial analysis, and Gaussian process learning (LSG) modelling approach. The GP kernels are compared for three distinct case studies, namely Carlisle (United Kingdom), the Chowilla floodplain (Australia), and the Burnett River (Australia). The high- and low-fidelity model simulation results are obtained from the data repository: Fraehr, N. (2024, January 19). Surrogate flood model comparison - Datasets and python code (Version 1). The University of Melbourne. https://doi.org/10.26188/24312658.v1.

    Dataset structure: The dataset is structured in 5 folders: Carlisle, Chowilla, BurnettRV, Comparison_results, and Python_data. The first three folders contain simulation data and analysis code. The "Comparison_results" folder contains plotting code, figures and tables for the comparison results. The "Python_data" folder contains the LSG model functions and the Python environment requirements.

    Carlisle, Chowilla, and BurnettRV: These folders contain high- and low-fidelity hydrodynamic modelling data for training and validation for each individual case study, as well as case-specific Python scripts for training and running the LSG model with different GP kernels. There are only small differences between the folders, depending on the hydrodynamic model simulation results and EOF analysis results. Each case study folder contains:
    - Geometry_data: DEM files; .npz files containing the high-fidelity model's grid (XYZ coordinates) and areas (the same data are available for the low-fidelity model used in the LSG model); .shp files indicating the location of boundaries and main flow paths.
    - XXX_modeldata: folder storing trained model data for each XXX-kernel LSG model. For example, EXP_modeldata stores the trained LSG model using the exponential GP kernel. ME3LIN means ME3 + LIN; ME3mLIN means ME3 x LIN. EXPLow, EXPMid, EXPHigh, and EXPFULL mean the inducing-point percentage for the sparse GP is 5%, 15%, 35%, and 100%, respectively.
    - HD_model_data: high-fidelity simulation results for all flood events of that case study; low-fidelity simulation results for all flood events; all boundary input conditions.
    - HF_EOF_analysis: data used in the EOF analysis for the LSG model.
    - Results_data: results of evaluating the LSG models with the different GP kernel candidates.
    - Train_test_split_data: the train-test-validation data split, which is the same for all LSG models with different GP kernel candidates; the specific split for each cross-validation fold is stored in this folder.
    - YYY_event_summary.csv, YYY_Extrap_event_summary.csv: files containing an overview of all events, and which events are connected between the low- and high-fidelity models, for each YYY case study.
    - EOF_analysis_HFdata_preprocessing.py, EOF_analysis_HFdata.py: preprocessing before the EOF analysis, and the EOF analysis of the high-fidelity data.
    - Evaluation.py, Evaluation_extrap.py: scripts for evaluating the LSG model for that case study and saving the results for each cross-validation fold.
    - train_test_split.py: script for splitting the flood datasets for each cross-validation fold, so that all LSG models with different GP kernel candidates train on the same data.
    - XXX_training.py: script for training each LSG model using the XXX GP kernel (naming as for XXX_modeldata above).
    - XXX_training.bat: batch scripts for training all LSG models using the different GP kernel candidates.

    Comparison_results: Files used to compare the LSG models using the different GP kernel candidates and to generate the figures in the paper "Comparing Gaussian Process Kernels Used in LSG Models for Flood Inundation Predictions". The figures are also included.

    Python_data: Folder containing Python scripts with utility functions for setting up, training, and running the LSG models, as well as for evaluating them.

    Python environment: This folder also contains two Python environment files with all Python package versions and dependencies. You can install the CPU or the GPU version of the environment; the GPU version can use the GPU to speed up GPflow training, and installs the CUDA and cuDNN packages. You can choose to install the environment online or offline. Offline installation reduces dependency issues, but requires the same Windows 10 operating system as used by the authors.

    Online installation: LSG_CPU_environment.yml is the Python environment for running the LSG models on the CPU; LSG_GPU_environment.yml is the environment for running them on the GPU, mainly to speed up GPflow training (it installs the CUDA and cuDNN packages). In the directory where the .yml file is located, use the console to enter one of the following commands:
    conda env create -f LSG_CPU_environment.yml -n myenv_name
    conda env create -f LSG_GPU_environment.yml -n myenv_name

    Offline installation: If you also use Windows 10, you can directly unzip the environments packed by conda-pack. LSG_CPU.tar.gz is a zip file containing all packages in the CPU-only virtual environment; LSG_GPU.tar.gz contains all packages in the GPU-accelerated virtual environment. On Windows, create a new LSG_CPU or LSG_GPU folder in the Anaconda environment folder and extract the packaged archive into that folder:
    tar -xzvf LSG_CPU.tar.gz -C ./LSG_CPU
    tar -xzvf LSG_GPU.tar.gz -C ./LSG_GPU
    Then access the environment path (cd ./LSG_GPU), activate the environment (.\Scripts\activate.bat), remove prefixes from the activation environment (.\Scripts\conda-unpack.exe), and exit the environment (.\Scripts\deactivate.bat).

    LSG_mods_and_func: Python scripts for using the LSG model.
    Evaluation_metrics.py: Metrics used to evaluate the prediction accuracy and computational efficiency of the LSG models.
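    For illustration of the kernel composition named above (e.g. ME3LIN = ME3 + LIN, ME3mLIN = ME3 x LIN), here is a minimal numpy sketch; it assumes 'ME3' denotes a Matern 3/2 kernel and 'LIN' a linear kernel, and it does not reproduce the authors' GPflow implementation:

```python
import numpy as np

def matern32(x1, x2, lengthscale=1.0, variance=1.0):
    """Matern 3/2 covariance between two sets of 1-D inputs."""
    r = np.abs(x1[:, None] - x2[None, :]) / lengthscale
    return variance * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def linear(x1, x2, variance=1.0):
    """Linear (dot-product) covariance."""
    return variance * x1[:, None] * x2[None, :]

x = np.linspace(0.0, 1.0, 5)

# Composite kernels built from the single kernels:
K_sum = matern32(x, x) + linear(x, x)   # "ME3LIN"  = ME3 + LIN
K_prod = matern32(x, x) * linear(x, x)  # "ME3mLIN" = ME3 x LIN
```

    Sums and products of valid covariance functions are themselves valid covariance functions, which is why composite kernels can be compared on the same footing as single ones.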

  6.

    HUN AWRA-LR Model v01

    • data.gov.au
    • researchdata.edu.au
    • +2more
    Updated Aug 8, 2023
    Cite
    Bioregional Assessment Program (2023). HUN AWRA-LR Model v01 [Dataset]. https://data.gov.au/dataset/ds-dga-f00fa621-cfea-4ecd-b14a-e1101723d128/details
    Dataset updated
    Aug 8, 2023
    Dataset provided by
    Bioregional Assessment Program
    Description

    Abstract

    The dataset was derived by the Bioregional Assessment Programme from multiple source datasets. The source datasets are identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement. This metadata contains data for two models: AWRA-L and AWRA-R. In the Macquarie-Tuggerah-Lake (MTL) coastal subregion, only AWRA-L modelling was conducted; in the Hunter subregion both models were run, and AWRA-L flow outputs were provided as model inputs for AWRA-R.

    AWRA-L: The metadata within the dataset contains the workflow, processes, and input and output data. The workflow pptx file under the top folder provides a top-level summary of the modelling framework in three slides: the first explains how to generate the global definition file; the second outlines the calibration and simulation for the AWRA-L model run; the third shows AWRA-L model post-processing for obtaining streamflow under baseline and coal resource development conditions. The executable model framework is under the Application subfolder. The other subfolders (model calibration, model simulation, post processing) contain the files used for model calibration, simulation and post-processing, respectively. Documentation about the implementation of AWRA-L in the Hunter bioregion is provided in BA HUN 2.6.1.3 and 2.6.1.4 products.

    AWRA-R: The metadata within the dataset contains the workflow, processes, input and output data, and instructions to implement the Hunter AWRA-R model for model calibration or simulation. Each sub-folder in the associated data has a readme file indicating the folder contents and providing general instructions about the workflow performed. Detailed documentation of the AWRA-R model is provided at: https://publications.csiro.au/rpr/download?pid=csiro:EP154523&dsid=DS2. Documentation about the implementation of AWRA-R in the Hunter bioregion is provided in BA HUN 2.6.1.3 and 2.6.1.4 products.

    Purpose: BA surface water modelling in the Hunter bioregion.

    Dataset History

    There are two sections: the first describes AWRA-L, the second AWRA-R.

    Section 1 - AWRA-L
    The directories within contain the input and output data of the Hunter AWRA-L model for model calibration, simulation and post-processing. The calibration folder contains the input and output subfolders used for two model calibration schemes: lowflow and normal. The lowflow calibration puts more weight on median and low streamflow; the normal calibration puts more weight on high streamflow. The simulation folder contains only one replicate of model input and output as an example. The post-processing folder contains three subfolders (inputs, outputs and scripts) used for generating streamflow under the baseline and coal resource development conditions; it covers the two subregions (MTL and HUN). In the MTL coastal subregion the AWRA-L post-processing results were final outputs, while in the HUN subregion AWRA-L flow outputs were model inputs for AWRA-R (details below). Input and output files are daily data covering the period 1953 to 2102, with the first 30 years (1953-1982) used for model spin-up. Documentation about the implementation of AWRA-L in the Hunter bioregion is provided in BA HUN 2.6.1.3 and 2.6.1.4 products.

    Data details are below.
    Model calibrations
    - Climate forcings are under '...AWRAL_Metadata\model calibration\inputs\Climate'
    - Lowflow calibration data, including catchment location, global definition mapping, objective definition and optimiser definition, are under '...AWRAL_Metadata\model calibration\inputs\lowflow'
    - Normal (high-flow) calibration data, including catchment location, global definition mapping, objective definition and optimiser definition, are under '...AWRAL_Metadata\model calibration\inputs\normal'
    - Observed streamflow data used for model calibrations are under '...AWRAL_Metadata\model calibration\inputs\Streamflow'
    Model simulations
    - Climate forcings are under '...AWRAL_Metadata\model simulation\inputs\Climate'
    - The global definition file used in csv output mode is under '...AWRAL_Metadata\model simulation\inputs\csv_Model_1'
    - The global definition file used in netcdf output mode is under '...AWRAL_Metadata\model simulation\inputs\Netcdf_Model_1'
    - Output files in csv output mode contain Dd, dgw, E0, Qg, Qtot, Rain and Sg outputs, which are used as AWRA-R model input, under '...AWRAL_Metadata\model simulation\outputs\csv_Model_1'
    - Output files in netcdf output mode contain Qg and Qtot outputs, which are used for AWRA-L post-processing, under '...AWRAL_Metadata\model simulation\outputs\Netcdf_Model_1'
    Post-processing
    - Input data include AWRA-L streamflow, groundwater baseflow input and mine footprint data, stored at '...AWRAL_Metadata\post processing\Inputs'
    - Output data include streamflow outputs under CRDP and baseline conditions for the HUN and MTL subregions, stored at '...AWRAL_Metadata\post processing\Outputs'
    - Scripts used for post-processing AWRA-L streamflow and groundwater baseflow are under '...AWRAL_Metadata\model simulation\post processing\Scripts'

    Section 2 - AWRA-R
    The directories within contain the input data and outputs of the Hunter AWRA-R model for model calibration or simulation. The folders where calibration data are stored are used as an example; simulation uses mirror files of these data, albeit with longer time series depending on the simulation period. Detailed documentation of the AWRA-R model is provided at: https://publications.csiro.au/rpr/download?pid=csiro:EP154523&dsid=DS2. Documentation about the implementation of AWRA-R in the Hunter bioregion is provided in BA HUN 2.6.1.3 and 2.6.1.4 products. Additional data needed to generate some of the inputs needed to implement AWRA-R are detailed in the corresponding metadata statements, as noted below.

    Input data needed:
    - Gauge/node topological information in '...\model calibration\HUN4_low\gis\sites\AWRARv5.00_reaches.csv'
    - Look-up table for soil thickness in '...\model calibration\HUN4_low\gis\ASRIS_soil_properties\HUN_AWRAR_ASRIS_soil_thickness_v5.00.csv' (check metadata statement)
    - Look-up tables of AWRA-LG groundwater parameters in '...\model calibration\HUN4_low\gis\AWRA-LG_gw_parameters'
    - Look-up table of AWRA-LG catchment grid cell contribution in '...model calibration\HUN4_low\gis\catchment-boundary\AWRA-R_catchment_x_AWRA-L_weight.csv' (check metadata statement)
    - Look-up tables of link lengths for main river, tributaries and distributaries within a reach in '...\model calibration\HUN4_low\gis\rivers' (check metadata statement)
    - Time series data of AWRA-LG outputs: evaporation, rainfall, runoff and depth to groundwater
    - Gridded data of AWRA-LG groundwater parameters; refer to the explanation in '...\model calibration\HUN4_low\rawdata\AWRA_LG_output\gw_parameters\README.txt'
    - Time series of observed or simulated reservoir level, volume and surface area for the reservoirs used in the simulation (Glenbawn Dam and Glennies Creek Dam), located in '...\model calibration\HUN4_low\rawdata\reservoirs'
    - Gauge station cross sections in '...\model calibration\HUN4_low\rawdata\Site_Station_Sections' (check metadata statement)
    - Daily streamflow and level time series in '...\model calibration\HUN4_low\rawdata\streamflow_and_level_all_processed'
    - Irrigation input, configuration and parameter files in '...\AWRAR_Metadata\model calibration\HUN4_low\inputs\HUN\irrigation'. These come from the separate calibration of the AWRA-R irrigation module in '...\irrigation calibration'; refer to the explanation in the readme.txt file therein
    - Dam simulation script '\AWRAR_Metadata\dam model calibration simulation\scripts\Hunter_dam_run_2.R' and configuration file '\AWRAR_Metadata\dam model calibration simulation\scripts\Hunter_dam_config_2.csv'. The config file comes from a separate calibration of the AWRA-R dam module in '\AWRAR_Metadata\dam model calibration simulation'; refer to the explanation in the readme.txt file therein

    Relevant outputs include:
    - AWRA-R time series of stores and fluxes in river reaches ('...\AWRAR_Metadata\model calibration\HUN4_low\outputs\jointcalibration\v01\HUN\simulations'), including simulated streamflow in files denoted XXXXXX_full_period_states_nonrouting.csv, where XXXXXX denotes the gauge or node ID
    - AWRA-R time series of stores and fluxes for irrigation/mining, in the same directory as above, in files XXXXXX_irrigation_states.csv
    - AWRA-R calibration/validation goodness-of-fit metrics ('...\AWRAR_Metadata\model calibration\HUN4_low\outputs\jointcalibration\v01\HUN\postprocessing') in files calval_results_XXXXXX_v5.00.csv

    Dataset Citation: Bioregional Assessment Programme (XXXX) HUN AWRA-LR Model v01. Bioregional Assessment Derived Dataset. Viewed 13 March 2019, http://data.bioregionalassessments.gov.au/dataset/670de516-30c5-4724-bd76-8ff4a42ca7a5.

    Dataset Ancestors
    Derived From Hunter River Salinity Scheme Discharge NSW EPA 2006-2012
    Derived From River Styles Spatial Layer for New South Wales
    Derived From HUN AWRA-L simulation nodes_v01
    Derived From HUN AWRA-R River Reaches Simulation v01
    Derived From HUN AWRA-R simulation nodes v01
    Derived From Bioregional Assessment areas v06
    Derived From GEODATA 9 second DEM and D8: Digital Elevation Model Version 3 and Flow Direction Grid 2008
    Derived From Bioregional Assessment areas v04
    Derived From Gippsland Project boundary
    Derived From Natural Resource Management (NRM) Regions 2010
    Derived From BA All Regions BILO cells in subregions shapefile
    Derived From Hunter Surface Water data v2 20140724
    Derived From Bioregional Assessment areas v01
    Derived

  7.

    Myoelectric & Simulated Prosthesis Data

    • search.dataone.org
    • borealisdata.ca
    Updated Dec 28, 2023
    Cite
    Williams, Heather (2023). Myoelectric & Simulated Prosthesis Data [Dataset]. https://search.dataone.org/view/sha256%3A7ab91dc8595a95afd52441da0dbf8db44ee477aa159f1107ef8080c9ad553607
    Dataset updated
    Dec 28, 2023
    Dataset provided by
    Borealis
    Authors
    Williams, Heather
    Description

    Myoelectric and simulated prosthesis user data for the Pasta Box Transfer Task. MATLAB structures containing means and standard deviations for each participant, as well as overall means and standard deviations, are included for each task.
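    MATLAB structures like those described can be read in Python with scipy.io.loadmat. The sketch below writes and reads a stand-in file whose field names are assumptions for illustration, not the dataset's actual variable names:

```python
import numpy as np
from scipy.io import savemat, loadmat

# Hypothetical structure mirroring the description: per-participant means and
# standard deviations, plus overall values (field names are invented here).
demo = {
    "participant_means": np.array([1.2, 0.9, 1.1]),
    "participant_stds": np.array([0.2, 0.15, 0.25]),
    "overall_mean": 1.07,
    "overall_std": 0.2,
}
savemat("pasta_box_demo.mat", demo)

data = loadmat("pasta_box_demo.mat")
means = data["participant_means"].ravel()  # loadmat returns 2-D arrays
overall = data["overall_mean"].item()
print(means, overall)
```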

  8.

    Chapter 3 of the Working Group I Contribution to the IPCC Sixth Assessment...

    • data-search.nerc.ac.uk
    Updated Nov 21, 2021
    Cite
    (2021). Chapter 3 of the Working Group I Contribution to the IPCC Sixth Assessment Report - data for Figure 3.39 (v20220614) [Dataset]. https://data-search.nerc.ac.uk/geonetwork/srv/search?keyword=IPCC
    Dataset updated
    Nov 21, 2021
    Description

    Data for Figure 3.39 from Chapter 3 of the Working Group I (WGI) Contribution to the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report (AR6). Figure 3.39 shows the observed and simulated Pacific Decadal Variability (PDV). --------------------------------------------------- How to cite this dataset --------------------------------------------------- When citing this dataset, please include both the data citation below (under 'Citable as') and the following citation for the report component from which the figure originates: Eyring, V., N.P. Gillett, K.M. Achuta Rao, R. Barimalala, M. Barreiro Parrillo, N. Bellouin, C. Cassou, P.J. Durack, Y. Kosaka, S. McGregor, S. Min, O. Morgenstern, and Y. Sun, 2021: Human Influence on the Climate System. In Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 423–552, doi:10.1017/9781009157896.005. --------------------------------------------------- Figure subpanels --------------------------------------------------- The figure has six panels. Files are not separated according to the panels. 
--------------------------------------------------- List of data provided --------------------------------------------------- pdv.obs.nc contains - Observed SST anomalies associated with the PDV pattern - Observed PDV index time series (unfiltered) - Observed PDV index time series (low-pass filtered) - Taylor statistics of the observed PDV patterns - Statistical significance of the observed SST anomalies associated with the PDV pattern pdv.hist.cmip6.nc contains - Simulated SST anomalies associated with the PDV pattern - Simulated PDV index time series (unfiltered) - Simulated PDV index time series (low-pass filtered) - Taylor statistics of the simulated PDV patterns based on CMIP6 historical simulations. pdv.hist.cmip5.nc contains - Simulated SST anomalies associated with the PDV pattern - Simulated PDV index time series (unfiltered) - Simulated PDV index time series (low-pass filtered) - Taylor statistics of the simulated PDV patterns based on CMIP5 historical simulations. pdv.piControl.cmip6.nc contains - Simulated SST anomalies associated with the PDV pattern - Simulated PDV index time series (unfiltered) - Simulated PDV index time series (low-pass filtered) - Taylor statistics of the simulated PDV patterns based on CMIP6 piControl simulations. pdv.piControl.cmip5.nc contains - Simulated SST anomalies associated with the PDV pattern - Simulated PDV index time series (unfiltered) - Simulated PDV index time series (low-pass filtered) - Taylor statistics of the simulated PDV patterns based on CMIP5 piControl simulations. 
--------------------------------------------------- Data provided in relation to figure --------------------------------------------------- Panel a: - ipo_pattern_obs_ref in pdv.obs.nc: shading - ipo_pattern_obs_signif (dataset = 1) in pdv.obs.nc: cross markers Panel b: - Multimodel ensemble mean of ipo_model_pattern in pdv.hist.cmip6.nc: shading, with their sign agreement for hatching Panel c: - tay_stats (stat = 0, 1) in pdv.obs.nc: black dots - tay_stats (stat = 0, 1) in pdv.hist.cmip6.nc: red crosses, and their multimodel ensemble mean for the red dot - tay_stats (stat = 0, 1) in pdv.hist.cmip5.nc: blue crosses, and their multimodel ensemble mean for the blue dot Panel d: - Lag-1 autocorrelation of tpi in pdv.obs.nc: black horizontal lines in left . ERSSTv5: dataset = 1 . HadISST: dataset = 2 . COBE-SST2: dataset = 3 - Multimodel ensemble mean and percentiles of lag-1 autocorrelation of tpi in pdv.piControl.cmip5.nc: blue open box-whisker in the left - Multimodel ensemble mean and percentiles of lag-1 autocorrelation of tpi in pdv.piControl.cmip6.nc: red open box-whisker in the left - Multimodel ensemble mean and percentiles of lag-1 autocorrelation of tpi in pdv.hist.cmip5.nc: blue filled box-whisker in the left - Multimodel ensemble mean and percentiles of lag-1 autocorrelation of tpi in pdv.hist.cmip6.nc: red filled box-whisker in the left - Lag-10 autocorrelation of tpi_lp in pdv.obs.nc: black horizontal lines in right . ERSSTv5: dataset = 1 . HadISST: dataset = 2 . 
COBE-SST2: dataset = 3 - Multimodel ensemble mean and percentiles of lag-10 autocorrelation of tpi_lp in pdv.piControl.cmip5.nc: blue open box-whisker in the right - Multimodel ensemble mean and percentiles of lag-10 autocorrelation of tpi_lp in pdv.piControl.cmip6.nc: red open box-whisker in the right - Multimodel ensemble mean and percentiles of lag-10 autocorrelation of tpi_lp in pdv.hist.cmip5.nc: blue filled box-whisker in the right - Multimodel ensemble mean and percentiles of lag-10 autocorrelation of tpi_lp in pdv.hist.cmip6.nc: red filled box-whisker in the right Panel e: - Standard deviation of tpi in pdv.obs.nc: black horizontal lines in left . ERSSTv5: dataset = 1 . HadISST: dataset = 2 . COBE-SST2: dataset = 3 - Multimodel ensemble mean and percentiles of standard deviation of tpi in pdv.piControl.cmip5.nc: blue open box-whisker in the left - Multimodel ensemble mean and percentiles of standard deviation of tpi in pdv.piControl.cmip6.nc: red open box-whisker in the left - Multimodel ensemble mean and percentiles of standard deviation of tpi in pdv.hist.cmip5.nc: blue filled box-whisker in the left - Multimodel ensemble mean and percentiles of standard deviation of tpi in pdv.hist.cmip6.nc: red filled box-whisker in the left - Standard deviation of tpi_lp in pdv.obs.nc: black horizontal lines in right . ERSSTv5: dataset = 1 . HadISST: dataset = 2 . COBE-SST2: dataset = 3 - Multimodel ensemble mean and percentiles of standard deviation of tpi_lp in pdv.piControl.cmip5.nc: blue open box-whisker in the right - Multimodel ensemble mean and percentiles of standard deviation of tpi_lp in pdv.piControl.cmip6.nc: red open box-whisker in the right - Multimodel ensemble mean and percentiles of standard deviation of tpi_lp in pdv.hist.cmip5.nc: blue filled box-whisker in the right - Multimodel ensemble mean and percentiles of standard deviation of tpi_lp in pdv.hist.cmip6.nc: red filled box-whisker in the right Panel f: - tpi_lp in pdv.obs.nc: black curves . 
ERSSTv5: dataset = 1 . HadISST: dataset = 2 . COBE-SST2: dataset = 3 - tpi_lp in pdv.hist.cmip6.nc: 5th-95th percentiles in red shading, multimodel ensemble mean and its 5-95% confidence interval for red curves - tpi_lp in pdv.hist.cmip5.nc: 5th-95th percentiles in blue shading, multimodel ensemble mean for blue curve CMIP5 is the fifth phase of the Coupled Model Intercomparison Project. CMIP6 is the sixth phase of the Coupled Model Intercomparison Project. SST stands for Sea Surface Temperature. --------------------------------------------------- Notes on reproducing the figure from the provided data --------------------------------------------------- Multimodel ensemble means and percentiles of historical simulations of CMIP5 and CMIP6 are calculated after weighting individual members with the inverse of the ensemble size of the same model. ensemble_assign in each file provides the model number to which each ensemble member belongs. This weighting does not apply to the sign agreement calculation. piControl simulations from CMIP5 and CMIP6 consist of a single member from each model, so the weighting is not applied. Multimodel ensemble means of the pattern correlation in Taylor statistics in (c) and the autocorrelation of the index in (d) are calculated via Fisher z-transformation and back transformation. --------------------------------------------------- Sources of additional information --------------------------------------------------- The following weblinks are provided in the Related Documents section of this catalogue record: - Link to the report component containing the figure (Chapter 3) - Link to the Supplementary Material for Chapter 3, which contains details on the input data used in Table 3.SM.1 - Link to the code for the figure, archived on Zenodo - Link to the figure on the IPCC AR6 website
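The reproduction notes above specify two computations: members are weighted by the inverse of the ensemble size of their model (via ensemble_assign), and multimodel means of pattern correlations and autocorrelations go through a Fisher z-transformation and back-transformation. A minimal sketch of both steps (function names are illustrative, not from the dataset):

```python
import numpy as np

def inverse_ensemble_weights(model_ids):
    """Weight each ensemble member by the inverse of the ensemble size
    of the model it belongs to (cf. ensemble_assign in each file)."""
    ids = np.asarray(model_ids)
    counts = {m: np.sum(ids == m) for m in np.unique(ids)}
    return np.array([1.0 / counts[m] for m in ids])

def fisher_mean(r, weights=None):
    """Multimodel mean of correlation-like statistics computed via
    Fisher z-transformation and back-transformation."""
    z = np.arctanh(np.asarray(r, dtype=float))  # Fisher z-transform
    return np.tanh(np.average(z, weights=weights))  # back-transform
```

For three members where the first two belong to the same model, `inverse_ensemble_weights([1, 1, 2])` yields `[0.5, 0.5, 1.0]`, so each model contributes equally to the Fisher-averaged statistic.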

  9.

    Chapter 3 of the Working Group I Contribution to the IPCC Sixth Assessment...

    • data-search.nerc.ac.uk
    • catalogue.ceda.ac.uk
    Updated Oct 4, 2023
    Cite
    (2023). Chapter 3 of the Working Group I Contribution to the IPCC Sixth Assessment Report - data for Figure 3.21 (v20220613) [Dataset]. https://data-search.nerc.ac.uk/geonetwork/srv/search?keyword=AR6
    Explore at:
    Dataset updated
    Oct 4, 2023
    Description

    Data for Figure 3.21 from Chapter 3 of the Working Group I (WGI) Contribution to the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report (AR6). Figure 3.21 shows the seasonal evolution of observed and simulated Arctic and Antarctic sea ice area (SIA) over 1979-2017. --------------------------------------------------- How to cite this dataset --------------------------------------------------- When citing this dataset, please include both the data citation below (under 'Citable as') and the following citation for the report component from which the figure originates: Eyring, V., N.P. Gillett, K.M. Achuta Rao, R. Barimalala, M. Barreiro Parrillo, N. Bellouin, C. Cassou, P.J. Durack, Y. Kosaka, S. McGregor, S. Min, O. Morgenstern, and Y. Sun, 2021: Human Influence on the Climate System. In Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 423–552, doi:10.1017/9781009157896.005. --------------------------------------------------- Figure subpanels --------------------------------------------------- The figure has several subplots, but they are unidentified, so the data is stored in the parent directory. 
    --------------------------------------------------- List of data provided --------------------------------------------------- This dataset contains Sea Ice Area anomalies over 1979-2017 relative to the 1979-2000 means from: - Observations (OSISAF, NASA Team, and Bootstrap) - Historical simulations from CMIP5 and CMIP6 multi-model means - Natural only simulations from CMIP5 and CMIP6 multi-model means --------------------------------------------------- Data provided in relation to figure --------------------------------------------------- - arctic files are used for the plots on the left side of the figure - antarctic files are used for the plots on the right side of the figure - _OBS_NASATeam files are used for the first row of the plot - _OBS_Bootstrap are used for the second row of the plot - _OBS_OSISAF are used for the third row of the plot - _ALL_CMIP5 are used in the fourth row of the plot - _ALL_CMIP6 are used in the fifth row of the plot - _NAT_CMIP5 are used in the sixth row of the plot - _NAT_CMIP6 are used in the seventh row of the plot --------------------------------------------------- Notes on reproducing the figure from the provided data --------------------------------------------------- The significance fields encode the grey dots and contain NaN or 1 values; they have to be overplotted onto the coloured squares. Grey dots indicate multi-model mean anomalies stronger than the inter-model spread (beyond ± 1 standard deviation). The coordinates of the data are indices; the global attribute 'comments' of each file maps these indices to months, since months form the y coordinate. 
--------------------------------------------------- Sources of additional information --------------------------------------------------- The following weblinks are provided in the Related Documents section of this catalogue record: - Link to the report component containing the figure (Chapter 3) - Link to the Supplementary Material for Chapter 3, which contains details on the input data used in Table 3.SM.1 - Link to the code for the figure, archived on Zenodo.
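The grey-dot criterion described in the notes (multi-model mean anomaly stronger than the ± 1 standard deviation inter-model spread, stored as 1 or NaN) can be sketched as follows; the function names are illustrative, not from the dataset:

```python
import numpy as np

def grey_dot_mask(anomalies):
    """True where the multi-model mean anomaly is stronger than the
    inter-model spread (beyond +/- 1 standard deviation across models).
    anomalies: array with models on axis 0, e.g. (model, year, month)."""
    mean = anomalies.mean(axis=0)
    spread = anomalies.std(axis=0, ddof=1)  # inter-model spread
    return np.abs(mean) > spread

def mask_to_file_values(mask):
    """Encode the mask as in the provided files: 1 where significant,
    NaN elsewhere (to be overplotted on the coloured squares)."""
    return np.where(mask, 1.0, np.nan)
```

With three models agreeing on an anomaly of the same sign and size, the spread is zero and the cell is marked; with anomalies of mixed sign the mean falls inside the spread and the cell is left blank.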

  10.

    Acoustic Emission dataset for impact localization: numerical and...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 9, 2024
    Cite
    Donati, Giacomo (2024). Acoustic Emission dataset for impact localization: numerical and experimental case studies [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10875041
    Explore at:
    Dataset updated
    Jun 9, 2024
    Dataset provided by
    Zonzini, Federica
    De Marchi, Luca
    Donati, Giacomo
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Acoustic Emission dataset for Defect Detection in Aluminum plates

    Simulated data

    File name: Simulation.zip

    Simulated AE signals based on a ray-tracing algorithm that takes into account reflections off the mechanical boundaries of the medium (reflections up to the 4th order); the medium is a 1x1x0.003 m square aluminum plate.

    Each signal has been created by simulating the propagation between a transmitter (Tx, index from 1 to 40) and a Receiver (Rx, index from 1 to 25), grouped by Tx position and saved as a .mat struct ('data') containing the following fields:

    data.Rx = 2 x 25 matrix whose first and second rows contain the x and y coordinates of the Rx positions

    data.Tx = 2 x 25 matrix whose first and second rows contain the x and y coordinates of the Tx position

    data.data = 8000 x 25 matrix containing the transmitted, propagated AE signal from Tx to Rx, organized by column. Each AE instance consists of 8000 samples acquired at a sampling frequency of 2 MHz (indicated in the file name), one column for each Rx given that Tx position.

    data.Label = 1x25 vector containing the Time-of-Arrival (ToA) labels associated with each of the 25 Tx-Rx pairs (per Tx position), computed by means of the Akaike Information Criterion.
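A minimal sketch of working with one such record, assuming only the field layout documented above; the file name in the commented usage is hypothetical, and the helper simply validates the documented shapes:

```python
import numpy as np

FS = 2_000_000                   # sampling frequency: 2 MHz (from the file names)
N_SAMPLES = 8000                 # samples per AE instance
T = np.arange(N_SAMPLES) / FS    # time axis of one record (~4 ms)

def check_run(rx_pos, tx_pos, signals, labels):
    """Validate one Tx-position record against the documented layout:
    positions with 2 rows (x, y), signals 8000 x 25 (one column per Rx),
    and 25 ToA labels. Returns the number of receivers."""
    assert rx_pos.shape == (2, 25)
    assert tx_pos.shape[0] == 2
    assert signals.shape == (N_SAMPLES, 25)
    assert np.asarray(labels).size == 25
    return signals.shape[1]

# Hypothetical usage, one .mat file per Tx position:
# from scipy.io import loadmat
# d = loadmat("Tx01.mat")["data"]
# check_run(d["Rx"][0, 0], d["Tx"][0, 0], d["data"][0, 0], d["Label"][0, 0])
```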

    Experimental data

    File name: Test_x0.xx_y0.yyFs2MHz_1x1x0.003_Al.csv

    Experimental data collected with custom AE instrumentation as described in Ref 1.

    Each file is a collection of 3 tests (three repetitions of the impact event at the same position), each containing three signals acquired simultaneously by three sensors located near three corners of a 1x1x0.003 m aluminum plate with the same geometrical and material characteristics as the simulated one. The sensors acquire 5000 samples at a rate of 2 MHz (indicated in the file name) with a pre-trigger window of 1500 samples. The specific coordinates of the sensors are:

    s1 [x = 0.05, y = 0.95] m

    s2 [x = 0.05, y = 0.05] m

    s3 [x = 0.95, y = 0.05] m

    There are 9 files associated with as many impact positions, indicated by the "x0.xx_y0.yy" entry in the file name, with 0.xx and 0.yy corresponding to the x and y coordinates, respectively. Excitation has been provided by means of a waveform generator driving a 3-cycle sinusoidal wave with a central frequency of 250 kHz. More details about the electronics and the full setup are provided in the same reference above.
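Given the sensor coordinates above, the impact-to-sensor distances that underpin localization can be computed directly; dividing by an assumed wave group velocity would give the expected arrival-time differences (the function name is illustrative):

```python
import math

# Sensor coordinates on the 1 x 1 x 0.003 m plate (from the notes above)
SENSORS = {"s1": (0.05, 0.95), "s2": (0.05, 0.05), "s3": (0.95, 0.05)}

def impact_distances(x, y):
    """Euclidean distance from an impact at (x, y) to each AE sensor."""
    return {name: math.hypot(x - sx, y - sy)
            for name, (sx, sy) in SENSORS.items()}
```

An impact at the plate centre (0.5, 0.5) is equidistant from all three sensors, at 0.45·√2 ≈ 0.636 m.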

    This research work has been carried out within the Intelligent Sensor Systems Lab@University of Bologna, Italy.

    For any questions or comments, please contact federica.zonzini@unibo.it

  11.

    Hunter AWRA Hydrological Response Variables (HRV)

    • data.gov.au
    • demo.dev.magda.io
    zip
    Updated Jun 28, 2022
    Cite
    Bioregional Assessment Program (2022). Hunter AWRA Hydrological Response Variables (HRV) [Dataset]. https://data.gov.au/data/dataset/a84b2431-24e3-4537-ae50-84f4e955ebdc
    Explore at:
    zip(1245038746)Available download formats
    Dataset updated
    Jun 28, 2022
    Dataset authored and provided by
    Bioregional Assessment Program
    License

    Attribution 2.5 (CC BY 2.5)https://creativecommons.org/licenses/by/2.5/
    License information was derived automatically

    Description

    Abstract

    The dataset was derived by the Bioregional Assessment Programme from multiple datasets. The source dataset is identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement.

    Hydrological Response Variables (HRVs) are the hydrological characteristics of the system that potentially change due to coal resource development. These data refer to the HRVs related to the AWRA-L and AWRA-R models for the Hunter subregion for the 65 simulation nodes (63 within the Hunter basin and 2 within the Macquarie-Tuggerah Lake basin). The nine hydrological response variables (AF, P99, FD, IQR, ZFD, P01, LFD, LFS, LLFS) were computed by the AWRA-L and AWRA-R models under both CRDP and baseline conditions; the ACRD impact is the difference between the CRDP and baseline results.

    Abbreviation meaning

    AF - the annual streamflow volume (GL/year)

    P01 - the daily streamflow rate at the first percentile (ML/day)

    IQR - the inter-quartile range in daily streamflow (ML/day). That is, the difference between the daily streamflow rate at the 75th percentile and at the 25th percentile.

    LFD - the number of low streamflow days per year. The threshold for low streamflow days is the 10th percentile from the simulated 90-year period (2013 to 2102)

    LFS - the number of low streamflow spells per year (perennial streams only). A spell is defined as a period of contiguous days of streamflow below the 10th percentile threshold

    LLFS - the length (days) of the longest low streamflow spell each year

    P99 - the daily streamflow rate at the 99th percentile (ML/day)

    FD - flood days, the number of days with streamflow greater than the 90th percentile from the simulated 90-year period (2013 to 2102)

    ZFD - Zero flow days
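A minimal sketch of how several of these HRVs could be computed from a daily streamflow series, following the definitions above; this is an illustration under stated assumptions (series in ML/day, 1 GL = 1000 ML), not the Bioregional Assessment code:

```python
import numpy as np
import pandas as pd

def hydro_response_variables(flow):
    """Compute a subset of the HRVs defined above from a daily
    streamflow series (pandas Series in ML/day with a DatetimeIndex)."""
    n_years = len(set(flow.index.year))
    p10 = flow.quantile(0.10)      # low-flow threshold over the whole period
    low = flow < p10
    # LFS: a spell is a run of contiguous low-flow days; count spell starts
    spell_starts = low & ~low.shift(1, fill_value=False)
    return {
        "AF":  flow.groupby(flow.index.year).sum().mean() / 1e3,  # GL/year
        "P01": flow.quantile(0.01),                               # ML/day
        "P99": flow.quantile(0.99),                               # ML/day
        "IQR": flow.quantile(0.75) - flow.quantile(0.25),         # ML/day
        "ZFD": (flow == 0).sum() / n_years,   # zero-flow days per year
        "LFD": low.sum() / n_years,           # low-flow days per year
        "LFS": spell_starts.sum() / n_years,  # low-flow spells per year
    }
```

The ACRD impact on each HRV would then be the difference between the value computed from CRDP streamflow and from baseline streamflow at the same node.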

    Purpose

    This is the dataset used for the Hunter 2.6.1 product to evaluate additional coal mine and coal resource development impacts on hydrological response variables at 65 simulation nodes.

    Dataset History

    The HUN AWRA model outputs were used to determine the impacts on the HRVs to produce these data. The nine HRVs (AF, P99, FD, IQR, ZFD, P01, LFD, LFS, LLFS) were computed under both CRDP and baseline conditions. The difference between CRDP and baseline is used for predicting ACRD impacts on hydrological response variables at the 65 simulation nodes.

    Dataset Citation

    Bioregional Assessment Programme (2017) Hunter AWRA Hydrological Response Variables (HRV). Bioregional Assessment Derived Dataset. Viewed 13 March 2019, http://data.bioregionalassessments.gov.au/dataset/a84b2431-24e3-4537-ae50-84f4e955ebdc.

    Dataset Ancestors

  12. ECMWF Reanalysis v5

    • ecmwf.int
    application/x-grib
    Updated Dec 31, 1969
    Cite
    European Centre for Medium-Range Weather Forecasts (1969). ECMWF Reanalysis v5 [Dataset]. https://www.ecmwf.int/en/forecasts/dataset/ecmwf-reanalysis-v5
    Explore at:
    application/x-grib(1 datasets)Available download formats
    Dataset updated
    Dec 31, 1969
    Dataset authored and provided by
    European Centre for Medium-Range Weather Forecasts (http://ecmwf.int/)
    License

    http://apps.ecmwf.int/datasets/licences/copernicus

    Description

    ERA5 provides hourly estimates of a large number of atmospheric, land and oceanic climate variables. The data cover the Earth on a 31 km grid and resolve the atmosphere using 137 levels from the surface up to a height of 80 km. ERA5 includes information about uncertainties for all variables at reduced spatial and temporal resolutions.

  13.

    GAL AWRA-L Model v01

    • data.gov.au
    • researchdata.edu.au
    • +1more
    zip
    Updated Jun 28, 2022
    Cite
    Bioregional Assessment Program (2022). GAL AWRA-L Model v01 [Dataset]. https://data.gov.au/data/dataset/groups/85fb8186-8455-4f58-8253-d065fa79f775
    Explore at:
    zip(4238459939)Available download formats
    Dataset updated
    Jun 28, 2022
    Dataset authored and provided by
    Bioregional Assessment Program
    License

    Attribution 2.5 (CC BY 2.5)https://creativecommons.org/licenses/by/2.5/
    License information was derived automatically

    Description

    Abstract

    The dataset was derived by the Bioregional Assessment Programme from multiple source datasets. The source datasets are identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement.

    This metadata is about AWRA-L model used in the Galilee surface water numerical modelling.

    The metadata contains the workflow, processes, input/output data and the scripts used to process the model outputs. The workflow pptx file under the top folder provides a top-level summary of the modelling framework in three slides: the first slide explains how to generate the global definition file; the second slide outlines the calibration and simulation for the AWRA-L model run; the third slide shows the AWRA-L post-processing used to derive streamflow under baseline and coal resource development conditions.

    Other subfolders, including model calibration, model simulation, post processing, contain the associated data used for model calibration, simulation and post processing, respectively.

    Documentation about the implementation of AWRA-L in the Galilee subregion is provided in BA GAL 2.6.1.3 and 2.6.1.4 products.

    Purpose

    BA surface water modelling in the Galilee subregion

    Dataset History

    The directories under this dataset contain the input and output data of the Galilee AWRA-L model for model calibration, simulation and post-processing.

    The calibration folder contains the input and output subfolders used for two model calibration schemes: lowflow and normal. The lowflow calibration puts more weight on median and low streamflow; the normal calibration puts more weight on high streamflow.

    The simulation folder contains only one replicate of model input and output as an example.

    The post-processing folder contains three subfolders: inputs, outputs and scripts used for generating streamflow under the baseline and coal mine resources development conditions.

    Input and output files are the daily data covering the period of 1953 to 2102, with the first 30 years (1953-1982) for model spin-up.

    Documentation about the implementation of AWRA-L in the Galilee subregion is provided in BA GAL 2.6.1.3 and 2.6.1.4 products.

    Data details are in below

    Model calibrations

    1. Climate forcings are under '... AWRAL_Metadata\model calibration\inputs\Climate\'

    2. Lowflow calibration data including catchment location, global definition mapping, objective definition and optimiser definition under '... AWRAL_Metadata\model calibration\inputs\lowflow\'

    3. Normal (high-flow) calibration data including catchment location, global definition mapping, objective definition and optimiser definition under '... AWRAL_Metadata\model calibration\inputs\normal\'

    4. Observed streamflow data used for model calibrations are under '... AWRAL_Metadata\model calibration\inputs\Streamflow\'

    Model simulations

    1. Climate forcings are under '... AWRAL_Metadata\model simulation\inputs\Climate\'

    2. Global definition file is under '... AWRAL_Metadata\model simulation\inputs\ AWRAModel_1\'

    3. Output files Qg and Qtot in NetCDF format are under '... AWRAL_Metadata\model simulation\outputs\ AWRAModel_1\'

    4. Output file in csv format for simulated flow at model nodes is under '... AWRAL_Metadata\model simulation\outputs\AWRAModel_1\'

    Post-processing

    1. Input data include AWRA-L streamflow, ground water baseflow input and mine footprint data, stored at '... AWRAL_Metadata\post processing\Inputs\'

    2. Output data include streamflow outputs under CRDP and baseline, stored at '... AWRAL_Metadata\post processing\Outputs\'

    3. Scripts used for post-processing AWRA-L streamflow and groundwater baseflow are under '... AWRAL_Metadata\model simulation\post processing\Scripts\'

    Dataset Citation

    Bioregional Assessment Programme (2016) GAL AWRA-L Model v01. Bioregional Assessment Derived Dataset. Viewed 12 December 2018, http://data.bioregionalassessments.gov.au/dataset/85fb8186-8455-4f58-8253-d065fa79f775.

    Dataset Ancestors

  14.

    Table3_Transcription start site signal profiling improves transposable...

    • figshare.com
    xlsx
    Updated Jun 13, 2023
    Cite
    Natalia Savytska; Peter Heutink; Vikas Bansal (2023). Table3_Transcription start site signal profiling improves transposable element RNA expression analysis at locus-level.XLSX [Dataset]. http://doi.org/10.3389/fgene.2022.1026847.s004
    Explore at:
    xlsxAvailable download formats
    Dataset updated
    Jun 13, 2023
    Dataset provided by
    Frontiers
    Authors
    Natalia Savytska; Peter Heutink; Vikas Bansal
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The transcriptional activity of Transposable Elements (TEs) has been involved in numerous pathological processes, including neurodegenerative diseases such as amyotrophic lateral sclerosis and frontotemporal lobar degeneration. The TE expression analysis from short-read sequencing technologies is, however, challenging due to the multitude of similar sequences derived from singular TEs subfamilies and the exaptation of TEs within longer coding or non-coding RNAs. Specialised tools have been developed to quantify the expression of TEs that either rely on probabilistic re-distribution of multimapper count fractions or allow for discarding multimappers altogether. Until now, the benchmarking across those tools was largely limited to aggregated expression estimates over whole TEs subfamilies. Here, we compared the performance of recently published tools (SQuIRE, TElocal, SalmonTE) with simplistic quantification strategies (featureCounts in unique, fraction and random modes) at the individual loci level. Using simulated datasets, we examined the false discovery rate and the primary driver of those false positive hits in the optimal quantification strategy. Our findings suggest a number of false discoveries that exceeds the total number of correctly recovered active loci for all the quantification strategies, including the best performing tool TElocal. As a remedy, filtering based on the minimum number of read counts or baseMean expression improves the F1 score and decreases the number of false positives. Finally, we demonstrate that additional profiling of Transcription Start Site mapping statistics (using a k-means clustering approach) significantly improves the performance of TElocal while reporting a reliable set of detected and differentially expressed TEs in human simulated RNA-seq data.
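The description reports that a minimum-read-count filter improves the F1 score and cuts false positives when calling active TE loci against a simulated ground truth. A small sketch of that evaluation; the locus names and the threshold are hypothetical, and the metric definitions are the standard precision/recall/F1:

```python
def locus_prf(called, truth):
    """Precision, recall and F1 for loci called active against the
    simulated ground truth of active loci."""
    called, truth = set(called), set(truth)
    tp = len(called & truth)
    precision = tp / len(called) if called else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def filter_min_count(counts, min_reads=5):
    """Keep only loci whose read count meets a minimum threshold."""
    return {locus for locus, n in counts.items() if n >= min_reads}
```

With a few low-count false-positive loci in the calls, applying the filter removes them and raises the F1 score, mirroring the remedy described above.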

  15.

    Experteninterviews zur Praxis des Simulatoreneinsatzes - 2001 - Dataset -...

    • b2find.dkrz.de
    Updated Oct 24, 2023
    Cite
    (2023). Experteninterviews zur Praxis des Simulatoreneinsatzes - 2001 - Dataset - B2FIND [Dataset]. https://b2find.dkrz.de/dataset/951d228a-b023-523a-a8a1-eb19cbd25a2c
    Explore at:
    Dataset updated
    Oct 24, 2023
    Description

    Since the end of the 1980s, so-called patient simulators have been used increasingly in anaesthesiology for training and research. The concrete design of simulator settings for teaching, learning, or research-oriented use has many degrees of freedom. Partly because of this complexity, many (causal) relationships in simulator use remain unclear. Until this project was carried out (2001-2003), there were no systematic, fundamental-research studies in anaesthesiology comparing the simulator with the simulated work area (operating room, intensive care unit, emergency room, etc.). This project was initiated to open up the field of simulator settings systematically for occupational and organizational psychology research. The aim was therefore to check whether an analytical concept developed in preliminary studies for describing anaesthesiological courses of action in the operating room (OR) is also suitable for analysing courses of action in the simulator, and whether the two settings can be compared on the basis of the collected data. The activity structure was to be described with particular attention to the handling of unexpected events. From the comparative analysis of both settings, the ecological validity of simulator settings was to be analysed by means of several data sources: observation and interviews. In addition to the fundamental-research comparison of the two settings, design recommendations for simulator settings were to be derived. For the observations, the observation system developed to describe anaesthesiological activity was used; it enabled differentiated observation of courses of action in the operating room and in the simulator during comparable laparoscopic operations.
    The comparison was based on analysing the structural composition of the course of action from seven partial actions (communication, observation, measures, documentation, additional activities, miscellaneous, anaesthetist leaves the OR), differentiated by anaesthetic phase (induction of the anaesthetic, middle phase, recovery from the anaesthetic), by setting or case type (surgical case, routine simulator case, simulator incident), and finally by the expertise of the persons involved (interns; assistant doctors). For the analysis of ecological validity, in addition to assessing "behavioral realism" on the basis of the observation data, the participants' experience of the situation was collected through semi-structured interviews, as was the current practice of simulator use through semi-structured interviews with simulator operators. Beyond comparing the views and actions of the participants with respect to the simulator setting, an ecologically valid design of training and research conditions requires maintaining the close relationship between the simulated work area and the simulation. In this project, this was done by means of socio-technical system analyses in a Swiss hospital; the results could be compared with those of earlier studies in a German hospital. The research project presented here therefore deals with ways of describing anaesthesiological courses of action, both in the operating room and in the simulator setting, and with the comparison of the simulator and the simulated work area against the background of work practice in the simulated work area. The analyses are used to develop design proposals for the simulator setting.
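The structural comparison described above amounts to tallying, per anaesthetic phase, what share of the total observed time each of the seven partial actions occupies. A minimal sketch of that tally (the sample records and durations are illustrative, not the study's actual coding data):

```python
from collections import defaultdict

# Each observation record: (anaesthetic phase, partial action, duration in s).
# Sample values are hypothetical, for illustration only.
records = [
    ("induction", "communication", 120),
    ("induction", "measures", 300),
    ("middle", "observation", 600),
    ("middle", "documentation", 180),
    ("middle", "communication", 60),
    ("emergence", "measures", 240),
]

def activity_structure(records):
    """Share of each partial action within each anaesthetic phase."""
    totals = defaultdict(float)     # phase -> total observed duration
    by_action = defaultdict(float)  # (phase, action) -> duration
    for phase, action, dur in records:
        totals[phase] += dur
        by_action[(phase, action)] += dur
    return {
        (phase, action): dur / totals[phase]
        for (phase, action), dur in by_action.items()
    }

shares = activity_structure(records)
# e.g. shares[("induction", "measures")] is 300 / (120 + 300)
```

Such per-phase shares can then be compared across settings (OR case vs. routine simulator case vs. simulator incident) and across expertise groups.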

  16. The posterior means for singular values for the BGGE and BGGEE models as a function of the number of bilinear terms (k = 1,2,…, 7)

    • plos.figshare.com
    xls
    Updated Jun 9, 2023
    Cite
    Luciano Antonio de Oliveira; Carlos Pereira da Silva; Alessandra Querino da Silva; Cristian Tiago Erazo Mendes; Joel Jorge Nuvunga; Joel Augusto Muniz; Júlio Sílvio de Sousa Bueno Filho; Marcio Balestre (2023). The posterior means for singular values for the BGGE and BGGEE models as a function of the number of bilinear terms (k = 1,2,…, 7). [Dataset]. http://doi.org/10.1371/journal.pone.0256882.t001
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 9, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Luciano Antonio de Oliveira; Carlos Pereira da Silva; Alessandra Querino da Silva; Cristian Tiago Erazo Mendes; Joel Jorge Nuvunga; Joel Augusto Muniz; Júlio Sílvio de Sousa Bueno Filho; Marcio Balestre
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The posterior means for singular values for the BGGE and BGGEE models as a function of the number of bilinear terms (k = 1,2,…, 7).
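As background to the table: in GGE-type bilinear models, the k-th bilinear term is associated with the k-th singular value of the (centered) genotype-by-environment matrix. A toy sketch recovering the largest singular value by power iteration on AᵀA (the matrix is illustrative, not data from this study):

```python
import math

def largest_singular_value(A, iters=200):
    """Largest singular value of matrix A via power iteration on A^T A."""
    n_cols = len(A[0])
    v = [1.0] * n_cols
    for _ in range(iters):
        # w = A^T (A v)
        Av = [sum(row[j] * v[j] for j in range(n_cols)) for row in A]
        w = [sum(A[i][j] * Av[i] for i in range(len(A)))
             for j in range(n_cols)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]  # converges to the top right singular vector
    Av = [sum(row[j] * v[j] for j in range(n_cols)) for row in A]
    return math.sqrt(sum(x * x for x in Av))

A = [[3.0, 0.0], [0.0, 4.0]]  # singular values are 4 and 3
sigma1 = largest_singular_value(A)
```

In the Bayesian setting of the table, each run yields a posterior draw of these singular values, and the tabulated numbers are their posterior means for k = 1 through 7.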

  17. GLDAS Noah Land Surface Model L4 Monthly 0.25 x 0.25 degree V001

    • data.wu.ac.at
    bin
    Updated May 19, 2015
    + more versions
    Cite
    National Aeronautics and Space Administration (2015). GLDAS Noah Land Surface Model L4 Monthly 0.25 x 0.25 degree V001 [Dataset]. https://data.wu.ac.at/schema/data_gov/OWI0ZDczOTItZmMxOS00YzQ1LWJlZmItZTI0N2RmYmU3OGVl
    Explore at:
    binAvailable download formats
    Dataset updated
    May 19, 2015
    Dataset provided by
    NASAhttp://nasa.gov/
    License

    U.S. Government Workshttps://www.usa.gov/government-works
    License information was derived automatically

    Area covered
    Description

    This data set contains a series of land surface parameters simulated from the Noah 2.7.1 model in the Global Land Data Assimilation System (GLDAS). The data are in 0.25 degree resolution and range from 2000 to the present. The temporal resolution is monthly.

    This simulation was forced by a combination of NOAA/GDAS atmospheric analysis fields, spatially and temporally disaggregated NOAA Climate Prediction Center Merged Analysis of Precipitation (CMAP) fields, and observation-based downward shortwave and longwave radiation fields derived using the method of the Air Force Weather Agency's AGRicultural METeorological modeling system (AGRMET). The simulation was initialized on 1 January 1979 using soil moisture and other state fields from a GLDAS/Noah model climatology for that day of the year.

    WGRIB or another GRIB reader is required to read the files. The data set applies a user-defined parameter table to indicate the contents and parameter number. The GRIBTAB file (http://disc.sci.gsfc.nasa.gov/hydrology/grib_tabs/gribtab_GLDAS_NOAH.txt) shows a list of parameters for this data set, along with their Product Definition Section (PDS) IDs and units.

    For more information, please see the README Document at ftp://hydro1.sci.gsfc.nasa.gov/data/s4pa/GLDAS_V1/README.GLDAS.pdf.

  18. GLDAS CLM Land Surface Model L4 3 hourly 1.0 x 1.0 degree Subsetted V001...

    • data.staging.idas-ds1.appdat.jsc.nasa.gov
    • gimi9.com
    • +3more
    Updated Feb 19, 2025
    + more versions
    Cite
    data.staging.idas-ds1.appdat.jsc.nasa.gov (2025). GLDAS CLM Land Surface Model L4 3 hourly 1.0 x 1.0 degree Subsetted V001 (GLDAS_CLM10SUBP_3H) at GES DISC [Dataset]. https://data.staging.idas-ds1.appdat.jsc.nasa.gov/dataset/gldas-clm-land-surface-model-l4-3-hourly-1-0-x-1-0-degree-subsetted-v001-gldas-clm10subp-3
    Explore at:
    Dataset updated
    Feb 19, 2025
    Dataset provided by
    NASAhttp://nasa.gov/
    Description

    With the upgraded Land Surface Models (LSMs) and updated forcing data sets, the GLDAS version 2.1 (GLDAS-2.1) production stream serves as a replacement for GLDAS-001. The entire GLDAS-001 collection from January 1979 through March 2020 was decommissioned on June 30, 2020 and removed from the GES DISC system. However, the replacement for GLDAS-001 monthly and 3-hourly 1.0 x 1.0 degree products from the CLM Land Surface Model is not yet available. Once the replacement data products become available, the DOIs of GLDAS-001 CLM data products will direct to the GLDAS-2.1 CLM data products.

    This data set contains a series of land surface parameters simulated from the Common Land Model (CLM) V2.0 model in the Global Land Data Assimilation System (GLDAS). The data are at 1.0 degree resolution and range from January 1979 to the present. The temporal resolution is 3-hourly.

    This simulation was forced by a combination of NOAA/GDAS atmospheric analysis fields, spatially and temporally disaggregated NOAA Climate Prediction Center Merged Analysis of Precipitation (CMAP) fields, and observation-based downward shortwave and longwave radiation fields derived using the method of the Air Force Weather Agency's AGRicultural METeorological modeling system (AGRMET). The simulation was initialized on 1 January 1979 using soil moisture and other state fields from a GLDAS/CLM model climatology for that day of the year.

    WGRIB or another GRIB reader is required to read the files. The data set applies a user-defined parameter table to indicate the contents and parameter number. The GRIBTAB file shows a list of parameters for this data set, along with their Product Definition Section (PDS) IDs and units. For more information, please see the README document.

  19. S1EME-02

    • wdc-climate.de
    Updated Dec 11, 2020
    + more versions
    Cite
    Winterstein, Franziska (2020). S1EME-02 [Dataset]. https://www.wdc-climate.de/ui/entry?acronym=DKRZ_LTA_782_ds00003
    Explore at:
    Dataset updated
    Dec 11, 2020
    Dataset provided by
    World Data Center for Climate (WDCC) at DKRZ
    DKRZ
    Authors
    Winterstein, Franziska
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2020 - Dec 31, 2039
    Area covered
    Description

    Sensitivity simulation from a study analysing strongly increased atmospheric methane concentrations, with the lower boundary condition for methane doubled relative to the 2010 condition (1.8 ppm).

    Information:

    • Timeslice representing 2010 conditions (years in dataset: 2020 - 2039)
    • All data in monthly means
    • Simulated with MESSy version 2.52, http://www.messy-interface.org

    Corresponding publication:

    Winterstein, F., Tanalski, F., Jöckel, P., Dameris, M., and Ponater,
    M.: Implication of strongly increased atmospheric methane
    concentrations for chemistry-climate connections, Atmos. Chem. Phys.,
    19, 7151-7163, https://doi.org/10.5194/acp-19-7151-2019, 2019.

    S1EME-02 is called S2 in the publication.

  20. Data from: Detecting and quantifying social transmission using network-based...

    • data.niaid.nih.gov
    • zenodo.org
    • +1more
    zip
    Updated Aug 21, 2020
    Cite
    Matthew Hasenjager; Ellouise Leadbeater; William Hoppitt (2020). Detecting and quantifying social transmission using network-based diffusion analysis [Dataset]. http://doi.org/10.5061/dryad.280gb5mnj
    Explore at:
    zipAvailable download formats
    Dataset updated
    Aug 21, 2020
    Dataset provided by
    Royal Holloway University of London
    Authors
    Matthew Hasenjager; Ellouise Leadbeater; William Hoppitt
    License

    CC0 1.0https://spdx.org/licenses/CC0-1.0.html

    Description
    1. Although social learning capabilities are taxonomically widespread, demonstrating that freely interacting animals (whether wild or captive) rely on social learning has proved remarkably challenging.

    2. Network-based diffusion analysis (NBDA) offers a means for detecting social learning using observational data on freely interacting groups. Its core assumption is that if a target behaviour is socially transmitted, then its spread should follow the connections in a social network that reflects social learning opportunities.

    3. Here, we provide a comprehensive guide for using NBDA. We first introduce its underlying mathematical framework and present the types of questions that NBDA can address. We then guide researchers through the process of: selecting an appropriate social network for their research question; determining which NBDA variant should be used; and incorporating other variables that may impact asocial and social learning. Finally, we discuss how to interpret an NBDA model’s output and provide practical recommendations for model selection.

    4. Throughout, we highlight extensions to the basic NBDA framework, including incorporation of dynamic networks to capture changes in social relationships during a diffusion and using a multi-network NBDA to estimate information flow across multiple types of social relationship.

    5. Alongside this information, we provide worked examples and tutorials demonstrating how to perform analyses using the newly developed NBDA package written in the R programming language.
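The core assumption in point 2 can be made concrete: under NBDA, a naive individual acquires the behaviour at a rate that increases with its network connections to already-informed individuals. A minimal sketch of this rate model (the baseline rate, social effect `s`, and the toy network are illustrative assumptions, not values or functions from the NBDA package):

```python
# Minimal NBDA-style acquisition-rate model: a naive individual i learns at
# rate lambda0 * (1 + s * sum_j a[i][j] * z[j]), where z[j] = 1 if j is
# already informed. Network and parameter values are illustrative.

def acquisition_rates(adjacency, informed, lambda0=0.1, s=2.0):
    """Per-individual acquisition rates for all naive individuals."""
    n = len(adjacency)
    rates = {}
    for i in range(n):
        if informed[i]:
            continue  # already performs the behaviour
        social = sum(adjacency[i][j] for j in range(n) if informed[j])
        rates[i] = lambda0 * (1.0 + s * social)
    return rates

# Toy undirected network: 0 and 1 are strongly connected; 2 is not
# connected to 0 at all.
adjacency = [
    [0.0, 1.0, 0.0],
    [1.0, 0.0, 0.5],
    [0.0, 0.5, 0.0],
]
informed = [True, False, False]  # individual 0 has the behaviour
rates = acquisition_rates(adjacency, informed)
# Individual 1 (connected to the informed 0) learns faster than individual 2.
```

Fitting an NBDA then asks whether the observed diffusion order is better explained by such network-weighted rates (s > 0) than by purely asocial learning (s = 0).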

    Methods We provide tutorials guiding users through several examples illustrating how to carry out network-based diffusion analysis using the NBDA package (https://github.com/whoppitt/NBDA). These tutorials make use of simulated data in the form of social networks and individual-level data (e.g. sex and age), which are provided here.

    In addition, the NBDA code and data necessary to replicate the results presented in Box 3 in the main text are also included. These data were collected as part of a larger study examining the relative importance of different social network types in guiding honeybees to novel foraging locations. Two cohorts of honeybees originating from a single colony were simultaneously trained to separate artificial sugar water feeders 100 m from the hive. During the trial, one of these feeders was left empty, while the other continued to provide sucrose. The order in which individuals trained to the former feeder discovered the latter feeder (which they had never previously visited) was recorded. At the same time, all interactions in the hive between honeybees visiting the active feeder and those that had been trained to the now-empty feeder (but had yet to discover the active feeder) were filmed and recorded. For each dance-following interaction, the number of waggle runs an individual followed for the active feeder was recorded. For trophallaxis and antennation, the duration of each interaction was recorded in seconds.

    From these interaction records, dynamic and static social networks were constructed. Static networks for each interaction type aggregated all interactions that occurred between each pair across the entire 2 hr trial. In contrast, dynamic networks updated throughout the trial. For each successful recruitment event, networks updated when that individual left the hive. In one instance, a recruit left the hive, but did not discover the feeder until after another recruit had discovered it; to prevent networks from "rewinding", the update time for the latter individual was used for the former. There were 16 recruitment events in total, meaning networks were updated a total of 15 times. The index numbers indicate the successive updates.
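The static networks described above are a straightforward aggregation of the interaction records over the whole trial. A minimal sketch of that aggregation (record format and values are hypothetical, not the study's data):

```python
from collections import defaultdict

# Interaction records: (bee_a, bee_b, duration in s). Values are hypothetical.
trophallaxis_records = [
    ("b1", "b2", 12.0),
    ("b2", "b1", 8.0),
    ("b1", "b3", 5.0),
    ("b1", "b2", 4.0),
]

def static_network(records):
    """Aggregate all interactions per unordered pair over the whole trial."""
    edges = defaultdict(float)
    for a, b, dur in records:
        pair = tuple(sorted((a, b)))  # undirected: (b1, b2) == (b2, b1)
        edges[pair] += dur
    return dict(edges)

net = static_network(trophallaxis_records)
# net[("b1", "b2")] sums all b1-b2 interactions: 12 + 8 + 4 seconds.
```

A dynamic network would instead be recomputed from only the records observed up to each update time (here, each time a recruit leaves the hive), so edge weights grow as the trial progresses.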
