This is digital research data corresponding to the manuscript: Reinhart, K.O., Vermeire, L.T. Precipitation Manipulation Experiments May Be Confounded by Water Source. J Soil Sci Plant Nutr (2023). https://doi.org/10.1007/s42729-023-01298-0. Files are for a 3x2x2 factorial field experiment and for the water quality data used to create Table 1. Data from the experiment were used for the statistical analysis and the generation of summary statistics for Figure 2.
Purpose: This study investigates the consequences of performing precipitation manipulation experiments with mineralized water in place of rainwater (i.e. demineralized water). Limited attention has been paid to the effects of water mineralization on plant and soil properties, even when the experiments are in a rainfed context.
Methods: We conducted a 6-yr experiment with a gradient in spring rainfall (70, 100, and 130% of ambient). We tested effects of rainfall treatments on plant biomass and six soil properties and interpreted the confounding effects of dissolved solids in irrigation water.
Results: Rainfall treatments affected all response variables. Sulfate was the most common dissolved solid in irrigation water and was 41 times more abundant in irrigated plots (i.e. 130% of ambient) than in other plots. Soils of irrigated plots also had elevated iron (16.5 vs 8.9 µg 10 cm⁻² 60 d⁻¹) and pH (7.0 vs 6.8). The rainfall gradient also had a nonlinear (hump-shaped) effect on plant-available phosphorus (P). Plant and microbial biomasses are often limited by and positively associated with available P, suggesting the predicted positive linear relationship between plant biomass and P was confounded by additions of mineralized water. In other words, the unexpected nonlinear relationship was likely driven by components of the mineralized irrigation water (i.e. calcium, iron) and/or shifts in soil pH that immobilized P.
Conclusions: Our results suggest robust precipitation manipulation experiments should either capture rainwater when possible (or use demineralized water) or consider the confounding effects of mineralized water on plant and soil properties.
Resources in this dataset:
Resource Title: Readme file - Data dictionary. File Name: README.txt. Resource Description: File contains the data dictionary to accompany the data files for the research study.
Resource Title: 3x2x2 factorial dataset.csv. File Name: 3x2x2 factorial dataset.csv. Resource Description: Dataset is for a 3x2x2 factorial field experiment (factors: rainfall variability, mowing seasons, mowing intensity) conducted in northern mixed-grass prairie vegetation in eastern Montana, USA. Data include activity of 5 plant-available nutrients, soil pH, and plant biomass metrics. Data from 2018.
Resource Title: water quality dataset.csv. File Name: water quality dataset.csv. Resource Description: Water properties (pH and common dissolved solids) of samples from the Yellowstone River collected near Miles City, Montana. Data extracted from Rinella MJ, Muscha JM, Reinhart KO, Petersen MK (2021) Water quality for livestock in northern Great Plains rangelands. Rangeland Ecol. Manage. 75: 29-34.
Experiment 1 means and statistics for age and baseline assessments, indicating that experimental conditions did not differ.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset is for the study of task decomposition effects in time estimation (the role of future boundaries and thought focus) and its supplementary materials. Previous research on the impact of task decomposition on time estimation often overlooked the role of time factors; for example, under the same decomposition, people subjectively set different time boundaries when facing difficult versus easy tasks. Taking the time factor into account should therefore refine and integrate the research conclusions on decomposition effects. On this basis, we studied the impact of task decomposition and future boundaries on time estimation. Experiment 1 used a 2 (task decomposition: decomposition/no decomposition) × 2 (future boundary: present/absent) between-subjects design and the prospective paradigm to measure participants' time estimates. Experiment 2 further manipulated the time range of the future boundary, using a 2 (task decomposition: decomposition/no decomposition) × 3 (future boundary range: longer/medium/shorter) between-subjects design, again measuring time estimates with the prospective paradigm. Building on Experiment 2, Experiment 3 verified the mechanism by which the time range of the future boundary influences time estimation under decomposition conditions: in a single-factor between-subjects design, a thought-focus scale measured participants' thought focus under longer and shorter boundary conditions. The above experiments and measurements yielded the following dataset.
Experiment 1 column labels: Task decomposition is a grouping variable: 0 = decomposition, 1 = no decomposition. Future boundary is a grouping variable: 0 = present, 1 = absent. Zsco01: standard score of the estimated total task time. A logarithm: logarithmic value of the estimated time for all tasks.
Experiment 2 column labels: Future boundary is a grouping variable: 7 = shorter, 8 = medium, 9 = longer. The remaining labels are the same as in Experiment 1.
Experiment 3 column labels: Zplan: standard score for the focus-on-plans score. Zbar: standard score for the focus-on-obstacles score. Future boundary is a grouping variable: 0 = shorter, 1 = longer.
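As a rough illustration of the two derived columns described above (a log transform and a standard score of the estimated total task time), here is a minimal pandas sketch; the raw column name "estimated_time" is a placeholder and not a column in the released tables.

```python
import numpy as np
import pandas as pd

# Placeholder raw column; the released tables already contain the derived values.
df = pd.DataFrame({"estimated_time": [120.0, 45.0, 300.0, 90.0]})

# "A logarithm": log of the estimated total task time.
df["log_time"] = np.log(df["estimated_time"])

# "Zsco01": standard (z) score of the estimated total task time.
df["Zsco01"] = (df["estimated_time"] - df["estimated_time"].mean()) / df["estimated_time"].std()
```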
The shorthand symbol for the stimulus starts with C/S/K (for contrast, skew, kurtosis) and is followed by −, −−, +, ++ (small magnitude and negative, large magnitude and negative, small magnitude and positive, large magnitude and positive); therefore, C+, C++, S−−, S−, S+, S++, K−−, K−, K+. Parameters in the table denoted in bold were varied in each of the three stimulus categories.
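As a small illustration (not part of the original materials), the shorthand can be unpacked programmatically; the field names below are arbitrary:

```python
def parse_stimulus(code: str) -> dict:
    """Unpack shorthand such as 'S−−' or 'K+' into statistic, sign, and magnitude."""
    stats = {"C": "contrast", "S": "skew", "K": "kurtosis"}
    return {
        "statistic": stats[code[0]],
        "sign": "negative" if code[1] in "-−" else "positive",  # accepts ASCII or Unicode minus
        "magnitude": "large" if len(code) == 3 else "small",    # doubled symbol = large magnitude
    }

print(parse_stimulus("S−−"))  # {'statistic': 'skew', 'sign': 'negative', 'magnitude': 'large'}
```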
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset represents the results of experiments with a method for evaluating semantic similarity between concepts in a taxonomy. The method is based on the information-theoretic approach and allows the senses of concepts in a given context to be considered. The relevance of senses is calculated in terms of semantic relatedness with the compared concepts. In a previous work [9], the adopted semantic relatedness method was the one described in [10], while in this work we also adopted those described in [11], [12], [13], [14], [15], and [16].
We applied our proposal by extending seven methods for computing semantic similarity in a taxonomy, selected from the literature. The methods considered in the experiment are referred to as R [2], W&P [3], L [4], J&C [5], P&S [6], A [7], and A&M [8].
The experiment was run on the well-known Miller and Charles benchmark dataset [1] for assessing semantic similarity.
The results are organized in seven folders, each containing the results for one of the above semantic relatedness methods. Each folder contains a set of files, each referring to one pair of the Miller and Charles dataset; for each pair of concepts, all 28 pairs are considered as possible contexts.
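For orientation, the classical information-content measures extended here (e.g., Resnik [2] and Lin [4]) are defined from the information content IC(c) = -log p(c) and the least common subsumer (LCS) of the compared concepts. A minimal sketch, not taken from the dataset's own materials:

```python
import math

def information_content(p: float) -> float:
    """IC(c) = -log p(c), with p(c) the corpus probability of concept c."""
    return -math.log(p)

def resnik_similarity(ic_lcs: float) -> float:
    """Resnik [2]: similarity is the IC of the least common subsumer."""
    return ic_lcs

def lin_similarity(ic_c1: float, ic_c2: float, ic_lcs: float) -> float:
    """Lin [4]: 2 * IC(LCS) / (IC(c1) + IC(c2))."""
    return 2.0 * ic_lcs / (ic_c1 + ic_c2)

# Toy example with made-up probabilities.
ic_car, ic_bicycle, ic_vehicle = map(information_content, (0.01, 0.02, 0.2))
print(resnik_similarity(ic_vehicle), lin_similarity(ic_car, ic_bicycle, ic_vehicle))
```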
REFERENCES
[1] Miller G.A., Charles W.G. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes 6(1).
[2] Resnik P. 1995. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. Int. Joint Conf. on Artificial Intelligence, Montreal.
[3] Wu Z., Palmer M. 1994. Verb semantics and lexical selection. 32nd Annual Meeting of the Association for Computational Linguistics.
[4] Lin D. 1998. An Information-Theoretic Definition of Similarity. Int. Conf. on Machine Learning.
[5] Jiang J.J., Conrath D.W. 1997. Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy. Int. Conf. Research on Computational Linguistics.
[6] Pirrò G. 2009. A Semantic Similarity Metric Combining Features and Intrinsic Information Content. Data Knowl. Eng. 68(11).
[7] Adhikari A., Dutta B., Dutta A., Mondal D., Singh S. 2018. An intrinsic information content-based semantic similarity measure considering the disjoint common subsumers of concepts of an ontology. J. Assoc. Inf. Sci. Technol. 69(8).
[8] Adhikari A., Singh S., Mondal D., Dutta B., Dutta A. 2016. A Novel Information Theoretic Framework for Finding Semantic Similarity in WordNet. CoRR, arXiv:1607.05422.
[9] Formica A., Taglino F. 2021. An Enriched Information-Theoretic Definition of Semantic Similarity in a Taxonomy. IEEE Access, vol. 9.
[10] Information Content-based approach (Schuhmacher and Ponzetto, 2014).
[11] Linked Data Semantic Distance (LDSD) (Passant, 2010).
[12] Wikipedia Link-based Measure (WLM) (Witten and Milne, 2008).
[13] Linked Open Data Description Overlap-based approach (LODDO) (Zhou et al., 2012).
[14] Exclusivity-based (Hulpuş et al., 2015).
[15] ASRMP (El Vaigh et al., 2020).
[16] LDSDGN (Piao and Breslin, 2016).
This data collection contains all the data used in our learning question classification experiments, including question class definitions, the training and testing question sets, examples of preprocessing the questions, feature definition scripts, and examples of semantically related word features.
ABBR - 'abbreviation': expression abbreviated, etc.
DESC - 'description and abstract concepts': manner of an action, description of something, etc.
ENTY - 'entities': animals, colors, events, food, etc.
HUM - 'human beings': a group or organization of persons, an individual, etc.
LOC - 'locations': cities, countries, etc.
NUM - 'numeric values': postcodes, dates, speed, temperature, etc.
https://cogcomp.seas.upenn.edu/Data/QA/QC/
https://github.com/Tony607/Keras-Text-Transfer-Learning/blob/master/README.md
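The training and test files linked above are commonly distributed as plain text with a label prefix on each line (e.g. "DESC:manner How did serfdom develop ..."). A minimal parsing sketch, assuming that layout; the file name is only an example:

```python
def parse_questions(path: str):
    """Parse lines of the form 'COARSE:fine question text' into (coarse, fine, text) tuples."""
    examples = []
    # The original files are not guaranteed to be UTF-8; Latin-1 decodes any byte sequence.
    with open(path, encoding="latin-1") as fh:
        for line in fh:
            label, _, text = line.strip().partition(" ")
            coarse, _, fine = label.partition(":")
            examples.append((coarse, fine, text))
    return examples

# Example (file name is illustrative):
# parse_questions("train_5500.label")[0] -> e.g. ('DESC', 'manner', 'How did serfdom develop ...')
```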
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
In this paper, a fluid-structure interaction (FSI) experiment is presented. The aim of this experiment is to provide a challenging yet easy-to-set-up FSI test case that addresses the need for rigorous testing of FSI algorithms and modeling frameworks. Steady-state and periodic steady-state test cases with constant and periodic inflow were established. The focus of the experiment is on biomedical engineering applications, with flow in the laminar regime at Reynolds numbers of 1283 and 651. Flow and solid domains were defined using CAD tools. The experimental design aimed at providing a straightforward boundary condition definition. Material parameters and the mechanical response of a moderately viscous Newtonian fluid and a nonlinear incompressible solid were experimentally determined. A comprehensive data set was acquired by employing magnetic resonance imaging to record the interaction between the fluid and the solid, quantifying flow and solid motion.
Tecplot (ASCII) and MATLAB files are posted here for the static pressure coefficient data sets. To download all of the data in either Tecplot or MATLAB format, go to https://c3.nasa.gov/dashlink/resources/485/ Please consult the documentation found on this page under Support/Documentation for information regarding variable definitions, data processing, etc.
This item contains data and code used in experiments that produced the results for Sadler et al. (2022) (see below for full reference). We ran five experiments for the analysis: Experiment A, Experiment B, Experiment C, Experiment D, and Experiment AuxIn. Experiment A tested multi-task learning for predicting streamflow with 25 years of training data and a different model for each of 101 sites. Experiment B tested multi-task learning for predicting streamflow with 25 years of training data and a single model for all 101 sites. Experiment C tested multi-task learning for predicting streamflow with just 2 years of training data. Experiment D tested multi-task learning for predicting water temperature with over 25 years of training data. Experiment AuxIn used water temperature as an input variable for predicting streamflow. These experiments and their results are described in detail in the WRR paper. Data from a total of 101 sites across the US were used for the experiments. The model input data and streamflow data were from the Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) dataset (Newman et al. 2014, Addor et al. 2017). The water temperature data were gathered from the National Water Information System (NWIS) (U.S. Geological Survey, 2016). The contents of this item are broken into 13 files or groups of files aggregated into zip files:
https://www.nist.gov/open/license
The Intelligent Building Agents (IBA) project is part of the Embedded Intelligence in Buildings Program in the Engineering Laboratory at the National Institute of Standards and Technology (NIST). A key part of the IBA project is the IBA Laboratory (IBAL), a unique facility consisting of a mixed system of off-the-shelf equipment, including chillers and air handling units, controlled by a data acquisition system and capable of supporting building system optimization research under realistic and reproducible operating conditions.
The database contains the values of approximately 300 sensors/actuators in the IBAL, including both sensor measurements and control actions, as well as approximately 850 process data, which are typically related to control settings and decisions. Each of the sensors/actuators has associated metadata. The metadata, sensors/actuators, and process data are defined on the "metadata", "sensors", and "parameters" tabs in the definitions file. Data are collected every 10 s. The database contains two dashboards: 1) Experiments - select data from individual experiments, and 2) Measurements - select individual sensor/actuator and parameter data.
The Experiments Dashboard contains three sections. The "Experiment Data Plot" shows plots of the sensor/actuator data selected in the second section, "Experiment/Metadata". There are plots of both scaled and raw data (see the metadata file for the conversion from raw to scaled data). Underneath the plots is a "Download CSV" button; select that button and a csv file of the data in the plot is automatically generated. In "Experiment/Metadata", first select an "Experiment" from the options in the table on the left. A specific experiment or type of experiment can be found by entering terms in the search box. For example, searching for the word "Charge" will bring up experiments in which the ice thermal storage tank is charged. The table of experiments also includes the duration of the experiment in minutes. Once an experiment is selected, specific sensor/actuator data points can be selected from the "Measurements" table on the right. These data can be filtered by subsystem (e.g., primary loop, secondary loop, Chiller1) and/or measurement type (e.g., pressure, flow, temperature). These data will then be shown in the plots at the top. The final section, "Process", contains the process data, which are shown by subsystem. These data are not shown in the plots but can be downloaded by selecting the "Download CSV" button in the "Process" section.
The Measurements Dashboard contains three sections. The "Date Range" section is used to select the time range of the data. The "All Measurements" section is used to select specific sensor/actuator data. As in the Experiments Dashboard, these data can be filtered by subsystem and/or measurement type. The scaled and raw values of the selected data are then plotted in the "Historical Data Plot" section. The "Download CSV" button underneath the plots will automatically download the selected data.
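As an illustration of working with a CSV exported via the "Download CSV" buttons described above, here is a minimal pandas sketch; the file name and the "timestamp" column are assumptions, since the actual export layout is defined by the dashboard.

```python
import pandas as pd

# Hypothetical file and column names; the real layout comes from the dashboard export.
df = pd.read_csv("iba_experiment_export.csv", parse_dates=["timestamp"])
df = df.set_index("timestamp").sort_index()

# Data are logged every 10 s, so a 1-minute mean aggregates six samples per sensor.
per_minute = df.resample("1min").mean(numeric_only=True)
print(per_minute.head())
```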
Open Government Licence 3.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
License information was derived automatically
The Health Index is an Experimental Statistic to measure a broad definition of health, in a way that can be tracked over time and compared between different areas. These data are the provisional results of the Health Index for upper-tier local authorities in England, 2015 to 2018, to illustrate the type of analysis the Health Index can enable.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Datasets used in our experiments.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Experimental results from hydraulic fracturing experiments performed in granite and marble samples of size 30 cm × 30 cm × 45 cm under well-defined boundary conditions. Datasets include:
- pressure versus flow-rate response
- acoustic emission data from a dense network of 32 seismic sensors
- detailed description of the experimental set-up and adopted test protocol
- mechanical and petrophysical properties of the samples
- Python code for seismic data processing
This complete collection of data, obtained within the framework of the European Union’s Horizon 2020 project GEMex, is rare in its kind and indispensable for verification of model assumptions and constitutive relationships of numerical codes used for designing field-scale hydraulic fracturing experiments.
As a subset of the Japanese 55-year Reanalysis (JRA-55) project, an experiment using the global atmospheric model of JRA-55 was conducted by the Meteorological Research Institute of the Japan Meteorological Agency. The experiment, named JRA-55AMIP, was carried out by prescribing the same boundary conditions and radiative forcing as JRA-55, including the historical observed sea surface temperature, sea ice concentration, greenhouse gases, etc., with no use of atmospheric observational data. This project is intended to assess systematic errors of the model.
This release is for quarters 1 to 4 of 2019 to 2020.
Local authority commissioners and health professionals can use these resources to track how many pregnant women, children and families in their local area have received health promoting reviews at particular points during pregnancy and childhood.
The data and commentaries also show variation at a local, regional and national level. This can help with planning, commissioning and improving local services.
The metrics cover health reviews for pregnant women, children and their families at several stages:
Public Health England (PHE) collects the data, which is submitted by local authorities on a voluntary basis.
See health visitor service delivery metrics in the child and maternal health statistics collection to access data for previous years.
Find guidance on using these statistics and other intelligence resources to help you make decisions about the planning and provision of child and maternal health services.
See health visitor service metrics and outcomes definitions from Community Services Dataset (CSDS).
Since publication in November 2020, Lewisham and Leicestershire councils have identified errors in the new birth visits within 14 days data they submitted to Public Health England (PHE) for 2019 to 2020. These errors have caused a statistically significant change in the health visiting data for 2019 to 2020, and so the Office for Health Improvement and Disparities (OHID) has updated and reissued the data in OHID’s Fingertips tool.
A correction notice has been added to the 2019 to 2020 annual statistical release and statistical commentary but the data has not been altered.
Please consult OHID’s Fingertips tool for corrected data for Lewisham and Leicestershire, the London and East Midlands region, and England.
https://digital.nhs.uk/about-nhs-digital/terms-and-conditions
This release presents experimental statistics from the Mental Health Services Data Set (MHSDS), which replaces the Mental Health and Learning Disabilities Dataset (MHLDDS). As well as analysis of waiting times, this release includes elements of the reports that were previously included in monthly reports produced from final MHLDDS submissions, plus some new measures; new measures are noted in the accompanying metadata file. The changes incorporate requirements in support of Children and Young People's Improving Access to Psychological Therapies (CYP IAPT), elements of the Learning Disabilities Census (LDC) and elements of the Assuring Transformation (AT) Information Standard. Information provided in this release therefore covers mental health, learning disability and autism services for all ages ('services'). From January 2016 the release includes information on people in children's and young people's mental health services, including CAMHS, for the first time. Learning disabilities and autism services have been included since September 2014.
The expansion in the scope of the dataset means that many of the basic measures in this release now cover a wider set of services. We have introduced service-level breakdowns for some measures to provide new information to users, but also, importantly, to provide comparability with key measures that were part of the previous monthly release. Full details of the measures included in this publication can be found in the Further information about this publication section of the executive summary. Because of the scope of the changes to the dataset it will take time to re-introduce all possible measures that were previously part of the MHLDS Monthly Reports; additional measures will be added to this report in the coming months. Because the dataset is new, these measures are currently experimental statistics, and will be released as experimental statistics until the characteristics of data flowed using the new data standard are understood.
The MHSDS Monthly Data File was updated on 14 February 2017 with a correction to provider-level figures for uses of the Mental Health Act, Out of Area Treatment and perinatal mental health activity. The measures affected are: AMH09a, LDA08, LDA09, LDA10, MH08, MH08a, MH09, MH09a, MH09b, MH09c, MH10, MH10a, MH11, MHS08, MHS08a, MHS09, MHS10, MHS11, AMH34a, AMH35a, MHS23a and MHS29a. Full details for these measures can be found in the metadata file which accompanies this publication.
A correction was made to this publication on 10 September 2018. This amendment relates to statistics in the monthly CSV data file; the specific measures affected are listed in the “Corrected Measures” CSV. All listed measures have now been corrected. NHS Digital apologises for any inconvenience caused.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
This is data for only the studies with resistance training (RT) from the systematic review and meta-analysis that looked at the dose–response relationship of the effects of protein intake on lean body mass.
Again, although the authors examined studies with and without RT, this dataset contains only those studies with RT, which is about half of the studies.
The reasons for this are 1) a personal interest in RT plus protein rather than protein alone, and 2) the fact that I had to transcribe the LBM response data from the forest plot (Figure S3) by hand.
The data are from Supplementary Tables S2, S3, S4, and S5 and the forest plot (Figure S3) of the study.
In the table I compiled, each row corresponds to a distinct group in a study, by which I mean a treatment/experimental or control group. In the original supplementary data tables, each row represented a study, not a group.
I have added several columns to the data to facilitate analysis in this format.
The column "Group" with values "Experimental" or "Control", indicates whether the group is an experimental or control group in its study, the study being the column "Author and Year". Some of the studies had multiple experimental groups (e.g. Weinheimer had a 20g,40g, and 60g group along with a control). The experimental groups were kept, but only one control group per study was left to avoid redundancy.
The intake and response features were also altered to fit this format, and this is discussed in the following few sections.
I will begin with the logic behind the "Experimental protein intake (g/kg/day)" and "Control protein intake (g/kg/day)" columns.
In studies where "Protein intake during intervention (include supplementation) (g/kg/day)" was empty, "Experimental protein intake (g/kg/day)" was defined as the value of the column "Protein intake during intervention (not include supplementation) (g/kg/day)".
Conversely, in studies where "Protein intake during intervention (include supplementation) (g/kg/day)" was nonempty, "Experimental protein intake (g/kg/day)" was defined as the sum of the columns "Assigned protein amount (g/kg/day)" and "Protein intake during intervention (include supplementation) (g/kg/day)".
This was corroborated by checking a few of the studies cited in the supplementary table and by the authors' statement that they looked at "total protein intake in each group or the difference in supplemental protein doses between groups".
For instance, for the study cited as Candow (2006) [7] the "(include supplementation)" column is empty. But the values in "(not include supplementation)" exactly match Table 1 in the study.
The study cited as Weinheimer (2012) [26] meanwhile had 4 groups, one control and three experimental groups with additional supplemental protein of 20, 40 and 60 grams per day.
All four groups are cited in the Supplementary Tables as having the same value for "Protein intake during intervention (include supplementation) (g/kg/day)", 0.93. On the other hand, the "Assigned protein amount (g/kg/day)" column is 0.23, 0.47 and 0.67. Adding them up yields 1.16, 1.4 and 1.6.
From the study itself we read "Total protein intakes [...] of 0.93, 1.13, 1.43, and 1.63 g/kg/day in the 0-, 20-, 40-, and 60-g/d groups, respectively"
There is a slight discrepancy, which I believe is due to the meta-analysis recalculating intakes and dividing by the mean weight to get relative intakes. The authors of the cited study, meanwhile, had access to individual data, and the discrepancy is probably due to the mean of a ratio not equalling a ratio of means.
Thirdly, "Control protein intake (g/kg/day)" was defined as the difference of the previously defined "Experimental protein intake (g/kg/day)" and the column "Difference in protein amount between groups(g/kg/day)".
Finally, "Protein intake (g/kg/day)" was defined to be equal to either "Experimental energy intake (kcal/kg/day)" or "Control protein intake (g/kg/day)" for "Group" being "Experimental" or "Control" respectively.
Energy intake was defined with the same logic as protein intake.
(N.B. the study authors made a typo and wrote that the energy is in grams, but it is in kilocalories. Therefore, my new columns say "kcal" instead of "g" where appropriate.)
First, "Experimental energy intake (kcal/kg/day)" was defined to be equal to "Energy intake during intervention (not include supplementation) (g/kg/day)" if the column "Energy intake during intervention (include supplementation) (g/kg/day)" was empty.
If the column "Energy intake during intervention (include supplementation) (g...
Local authority commissioners and health professionals can use these resources to track how many pregnant women, children and families in their local area have received health promoting reviews at particular points during pregnancy and childhood.
The data and commentaries also show variation at a local, regional and national level. This can help with planning, commissioning and improving local services.
The metrics cover health reviews for pregnant women, children and their families at several stages:
Public Health England (PHE) collects the data, which is submitted by local authorities on a voluntary basis.
See health visitor service delivery metrics in the child and maternal health statistics collection to access data for previous years.
Find guidance on using these statistics and other intelligence resources to help you make decisions about the planning and provision of child and maternal health services.
See health visitor service metrics and outcomes definitions from Community Services Dataset (CSDS).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We demonstrate that semantic modeling with ontologies provides a robust and enduring approach to achieving FAIR data in our experimental environment. By endowing data with self‑describing semantics through ontological definitions and inference, we enable them to ‘speak’ for themselves. Building on PaNET, we define techniques in ESRFET by their characteristic building blocks. The outcome is a standards‑based framework (RDF, OWL, SWRL, SPARQL, SHACL) that encodes the semantics of experimental techniques and underpins a broader facility ontology. Our approach illustrates that, by using differential definitions, semantic enrichment through linking to multiple ontologies, and documented semantic negotiation, we standardize the description and annotation of experimental techniques, ensuring enhanced discoverability, reproducibility, and integration within the FAIR data ecosystem. This talk was held in the course of the DAPHNE4NFDI TA1 Data for science lecture series on April 29, 2025.
The following information and metadata apply to both the Phase I (Hydrodynamics) and Phase II (Full System Power Take-Off) zip folders, which contain testing data from the OSU (Oregon State University) O.H. Hinsdale Wave Research Laboratory, from both OSU and the University of Hawaii at Manoa (UH). See the zip folders provided further below in the downloads section. For experimental data of the full system, including PTO, see the Phase II dataset. There are two main directories in each Phase's zip folder: "OSU_data" and "UH_data".
The "OSU_data" directory contains data collected from their DAQ (data acquisition system), which includes all wave gauge observations, as well as body motions derived from their Qualisys motion tracking system. The organization of the directory follows OSU's convention. Detailed information on the instrument setup can be found under "OSU_data/docs/setup/instm_locations". The experiments conducted are documented in "OSU_data/docs/daq_logs", which maps the trial number to the corresponding data located under "OSU_data/data" in several formats (e.g., ".mat" and ".txt"). Inside the trial directory, data is provided for each of the instruments defined in "OSU_data/docs/setup/instm_locations".
The "UH_data" directory contains data collected from their DAQ. The data is stored in the ".tdms" file format. There are free plug-ins for Microsoft Excel and MathWorks MATLAB to read the ".tdms" format. Below are a few links providing methods to read in the data, but a Google search should identify alternative sources if these no longer exist (valid as of January 2024):
Excel: http://www.ni.com/example/27944/en/
MATLAB: https://www.mathworks.com/matlabcentral/fileexchange/30023-tdms-reader
The Excel plugin is recommended for getting a quick overview of the data.
The UH data is organized by directory name: the sub-directories for each experiment contain a directory whose name defines the wave height and period for the experimental data within. For example, the directory name "H02_T0275" corresponds to an experiment with a wave height of 0.2 m and a period of 2.75 s. For random wave data, the gamma value is also included in the directory name. For example, the directory name "H02_T0225_G18" corresponds to an experiment with a significant wave height of 0.2 m, a peak period of 2.25 s, and a gamma value of 1.8, with each spectrum being a TMA spectrum. For the free decay experiments, the directory name is defined by the initial angular displacement. For example, the directory name "ang05_run01" corresponds to an experiment with an initial angular displacement of 5 degrees.
There is a dataset in the UH data for each corresponding experiment defined in the OSU DAQ logs. The ".tdms" data is output from the DAQ at fixed intervals; therefore, if multiple files are contained within the folder, the data will need to be stitched together. Within the UH dataset, there are two input channels from the OSU DAQ providing a random square wave signal for time synchronization ("ENV-WHT-0010") and a high/low signal ("ENV-WHT-0012") to identify when the wave maker is active (+5 V). The UH data is logged as a collection of channel outputs. Channels not in use for the OSU testing (either Phase I or Phase II) are marked "nan" below. If a sensor is disconnected, it will record noise throughout the experiment.
Below are the channel definitions in terms of what they measure:
GPS Time = time
CYL-POS-0001 = position between flap and fixed reference
CYL-LCA-0001 = force between flap and hydraulic cylinder
REC-LPT-0001 = nan
REC-HPT-0001 = nan
REC-HPT-0002 = nan
REC-HPT-0003 = nan
HHT-HPT-0001 = pressure at exhaust ("head" only)
REC-FQC-0001 = nan
REC-FQC-0002 = nan
HHT-FQC-0001 = flow at exhaust ("head" only)
ENV-WHT-0001 = nan
ENV-WHT-0002 = nan
ENV-WHT-0003 = nan
ENV-WHT-0010 = random signal from OSU DAQ
ENV-WHT-0012 = high/low signal from OSU DAQ
Also included is a calibration curve to convert the string pot data to flap pi...
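The record above points to Excel and MATLAB readers for the ".tdms" files; as an additional option (not mentioned in the original), the third-party npTDMS Python package can read and stitch consecutive segments. The group and channel names below are placeholders, since they depend on the UH DAQ configuration.

```python
import numpy as np
from nptdms import TdmsFile  # third-party package: pip install npTDMS

def stitch_channel(paths, group_name, channel_name):
    """Concatenate one channel across consecutive .tdms files, in file order."""
    segments = []
    for path in sorted(paths):
        tdms = TdmsFile.read(path)
        segments.append(tdms[group_name][channel_name][:])
    return np.concatenate(segments)

# Example (placeholder group/channel names): stitch the exhaust-pressure channel across segments.
# data = stitch_channel(["run_000.tdms", "run_001.tdms"], "Data", "HHT-HPT-0001")
```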