50 datasets found
  1. Experiment 1 means and statistics for age and baseline assessments indicating experimental conditions did not differ

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Feb 20, 2013
    + more versions
    Cite
    Kilford, Emma J.; Holmes, Emily A.; James, Ella L.; Deeprose, Catherine (2013). Experiment 1 means and statistics for age and baseline assessments indicating experimental conditions did not differ. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001637152
    Dataset updated
    Feb 20, 2013
    Authors
    Kilford, Emma J.; Holmes, Emily A.; James, Ella L.; Deeprose, Catherine
    Description

    Experiment 1 means and statistics for age and baseline assessments indicating experimental conditions did not differ.

  2. Stimuli used in the experiment (see main text for the definition of the statistics)

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Jan 21, 2014
    Cite
    Segev, Ronen; Schneidman, Elad; Tkačik, Gašper; Ghosh, Anandamohan (2014). Stimuli used in the experiment (see main text for the definition of the statistics ). [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001226171
    Dataset updated
    Jan 21, 2014
    Authors
    Segev, Ronen; Schneidman, Elad; Tkačik, Gašper; Ghosh, Anandamohan
    Description

    The shorthand symbol for the stimulus starts with the C/S/K (for contrast, skew, kurtosis) and is followed by −,−−,+,++ (small magnitude and negative, large magnitude and negative, small magnitude and positive, large magnitude and positive); therefore, C+,C++,S−−,S−,S+,S++,K−−,K−,K+. Parameters in the table denoted in bold were varied in each of the three stimulus categories.
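The shorthand scheme described above is regular enough to decode programmatically. A minimal sketch (the function name and output format are illustrative, not part of the dataset):

```python
# Decode the stimulus shorthand described above: a leading C/S/K for the
# varied statistic, followed by a signed magnitude (−, −−, + or ++).
# Hypothetical helper; the dataset itself does not ship this code.
STATISTIC = {"C": "contrast", "S": "skew", "K": "kurtosis"}

def decode_stimulus(code: str) -> dict:
    """Split e.g. 'S−−' into statistic, direction, and magnitude."""
    code = code.replace("\u2212", "-")  # normalize Unicode minus
    stat, sign = code[0], code[1:]
    return {
        "statistic": STATISTIC[stat],
        "direction": "negative" if "-" in sign else "positive",
        "magnitude": "large" if len(sign) == 2 else "small",
    }
```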

  3. Data from: Precipitation manipulation experiments may be confounded by water...

    • catalog.data.gov
    • datasets.ai
    • +2more
    Updated Apr 21, 2025
    Cite
    Agricultural Research Service (2025). Data from: Precipitation manipulation experiments may be confounded by water source [Dataset]. https://catalog.data.gov/dataset/data-from-precipitation-manipulation-experiments-may-be-confounded-by-water-source-7d7bc
    Dataset updated
    Apr 21, 2025
    Dataset provided by
    Agricultural Research Service (https://www.ars.usda.gov/)
    Description

    This is digital research data corresponding to the manuscript: Reinhart, K.O., Vermeire, L.T. Precipitation Manipulation Experiments May Be Confounded by Water Source. J Soil Sci Plant Nutr (2023). https://doi.org/10.1007/s42729-023-01298-0. Files cover a 3x2x2 factorial field experiment and the water quality data used to create Table 1. Data from the experiment were used for the statistical analysis and generation of summary statistics for Figure 2.

    Purpose: This study aims to investigate the consequences of performing precipitation manipulation experiments with mineralized water in place of rainwater (i.e. demineralized water). Limited attention has been paid to the effects of water mineralization on plant and soil properties, even when the experiments are in a rainfed context.

    Methods: We conducted a 6-yr experiment with a gradient in spring rainfall (70, 100, and 130% of ambient). We tested effects of rainfall treatments on plant biomass and six soil properties and interpreted the confounding effects of dissolved solids in irrigation water.

    Results: Rainfall treatments affected all response variables. Sulfate was the most common dissolved solid in irrigation water and was 41 times more abundant in irrigated (i.e. 130% of ambient) plots than in other plots. Soils of irrigated plots also had elevated iron (16.5 µg × 10 cm-2 × 60-d vs 8.9) and pH (7.0 vs 6.8). The rainfall gradient also had a nonlinear (hump-shaped) effect on plant-available phosphorus (P). Plant and microbial biomasses are often limited by and positively associated with available P, suggesting the predicted positive linear relationship between plant biomass and P was confounded by additions of mineralized water. In other words, the unexpected nonlinear relationship was likely driven by components of mineralized irrigation water (i.e. calcium, iron) and/or shifts in soil pH that immobilized P.

    Conclusions: Our results suggest robust precipitation manipulation experiments should either capture rainwater when possible (or use demineralized water) or consider the confounding effects of mineralized water on plant and soil properties.

    Resources in this dataset:
    README.txt: Data dictionary to accompany the data files for the study.
    3x2x2 factorial dataset.csv: Data for a 3x2x2 factorial field experiment (factors: rainfall variability, mowing seasons, mowing intensity) conducted in northern mixed-grass prairie vegetation in eastern Montana, USA. Data include activity of 5 plant-available nutrients, soil pH, and plant biomass metrics. Data from 2018.
    water quality dataset.csv: Water properties (pH and common dissolved solids) of samples from the Yellowstone River collected near Miles City, Montana. Data extracted from Rinella MJ, Muscha JM, Reinhart KO, Petersen MK (2021) Water quality for livestock in northern Great Plains rangelands. Rangeland Ecol. Manage. 75: 29-34.

  4. Data set on Task unpacking effects in time estimation: The role of future boundaries and thought focus

    • scidb.cn
    Updated Dec 1, 2023
    Cite
    Shizifu; xia bi qi; Liu Xin (2023). Data set on Task unpacking effects in time estimation: The role of future boundaries and thought focus [Dataset]. http://doi.org/10.57760/sciencedb.j00052.00202
    Available as Croissant, a format for machine-learning datasets (see mlcommons.org/croissant).
    Dataset updated
    Dec 1, 2023
    Dataset provided by
    Science Data Bank
    Authors
    Shizifu; xia bi qi; Liu Xin
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    This dataset supports the study of task unpacking (decomposition) effects in time estimation: the role of future boundaries and thought focus, and includes supplementary materials. Previous research on how task decomposition affects time estimation has often overlooked the role of temporal factors; for example, given the same decomposition, people subjectively set different time boundaries when facing difficult versus easy tasks. Taking the time factor into account should therefore refine and integrate the conclusions of decomposition-effect research. On this basis, we studied the impact of task decomposition and future boundaries on time estimation. Experiment 1 used a 2 (task decomposition: decomposition/no decomposition) × 2 (future boundary: present/absent) between-subjects design, measuring participants' time estimates with a prospective paradigm. Experiment 2 further manipulated the time range of the future boundary, using a 2 (task decomposition: decomposition/no decomposition) × 3 (future boundary range: longer/medium/shorter) between-subjects design, again measuring time estimates with a prospective paradigm. Building on Experiment 2, Experiment 3 verified the mechanism by which the time range of the future boundary influences time estimation under decomposition conditions: in a single-factor between-subjects design, a thought-focus scale measured participants' thought focus under longer and shorter boundary conditions. These experiments and measurements produced the following dataset.
    Experiment 1 table, column label meanings:
    Task decomposition (grouping variable): 0 = decomposition; 1 = no decomposition
    Future boundary (grouping variable): 0 = present; 1 = absent
    Zsco01: standard score of the estimated total task time
    A logarithm: logarithmic value of the estimated time for all tasks
    Experiment 2 table, column label meanings:
    Future boundary (grouping variable): 7 = shorter; 8 = medium; 9 = longer
    The remaining data labels are the same as in Experiment 1
    Experiment 3 table, column label meanings:
    Zplan: standard score of the thought-focus plan score
    Zbar: standard score of attention to barriers
    Future boundary (grouping variable): 0 = shorter; 1 = longer
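The grouping-variable codings above lend themselves to simple lookup tables. A sketch (the table and function names are assumptions, not the actual column headers):

```python
# Decode the grouping variables documented above; illustrative only.
DECOMPOSITION = {0: "decomposition", 1: "no decomposition"}
FUTURE_BOUNDARY_EXP1 = {0: "present", 1: "absent"}
FUTURE_BOUNDARY_EXP2 = {7: "shorter", 8: "medium", 9: "longer"}
FUTURE_BOUNDARY_EXP3 = {0: "shorter", 1: "longer"}

def decode_exp1_row(decomposition_code: int, boundary_code: int) -> tuple:
    """Map one Experiment 1 row's codes to their condition labels."""
    return DECOMPOSITION[decomposition_code], FUTURE_BOUNDARY_EXP1[boundary_code]
```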

  5. Experimental Data for Question Classification

    • kaggle.com
    zip
    Updated Jan 9, 2019
    Cite
    JunYu (2019). Experimental Data for Question Classification [Dataset]. https://www.kaggle.com/owen1226/textsdata
    zip (127,653 bytes)
    Dataset updated
    Jan 9, 2019
    Authors
    JunYu
    Description

    Context

    This data collection contains all the data used in our learning question classification experiments: question class definitions, the training and testing question sets, examples of preprocessing the questions, feature definition scripts, and examples of semantically related word features.

    Content

    ABBR - 'abbreviation': expression abbreviated, etc.
    DESC - 'description and abstract concepts': manner of an action, description of sth., etc.
    ENTY - 'entities': animals, colors, events, food, etc.
    HUM - 'human beings': a group or organization of persons, an individual, etc.
    LOC - 'locations': cities, countries, etc.
    NUM - 'numeric values': postcodes, dates, speed, temperature, etc.
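The coarse class labels above map naturally to a lookup table. The sketch below also splits a labeled question line; the 'COARSE:fine question text' line format follows the UIUC question sets linked in the acknowledgements, but treat it as an assumption here:

```python
# The six coarse question classes listed above.
QUESTION_CLASSES = {
    "ABBR": "abbreviation",
    "DESC": "description and abstract concepts",
    "ENTY": "entities",
    "HUM": "human beings",
    "LOC": "locations",
    "NUM": "numeric values",
}

def coarse_label(line: str) -> str:
    """Return the coarse class name for a 'COARSE:fine question text' line."""
    label = line.split(" ", 1)[0]    # e.g. 'NUM:date'
    coarse = label.split(":", 1)[0]  # e.g. 'NUM'
    return QUESTION_CLASSES[coarse]
```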

    Acknowledgements

    https://cogcomp.seas.upenn.edu/Data/QA/QC/
    https://github.com/Tony607/Keras-Text-Transfer-Learning/blob/master/README.md

  6. Semantic Similarity with Concept Senses: new Experiment

    • data.mendeley.com
    Updated Oct 24, 2022
    + more versions
    Cite
    Francesco Taglino (2022). Semantic Similarity with Concept Senses: new Experiment [Dataset]. http://doi.org/10.17632/v2bwh7z8kj.1
    Dataset updated
    Oct 24, 2022
    Authors
    Francesco Taglino
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This dataset represents the results of the experimentation of a method for evaluating semantic similarity between concepts in a taxonomy. The method is based on the information-theoretic approach and allows senses of concepts in a given context to be considered. Relevance of senses is calculated in terms of semantic relatedness with the compared concepts. In a previous work [9], the adopted semantic relatedness method was the one described in [10], while in this work we also adopted the ones described in [11], [12], [13], [14], [15], and [16].

    We applied our proposal by extending 7 methods for computing semantic similarity in a taxonomy, selected from the literature. The methods considered in the experiment are referred to as R[2], W&P[3], L[4], J&C[5], P&S[6], A[7], and A&M[8]

    The experiment was run on the well-known Miller and Charles benchmark dataset [1] for assessing semantic similarity.

    The results are organized in seven folders, each containing the results for one of the above semantic relatedness methods. Each folder holds a set of files, one per pair of the Miller and Charles dataset: for each pair of concepts, all 28 pairs are considered as possible contexts.

    REFERENCES
    [1] Miller G.A., Charles W.G. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes 6(1).
    [2] Resnik P. 1995. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. Int. Joint Conf. on Artificial Intelligence, Montreal.
    [3] Wu Z., Palmer M. 1994. Verb semantics and lexical selection. 32nd Annual Meeting of the Association for Computational Linguistics.
    [4] Lin D. 1998. An Information-Theoretic Definition of Similarity. Int. Conf. on Machine Learning.
    [5] Jiang J.J., Conrath D.W. 1997. Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy. Int. Conf. Research on Computational Linguistics.
    [6] Pirrò G. 2009. A Semantic Similarity Metric Combining Features and Intrinsic Information Content. Data Knowl. Eng. 68(11).
    [7] Adhikari A., Dutta B., Dutta A., Mondal D., Singh S. 2018. An intrinsic information content-based semantic similarity measure considering the disjoint common subsumers of concepts of an ontology. J. Assoc. Inf. Sci. Technol. 69(8).
    [8] Adhikari A., Singh S., Mondal D., Dutta B., Dutta A. 2016. A Novel Information Theoretic Framework for Finding Semantic Similarity in WordNet. CoRR, arXiv:1607.05422.
    [9] Formica A., Taglino F. 2021. An Enriched Information-Theoretic Definition of Semantic Similarity in a Taxonomy. IEEE Access, vol. 9.
    [10] Information Content-based approach [Schuhmacher and Ponzetto, 2014].
    [11] Linked Data Semantic Distance (LDSD) [Passant, 2010].
    [12] Wikipedia Link-based Measure (WLM) [Witten and Milne, 2008].
    [13] Linked Open Data Description Overlap-based approach (LODDO) [Zhou et al. 2012].
    [14] Exclusivity-based [Hulpuş et al. 2015].
    [15] ASRMP [El Vaigh et al. 2020].
    [16] LDSDGN [Piao and Breslin, 2016].
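As a concrete illustration of the information-theoretic approach these methods share, Resnik similarity [2] scores a concept pair by the information content (IC) of their most informative common subsumer. A toy sketch over an invented mini-taxonomy (not the Miller and Charles data, and not the paper's actual implementation):

```python
import math

# Toy taxonomy (child -> parent) and concept frequencies, invented for illustration.
PARENT = {"car": "vehicle", "bicycle": "vehicle", "vehicle": "entity",
          "dog": "animal", "animal": "entity", "entity": None}
FREQ = {"car": 10, "bicycle": 5, "dog": 8}

def subsumed_freq(c):
    """Frequency of c plus the frequencies of everything it subsumes."""
    return FREQ.get(c, 0) + sum(subsumed_freq(x) for x, p in PARENT.items() if p == c)

TOTAL = subsumed_freq("entity")  # root subsumes everything

def ic(c):
    """Information content: -log p(c), with p estimated from subsumed counts."""
    return -math.log(subsumed_freq(c) / TOTAL)

def ancestors(c):
    out = set()
    while c is not None:
        out.add(c)
        c = PARENT[c]
    return out

def resnik(c1, c2):
    """Resnik similarity: IC of the most informative common subsumer."""
    return max(ic(a) for a in ancestors(c1) & ancestors(c2))
```

Siblings under "vehicle" share an informative subsumer, while "car" and "dog" meet only at the root, whose IC is zero.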

  7. Dataset for: Experiment for validation of fluid-structure interaction models and algorithms

    • wiley.figshare.com
    zip
    Updated May 31, 2023
    Cite
    Andreas Hessenthaler; N Gaddum; Ondrej Holub; Ralph Sinkus; Oliver Röhrle; David Nordsletten (2023). Dataset for: Experiment for validation of fluid-structure interaction models and algorithms [Dataset]. http://doi.org/10.6084/m9.figshare.4141836.v1
    Dataset updated
    May 31, 2023
    Dataset provided by
    Wiley (https://www.wiley.com/)
    Authors
    Andreas Hessenthaler; N Gaddum; Ondrej Holub; Ralph Sinkus; Oliver Röhrle; David Nordsletten
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    In this paper a fluid-structure interaction (FSI) experiment is presented. The aim of this experiment is to provide a challenging yet easy-to-setup FSI test case that addresses the need for rigorous testing of FSI algorithms and modeling frameworks. Steady-state and periodic steady-state test cases with constant and periodic inflow were established. Focus of the experiment is on biomedical engineering applications with flow being in the laminar regime with Reynolds numbers 1283 and 651. Flow and solid domains were defined using CAD tools. The experimental design aimed at providing a straight-forward boundary condition definition. Material parameters and mechanical response of a moderately viscous Newtonian fluid and a nonlinear incompressible solid were experimentally determined. A comprehensive data set was acquired by employing magnetic resonance imaging to record the interaction between the fluid and the solid, quantifying flow and solid motion.
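The quoted Reynolds numbers place the flow well inside the laminar regime. As a reminder of the definition they come from, a minimal sketch (the sample values are invented for illustration, not the experiment's parameters):

```python
# Reynolds number Re = rho * v * D / mu; pipe-like flows are laminar
# below roughly Re ~ 2300. Sample values below are illustrative only.
def reynolds(rho: float, v: float, d: float, mu: float) -> float:
    """density [kg/m^3] * velocity [m/s] * characteristic length [m]
    / dynamic viscosity [Pa*s]"""
    return rho * v * d / mu
```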

  8. Data from: Multi-task Deep Learning for Water Temperature and Streamflow Prediction (ver. 1.1, June 2022)

    • catalog.data.gov
    Updated Nov 11, 2025
    Cite
    U.S. Geological Survey (2025). Multi-task Deep Learning for Water Temperature and Streamflow Prediction (ver. 1.1, June 2022) [Dataset]. https://catalog.data.gov/dataset/multi-task-deep-learning-for-water-temperature-and-streamflow-prediction-ver-1-1-june-2022
    Dataset updated
    Nov 11, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    This item contains data and code used in experiments that produced the results for Sadler et al. (2022) (see below for the full reference). We ran five experiments for the analysis: Experiment A, Experiment B, Experiment C, Experiment D, and Experiment AuxIn. Experiment A tested multi-task learning for predicting streamflow with 25 years of training data and a different model for each of 101 sites. Experiment B tested multi-task learning for predicting streamflow with 25 years of training data and a single model for all 101 sites. Experiment C tested multi-task learning for predicting streamflow with just 2 years of training data. Experiment D tested multi-task learning for predicting water temperature with over 25 years of training data. Experiment AuxIn used water temperature as an input variable for predicting streamflow. These experiments and their results are described in detail in the WRR paper. Data from a total of 101 sites across the US were used for the experiments. The model input data and streamflow data were from the Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) dataset (Newman et al. 2014, Addor et al. 2017). The water temperature data were gathered from the National Water Information System (NWIS) (U.S. Geological Survey, 2016). The contents of this item are broken into 13 files or groups of files aggregated into zip files:

    1. input_data_processing.zip: A zip file containing the scripts used to collate the observations, input weather drivers, and catchment attributes for the multi-task modeling experiments
    2. flow_observations.zip: A zip file containing collated daily streamflow data for the sites used in multi-task modeling experiments. The streamflow data were originally accessed from the CAMELs dataset. The data are stored in csv and Zarr formats.
    3. temperature_observations.zip: A zip file containing collated daily water temperature data for the sites used in multi-task modeling experiments. The data were originally accessed via NWIS. The data are stored in csv and Zarr formats.
    4. temperature_sites.geojson: Geojson file of the locations of the water temperature and streamflow sites used in the analysis.
    5. model_drivers.zip: A zip file containing the daily input weather driver data for the multi-task deep learning models. These data are from the Daymet drivers and were collated from the CAMELS dataset. The data are stored in csv and Zarr formats.
    6. catchment_attrs.csv: Catchment attributes collated from the CAMELS dataset. These data are used for the Random Forest modeling. For full metadata regarding these data, see the CAMELS dataset.
    7. experiment_workflow_files.zip: A zip file containing workflow definitions used to run multi-task deep learning experiments. These are Snakemake workflows. To run a given experiment, one would run (for experiment A) 'snakemake -s expA_Snakefile --configfile expA_config.yml'
    8. river-dl-paper_v0.zip: A zip file containing python code used to run multi-task deep learning experiments. This code was called by the Snakemake workflows contained in 'experiment_workflow_files.zip'.
    9. random_forest_scripts.zip: A zip file containing Python code and a Python Jupyter Notebook used to prepare data for, train, and visualize feature importance of a Random Forest model.
    10. plotting_code.zip: A zip file containing python code and Snakemake workflow used to produce figures showing the results of multi-task deep learning experiments.
    11. results.zip: A zip file containing results of multi-task deep learning experiments. The results are stored in csv and netcdf formats. The netcdf files were used by the plotting libraries in 'plotting_code.zip'. These files are for five experiments, 'A', 'B', 'C', 'D', and 'AuxIn'. These experiment names are shown in the file name.
    12. sample_scripts.zip: A zip file containing scripts for creating sample output to demonstrate how the modeling workflow was executed.
    13. sample_output.zip: A zip file containing sample output data. Similar files are created by running the sample scripts provided.
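The Snakemake invocation quoted in item 7 follows one naming pattern across the five experiments. A trivial sketch of the command strings (string handling only, not a replacement for the workflows themselves):

```python
# Build the per-experiment Snakemake command quoted in the description.
EXPERIMENTS = ["A", "B", "C", "D", "AuxIn"]

def snakemake_command(exp: str) -> str:
    return f"snakemake -s exp{exp}_Snakefile --configfile exp{exp}_config.yml"

COMMANDS = [snakemake_command(e) for e in EXPERIMENTS]
```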
    A. Newman; K. Sampson; M. P. Clark; A. Bock; R. J. Viger; D. Blodgett, 2014. A large-sample watershed-scale hydrometeorological dataset for the contiguous USA. Boulder, CO: UCAR/NCAR. https://dx.doi.org/10.5065/D6MW2F4D

    N. Addor, A. Newman, M. Mizukami, and M. P. Clark, 2017. Catchment attributes for large-sample studies. Boulder, CO: UCAR/NCAR. https://doi.org/10.5065/D6G73C3Q

    Sadler, J. M., Appling, A. P., Read, J. S., Oliver, S. K., Jia, X., Zwart, J. A., & Kumar, V. (2022). Multi-Task Deep Learning of Daily Streamflow and Water Temperature. Water Resources Research, 58(4), e2021WR030138. https://doi.org/10.1029/2021WR030138

    U.S. Geological Survey, 2016, National Water Information System data available on the World Wide Web (USGS Water Data for the Nation), accessed Dec. 2020.

  9. Intelligent Building Agents Project Data

    • data.nist.gov
    • catalog.data.gov
    Updated Jun 14, 2022
    Cite
    Amanda Pertzborn (2022). Intelligent Building Agents Project Data [Dataset]. http://doi.org/10.18434/mds2-2751
    Dataset updated
    Jun 14, 2022
    Dataset provided by
    National Institute of Standards and Technology (http://www.nist.gov/)
    Authors
    Amanda Pertzborn
    License

    https://www.nist.gov/open/license

    Description

    The Intelligent Building Agents (IBA) project is part of the Embedded Intelligence in Buildings Program in the Engineering Laboratory at the National Institute of Standards and Technology (NIST). A key part of the IBA project is the IBA Laboratory (IBAL), a unique facility consisting of a mixed system of off-the-shelf equipment, including chillers and air handling units, controlled by a data acquisition system and capable of supporting building system optimization research under realistic and reproducible operating conditions. The database contains the values of approximately 300 sensors/actuators in the IBAL, including both sensor measurements and control actions, as well as approximately 850 process data points, which are typically related to control settings and decisions. Each sensor/actuator has associated metadata. The metadata, sensors/actuators, and process data are defined on the "metadata", "sensors", and "parameters" tabs in the definitions file. Data are collected every 10 s.

    The database contains two dashboards: 1) Experiments - select data from individual experiments, and 2) Measurements - select individual sensor/actuator and parameter data. The Experiments Dashboard contains three sections. The "Experiment Data Plot" shows plots of the sensor/actuator data selected in the second section, "Experiment/Metadata". There are plots of both scaled and raw data (see the metadata file for the conversion from raw to scaled data). Underneath the plots is a "Download CSV" button; select it and a csv file of the data in the plot is automatically generated. In "Experiment/Metadata", first select an "Experiment" from the options in the table on the left. A specific experiment or type of experiment can be found by entering terms in the search box; for example, searching for the word "Charge" will bring up experiments in which the ice thermal storage tank is charged. The table of experiments also includes the duration of each experiment in minutes. Once an experiment is selected, specific sensor/actuator data points can be selected from the "Measurements" table on the right. These data can be filtered by subsystem (e.g., primary loop, secondary loop, Chiller1) and/or measurement type (e.g., pressure, flow, temperature), and are then shown in the plots at the top. The final section, "Process", contains the process data, organized by subsystem. These data are not shown in the plots but can be downloaded via the "Download CSV" button in the "Process" section.

    The Measurements Dashboard contains three sections. The "Date Range" section is used to select the time range of the data. The "All Measurements" section is used to select specific sensor/actuator data. As in the Experiments Dashboard, these data can be filtered by subsystem and/or measurement type. The scaled and raw values of the selected data are then plotted in the "Historical Data Plot" section. The "Download CSV" button underneath the plots will automatically download the selected data.

  10. Data from: Utah FORGE: Slide-Hold-Slide Experiments on Gneiss at Increased Temperature

    • datasets.ai
    • gdr.openei.org
    • +3more
    Updated Oct 13, 2023
    + more versions
    Cite
    Department of Energy (2023). Utah FORGE: Slide-Hold-Slide Experiments on Gneiss at Increased Temperature [Dataset]. https://datasets.ai/datasets/utah-forge-slide-hold-slide-experiments-on-gneiss-at-increased-temperature
    Dataset updated
    Oct 13, 2023
    Dataset authored and provided by
    Department of Energy
    Description

    Included are data from triaxial, single-inclined-fracture friction experiments. The experiments were performed with a slide-hold-slide protocol on Utah FORGE gneiss at increased temperature. At a normal stress of ~10 MPa, temperature varies between experiments from room temperature up to 163 Celsius. Hold times vary within each experiment from ~10^1 to ~10^5 seconds. Measured are the frictional response upon reactivation after a hold period, active acoustic data (P-wave velocity and amplitude), and passive acoustic data (acoustic emission occurrence and amplitude).

    There are two types of datafiles: (1) datafiles containing the friction data, including the temperature and the active acoustic data measured during the experiment (AEXX_Gneiss_Vp_mixref4), where the suffix _Vp means the file includes the Vp (P-wave velocity) data and _mixref means a mixed reference point is used for calculating the P-wave velocity; and (2) datafiles containing the passive acoustic data, a catalog of the acoustic emissions (AEs) measured during the experiment (AEcatalog_AEXX_runX), where AEXX matches the experiment number and runX denotes during which part of the experiment the data were collected, matching the times when active acoustic data were collected. AE catalogs are split in two parts when the file size exceeds 1 GB to aid download/opening times.
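The naming conventions above can be parsed mechanically. A hypothetical sketch (it assumes the XX in AEXX and the X in runX are digits, which the description does not state explicitly):

```python
import re

# Classify a datafile by the naming scheme described above; illustrative only.
def classify_datafile(name: str):
    m = re.match(r"AEcatalog_AE(\d+)_run(\d+)", name)
    if m:  # passive acoustics: AE catalog for a given experiment and run
        return {"type": "passive", "experiment": int(m.group(1)), "run": int(m.group(2))}
    m = re.match(r"AE(\d+)_Gneiss_Vp_mixref(\d+)", name)
    if m:  # friction data with temperature and active acoustic (Vp) data
        return {"type": "friction", "experiment": int(m.group(1))}
    return None
```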

  11. JRA-55AMIP: Monthly Means and Variances Including Diurnal Statistics

    • data.ucar.edu
    • api.gdex.ucar.edu
    • +3more
    grib
    Updated Oct 9, 2025
    Cite
    Japan Meteorological Agency, Japan (2025). JRA-55AMIP: Monthly Means and Variances Including Diurnal Statistics [Dataset]. http://doi.org/10.5065/D6T72FHN
    Dataset updated
    Oct 9, 2025
    Dataset provided by
    NSF National Center for Atmospheric Research
    Authors
    Japan Meteorological Agency, Japan
    Area covered
    Earth
    Description

    As a subset of the Japanese 55-year Reanalysis (JRA-55) project, an experiment using the global atmospheric model of JRA-55 was conducted by the Meteorological Research Institute of the Japan Meteorological Agency. The experiment, named JRA-55AMIP, was carried out by prescribing the same boundary conditions and radiative forcings as JRA-55, including the historical observed sea surface temperature, sea ice concentration, greenhouse gases, etc., with no use of atmospheric observational data. This project is intended to assess systematic errors of the model.

  12. Leveraging Ontologies and Reasoning for FAIR Data in ESRF Experiments

    • meta4cat.fokus.fraunhofer.de
    • meta4ds.fokus.fraunhofer.de
    • +1more
    pdf, unknown
    Updated Jun 12, 2025
    Cite
    Zenodo (2025). Leveraging Ontologies and Reasoning for FAIR Data in ESRF Experiments [Dataset]. https://meta4cat.fokus.fraunhofer.de/datasets/oai-zenodo-org-15609374?locale=en
    Dataset updated
    Jun 12, 2025
    Dataset authored and provided by
    Zenodo (http://zenodo.org/)
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    We demonstrate that semantic modeling with ontologies provides a robust and enduring approach to achieving FAIR data in our experimental environment. By endowing data with self‑describing semantics through ontological definitions and inference, we enable them to ‘speak’ for themselves. Building on PaNET, we define techniques in ESRFET by their characteristic building blocks. The outcome is a standards‑based framework (RDF, OWL, SWRL, SPARQL, SHACL) that encodes experimental techniques’ semantics and underpins a broader facility ontology. Our approach illustrates that by using differential definitions, semantic enrichment through linking to multiple ontologies, and documented semantic negotiation, we standardize experimental techniques' descriptions and annotations, ensuring enhanced discoverability, reproducibility, and integration within the FAIR data ecosystem. This talk was held in the course of the DAPHNE4NFDI TA1 Data for Science lecture series on April 29, 2025.

  13. Hawaii Wave Surge Energy Converter (HAWSEC) OSU O.H. Hinsdale Basin

    • osti.gov
    • mhkdr.openei.org
    Updated Jun 22, 2022
    Cite
    Marine and Hydrokinetic Data Repository (MHKDR) (2022). Hawaii Wave Surge Energy Converter (HAWSEC) OSU O.H. Hinsdale Basin [Dataset]. http://doi.org/10.15473/2315014
    Dataset updated
    Jun 22, 2022
    Dataset provided by
    United States Department of Energy (http://energy.gov/)
    University of Hawaii at Manoa
    Marine and Hydrokinetic Data Repository (MHKDR)
    Area covered
    Hawaii
    Description

    The following information and metadata apply to both the Phase I (Hydrodynamics) and Phase II (Full System Power Take-Off) zip folders, which contain testing data from the OSU (Oregon State University) O.H. Hinsdale Wave Research Laboratory, from both OSU and the University of Hawaii at Manoa (UH). See the zip folders provided in the downloads section below. For experimental data of the full system, including the PTO, see the Phase II dataset.

    There are two main directories in each Phase's zip folder: "OSU_data" and "UH_data". The "OSU_data" directory contains data collected from the OSU DAQ (data acquisition system), which includes all wave gauge observations, as well as body motions derived from their Qualisys motion tracking system. The organization of the directory follows OSU's convention. Detailed information on the instrument setup can be found under "OSU_data/docs/setup/instm_locations". The experiments conducted are documented in "OSU_data/docs/daq_logs", which maps each trial number to the corresponding data located under "OSU_data/data" in several formats (e.g., ".mat" and ".txt"). Inside each trial directory, data is provided for each of the instruments defined in "OSU_data/docs/setup/instm_locations".

    The "UH_data" directory contains data collected from the UH DAQ, stored in the ".tdms" file format. There are free plug-ins for Microsoft Excel and MathWorks MATLAB to read the ".tdms" format. Below are a few links providing methods to read in the data, but a web search should identify alternative sources if these no longer exist (valid as of January 2024): Excel: http://www.ni.com/example/27944/en/ MATLAB: https://www.mathworks.com/matlabcentral/fileexchange/30023-tdms-reader The Excel plug-in is recommended for getting a quick overview of the data. The UH data is organized by directory name: the sub-directories for each experiment contain a directory whose name encodes the wave height and period of the experimental data within.
    For example, a directory name "H02_T0275" corresponds to an experiment with a wave height of 0.2 m and a period of 2.75 s. For random wave data, the gamma value is also included in the directory name: for example, "H02_T0225_G18" corresponds to an experiment with a significant wave height of 0.2 m, a peak period of 2.25 s, and a gamma value of 1.8, with each spectrum being a TMA spectrum. For the free decay experiments, the directory name is defined by the initial angular displacement: for example, "ang05_run01" corresponds to an experiment with an initial angular displacement of 5 degrees. There is a dataset in the UH data for each corresponding experiment defined in the OSU DAQ logs.

    The ".tdms" data is output from the DAQ at fixed intervals; therefore, if multiple files are contained within a folder, the data will need to be stitched together. Within the UH dataset, there are two input channels from the OSU DAQ: a random square wave signal for time synchronization ("ENV-WHT-0010") and a high/low signal ("ENV-WHT-0012") identifying when the wave maker is active (+5 V). The UH data is logged as a collection of channel outputs. Channels not in use for the OSU testing (either Phase I or Phase II) are marked "nan" below; if a sensor is disconnected, it records noise throughout the experiment. The channel definitions, in terms of what they measure, are:

    GPS Time = time
    CYL-POS-0001 = position between flap and fixed reference
    CYL-LCA-0001 = force between flap and hydraulic cylinder
    REC-LPT-0001 = nan
    REC-HPT-0001 = nan
    REC-HPT-0002 = nan
    REC-HPT-0003 = nan
    HHT-HPT-0001 = pressure at exhaust ("head" only)
    REC-FQC-0001 = nan
    REC-FQC-0002 = nan
    HHT-FQC-0001 = flow at exhaust ("head" only)
    ENV-WHT-0001 = nan
    ENV-WHT-0002 = nan
    ENV-WHT-0003 = nan
    ENV-WHT-0010 = random signal from OSU DAQ
    ENV-WHT-0012 = high/low signal from OSU DAQ

    Also included is a calibration curve to convert the string pot data to flap pi...
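    The directory-naming convention described above lends itself to programmatic parsing. A minimal Python sketch, assuming the H/T/G encodings inferred from the examples (the helper name and regex are my own, not part of the dataset):

    ```python
    import re

    def parse_run_name(name):
        """Parse a UH run-directory name like 'H02_T0225_G18'.

        Encoding assumed from the examples above:
        H02 -> wave height 0.2 m, T0225 -> period 2.25 s, G18 -> gamma 1.8.
        Returns (height_m, period_s, gamma_or_None).
        """
        m = re.fullmatch(r"H(\d+)_T(\d+)(?:_G(\d+))?", name)
        if m is None:
            raise ValueError(f"unrecognized run name: {name}")
        height = int(m.group(1)) / 10.0    # H02  -> 0.2 m
        period = int(m.group(2)) / 100.0   # T0275 -> 2.75 s
        gamma = int(m.group(3)) / 10.0 if m.group(3) else None
        return height, period, gamma
    ```

    The free-decay names ("ang05_run01") and any run-number suffixes would need their own patterns.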

  14. TCTE Level 3 Total Solar Irradiance Daily Means V004 (TCTE3TSID) at GES DISC...

    • data.nasa.gov
    • s.cnmilf.com
    • +2more
    Updated Apr 1, 2025
    + more versions
    Cite
    nasa.gov (2025). TCTE Level 3 Total Solar Irradiance Daily Means V004 (TCTE3TSID) at GES DISC [Dataset]. https://data.nasa.gov/dataset/tcte-level-3-total-solar-irradiance-daily-means-v004-tcte3tsid-at-ges-disc-09f94
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    TCTE3TSID Version 004 is the final version of this data product and supersedes all previous versions. The Total Solar Irradiance (TSI) Calibration Transfer Experiment (TCTE) data set TCTE3TSID contains daily averaged total solar irradiance (a.k.a. the solar constant) data collected by the Total Irradiance Monitor (TIM) instrument, covering the full wavelength spectrum. The data are normalized to one astronomical unit (1 AU). The TCTE/TIM instrument measures the TSI, monitoring changes in sunlight incident on the Earth's atmosphere using an ambient-temperature active cavity radiometer, to a designed absolute accuracy of 350 parts per million (ppm; 1 ppm = 0.0001%) at 1 sigma, and a precision and long-term relative accuracy of 10 ppm per year. Because these data are small, and to maximize ease of use for end users, each delivered TSI product contains science results for the entire mission in a single ASCII column-formatted file. Early in the mission, between Dec 2013 and May 2014, TCTE acquired daily measurements to establish good overlap with the SORCE TIM. From May 2014 to Dec 2014, the TCTE measurements were reduced to weekly; these greatly subsample the true solar variability and thus have little value for solar research. Beginning in Jan 2015, daily observations resumed. The mission ended June 30, 2019.
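    As a quick sanity check on the stated accuracies, ppm values convert to absolute irradiance as follows (a hedged illustration; the ~1361 W/m² reference is a commonly quoted nominal TSI value, not a number taken from this product):

    ```python
    def ppm_to_absolute(ppm, reference):
        """Convert a parts-per-million uncertainty to absolute units
        (1 ppm = 1e-6 of the reference value, i.e. 0.0001%)."""
        return ppm * 1e-6 * reference

    # Illustrative only: ~1361 W/m^2 is a nominal TSI figure.
    absolute_accuracy = ppm_to_absolute(350, 1361.0)  # ~0.48 W/m^2
    ```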

  15. Definition of our abstract data model.

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    xls
    Updated Jun 3, 2023
    Cite
    Vanessa Cedeno-Mieles; Zhihao Hu; Yihui Ren; Xinwei Deng; Noshir Contractor; Saliya Ekanayake; Joshua M. Epstein; Brian J. Goode; Gizem Korkmaz; Chris J. Kuhlman; Dustin Machi; Michael Macy; Madhav V. Marathe; Naren Ramakrishnan; Parang Saraf; Nathan Self (2023). Definition of our abstract data model. [Dataset]. http://doi.org/10.1371/journal.pone.0242453.t003
    Explore at:
    xls (available download formats)
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Vanessa Cedeno-Mieles; Zhihao Hu; Yihui Ren; Xinwei Deng; Noshir Contractor; Saliya Ekanayake; Joshua M. Epstein; Brian J. Goode; Gizem Korkmaz; Chris J. Kuhlman; Dustin Machi; Michael Macy; Madhav V. Marathe; Naren Ramakrishnan; Parang Saraf; Nathan Self
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Definition of our abstract data model.

  16. Data from: Alfalfa flux footprint experiment 2021

    • datasets.ai
    • agdatacommons.nal.usda.gov
    • +1more
    57, 8
    Updated Mar 30, 2024
    Cite
    Department of Agriculture (2024). Alfalfa flux footprint experiment 2021 [Dataset]. https://datasets.ai/datasets/alfalfa-flux-footprint-experiment-2021-a16a1
    Explore at:
    57, 8 (available download formats)
    Dataset updated
    Mar 30, 2024
    Dataset authored and provided by
    Department of Agriculture
    Description

    Four eddy-covariance (EC) sensors were deployed at two heights, upwind of and within alfalfa plot trials at the San Joaquin Valley Ag Science Center. The purpose of the experiment was to evaluate the robustness of flux footprint models under different atmospheric stability conditions. At each of the two locations, one EC sensor was mounted at an unconventionally low height (~1 m) and a second at a more typical height (~2.5 m). Supplementary sensors were co-located to measure net radiation, soil heat flux, and other parameters necessary to evaluate closure of the surface energy budget. The southeast station was located at the downwind edge of a 2-acre plot trial of irrigated alfalfa arranged in small blocks. The upwind fetch (with respect to the predominant daytime wind direction) included less than 100 m of semi-homogeneous conditions. Soil sensors were duplicated across the alfalfa blocks and the inter-block alleys, which were irrigated but not planted. The northwest station was located approximately 25 m upwind of the irrigated alfalfa plot trials in a fallow, non-irrigated bare field. Raw 10 Hz infrared gas analyzer and sonic anemometer data, and 30-minute averaged data from other sensors, are provided.

    Resources in this dataset:

    1. Title: Data dictionary for SEB files from southeast station (in zipped folder) Filename: ALF2021_SEBSE_header.csv Description: Contains variable names, description of sensors, units, and required metadata for each variable.

    2. Title: Data dictionary for SEB files from northwest station (zipped folder) Filename: ALF2021_SEBNW_header.csv Description: Contains variable names, description of sensors, units, and required metadata for each variable.

    3. Title: Data dictionary for EC files from SE station (zipped folder) Filename: ALF2021_ECSE_header.csv Description: Contains variable names, description of sensors, units, and required metadata for each variable.

    4. Title: Data dictionary for EC files from NW station (zipped folder) Filename: ALF2021_ECNW_header.csv Description: Contains variable names, description of sensors, units, and required metadata for each variable.

    5. Title: SJVASC Alfalfa 2021- NW station Filename: EC2_alfNW.zip

    6. Title: SJVASC Alfalfa 2021- SE station Filename: EC2_alfSE.zip
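    Working with the raw 10 Hz streams alongside the 30-minute averaged records typically means block-averaging the fast data onto the slow time base. A minimal sketch with toy numbers (function and variable names are my own, not from the dataset):

    ```python
    def block_average(samples, rate_hz, window_s):
        """Average a uniformly sampled series into fixed windows.
        E.g. 10 Hz data with window_s=1800 yields 30-minute means.
        A trailing partial window is dropped."""
        n = int(rate_hz * window_s)   # samples per window
        full = len(samples) // n      # number of complete windows
        return [sum(samples[i * n:(i + 1) * n]) / n for i in range(full)]

    # Toy check: 1 Hz data, 4-second windows
    means = block_average([1, 1, 3, 3, 5, 5, 7, 7], rate_hz=1, window_s=4)
    # means == [2.0, 6.0]
    ```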



  17. Data from: Defining, Comparing, and Improving iTRAQ Quantification in Mass...

    • ebi.ac.uk
    Updated Jul 31, 2017
    Cite
    Henrik J Johansson (2017). Defining, Comparing, and Improving iTRAQ Quantification in Mass Spectrometry Proteomics Data [Dataset]. https://www.ebi.ac.uk/pride/archive/projects/PXD000418
    Explore at:
    Dataset updated
    Jul 31, 2017
    Authors
    Henrik J Johansson
    Variables measured
    Proteomics
    Description

    The purpose of this study was to generate a basis for deciding which protein quantities are reliable, and to find a way to quantify proteins accurately and precisely. To investigate this, we used thousands of peptide measurements to estimate variance and bias for quantification by iTRAQ (isobaric tags for relative and absolute quantification) mass spectrometry in complex human samples. A549 cell lysate was mixed in the proportions 2:2:1:1:2:2:1:1, fractionated by high-resolution isoelectric focusing and liquid chromatography, and analyzed on three mass spectrometry platforms: LTQ Orbitrap Velos, 4800 MALDI-TOF/TOF, and 6530 Q-TOF. We investigated how variance and bias in the iTRAQ reporter ion data are affected by common experimental variables such as sample amount, sample fractionation, fragmentation energy, and instrument platform. Based on this, we propose a concept for experimental design and a methodology for protein quantification. By using duplicate samples in each run, each experiment is validated based on its internal experimental variation. The duplicates are used for calculating peptide weights, unique to the experiment, which are used in the protein quantification. By weighting the peptides depending on reporter ion intensity, we can decrease the relative error in quantification at the protein level and assign a total weight to each protein that reflects the protein quantification confidence. We also demonstrate the usability of this methodology in a cancer cell line experiment as well as in a clinical data set of lung cancer tissue samples. In conclusion, in this study we developed a methodology for improved protein quantification in shotgun proteomics and introduced a way to assess quantification for proteins with few peptides. The experimental design and developed algorithms decreased the relative protein quantification error in the analysis of complex biological samples.
    Data analysis:

    LTQ Orbitrap Velos: Proteome Discoverer 1.1 with Mascot 2.2 (Matrix Science) was used for protein identification. Precursor mass tolerance was set to 10 ppm; for fragments, 0.8 Da and 0.015 Da were used for detection in the linear ion trap and the Orbitrap, respectively. Oxidized methionine was set as a dynamic modification, and carbamidomethylation, N-terminal 8plex iTRAQ, and lysyl 8plex iTRAQ as fixed modifications.

    4800 MALDI-TOF/TOF: Peptide identification from the MALDI-TOF/TOF data was carried out using the Paragon algorithm in the ProteinPilot 2.0 software package (Applied Biosystems). Default settings for a 4800 instrument were used (i.e., no manual settings for mass tolerance were given). The following parameters were selected in the analysis method: iTRAQ 8plex peptide labeled as sample type, IAA as alkylating agent of cysteine, trypsin as digesting enzyme, 4800 as instrument, gel-based ID and urea denaturation as special factors, biological modifications as ID focus, and thorough ID as search effort.

    6530 Q-TOF: Peptide identification from the Q-TOF data was carried out using the Spectrum Mill Protein Identification software (Agilent). Data were extracted between MH+ 600 and 4000 Da (Agilent's definition). Trypsin was used as digesting enzyme, and parent and daughter ion tolerances were set to 25 and 50 ppm, respectively. IAA for cysteine and iTRAQ partial-mix (N-term, K) were set as fixed modifications, while oxidized methionine was set as a variable modification.

    Database and peptide cut-off for all searches: Searches were performed against the IPI database (build 3.64), limited to human sequences, allowing 2 missed cleavages. The false discovery rate (FDR) was estimated by searching the data against a database consisting of both forward and reversed sequences, and was set to < 1% at the protein level using MAYU. Peptides corresponding to a < 1% protein FDR were used in the calculations.

    Peptide and protein identification using Mascot for comparison between instruments: Peptide identifications were performed using Mascot Daemon 2.3.2 with Mascot 2.4 for fractions 32 to 36 from IPG-IEF with 400 µg of loaded peptides. Carbamidomethylation (CAM) of cysteine was set as a fixed modification, oxidized methionine as a variable modification, and iTRAQ 8plex as quantification for all searches. MALDI-TOF/TOF search settings: parent and daughter ion tolerances were set to 150 ppm and 0.2 Da, respectively. LTQ Orbitrap search settings: precursor mass tolerance was set to 10 ppm; for fragments, 0.8 Da and 0.015 Da were used for data generated in the linear ion trap and the Orbitrap, respectively. Q-TOF search settings: parent and daughter ion tolerances were set to 25 and 50 ppm, respectively.
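    The target-decoy FDR estimate mentioned above can be sketched as a simple decoy-to-target ratio (an illustration only; the MAYU procedure used in the study is more involved):

    ```python
    def decoy_fdr(n_target, n_decoy):
        """Estimate FDR from a concatenated target-decoy search:
        decoy hits approximate the number of false target hits."""
        if n_target == 0:
            return 0.0
        return n_decoy / n_target

    # e.g. 9 decoy hits among 1000 target hits -> 0.9% FDR, under a 1% cutoff
    fdr = decoy_fdr(1000, 9)
    ```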

  18. Dataset of Concurrent EEG, ECG, and Behavior with Multiple Doses of...

    • zenodo.org
    • data.niaid.nih.gov
    Updated Oct 1, 2025
    Cite
    Nigel Gebodh; Nigel Gebodh (2025). Dataset of Concurrent EEG, ECG, and Behavior with Multiple Doses of transcranial Electrical Stimulation-Exp1-Data Downsampled [Dataset]. http://doi.org/10.5281/zenodo.8401160
    Explore at:
    Dataset updated
    Oct 1, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Nigel Gebodh; Nigel Gebodh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    GX Dataset downsampled - Experiment 1

    The GX Dataset is a dataset of combined tES, EEG, physiological, and behavioral signals from human subjects.
    Here, the GX Dataset for Experiment 1 is downsampled to 1 kHz and saved in .MAT format, which can be read in both MATLAB and Python.
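    Downsampling by an integer factor can be illustrated with naive decimation (a sketch only, not the authors' code; a real pipeline would low-pass filter first to avoid aliasing):

    ```python
    def decimate(signal, factor):
        """Keep every `factor`-th sample, e.g. factor=2 takes a
        2 kHz recording down to 1 kHz. Naive: no anti-alias filter."""
        return signal[::factor]

    down = decimate([0, 1, 2, 3, 4, 5, 6, 7], 2)
    # down == [0, 2, 4, 6]
    ```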

    Publication

    A full data descriptor is published in Nature Scientific Data. Please cite this work as:

    Gebodh, N., Esmaeilpour, Z., Datta, A. et al. Dataset of concurrent EEG, ECG, and behavior with multiple doses of transcranial electrical stimulation. Sci Data 8, 274 (2021). https://doi.org/10.1038/s41597-021-01046-y

    Descriptions

    A dataset combining high-density electroencephalography (EEG) with physiological and continuous behavioral metrics during transcranial electrical stimulation (tES). Data include within-subject application of nine High-Definition tES (HD-tES) types, targeting three brain regions (frontal, motor, parietal) with three waveforms (DC, 5 Hz, 30 Hz); more than 783 total stimulation trials were collected over 62 sessions with EEG, physiological (ECG, EOG), and continuous behavioral vigilance/alertness metrics.

    Acknowledgments

    Portions of this study were funded by X (formerly Google X), the Moonshot Factory. The funding source had no influence on study conduction or result evaluation. MB is further supported by grants from the National Institutes of Health: R01NS101362, R01NS095123, R01NS112996, R01MH111896, R01MH109289, and (to NG) NIH-G-RISE T32GM136499.

    Extras

    Back to Full GX Dataset : https://doi.org/10.5281/zenodo.4456079

    For the downsampled data (1 kHz) in .mat format, please see:

    Code used to import, process, and plot this dataset can be found here:

    Additional figures for this project have been shared on Figshare. Trial-wise figures can be found here:

    The full dataset is also provided in BIDS format here:

    Data License
    Creative Common 4.0 with attribution (CC BY 4.0)

    NOTE

    Please email ngebodh01@citymail.cuny.edu with any questions.

    Follow @NigelGebodh for the latest updates.


    Updates

    • Version 2
      • Stimulation trigger labels have been adjusted; the previous labels were mismatched for Experiment 1's data.

  19. IntroDS

    • kaggle.com
    zip
    Updated Sep 7, 2023
    Cite
    Dayche (2023). IntroDS [Dataset]. https://www.kaggle.com/datasets/rouzbeh/introds
    Explore at:
    zip (2564 bytes; available download formats)
    Dataset updated
    Sep 7, 2023
    Authors
    Dayche
    Description

    A dataset for beginners starting out with the data science process. It contains simple clinical data suited to problem definition and solving; a range of data science tasks, such as classification, clustering, EDA, and statistical analysis, can be practiced with it.

    The columns present in the dataset are:

    • Age: numerical (age of patient)
    • Sex: binary (gender of patient)
    • BP: nominal (blood pressure of patient; values: Low, Normal, High)
    • Cholesterol: nominal (cholesterol of patient; values: Normal, High)
    • Na: numerical (sodium level of patient)
    • K: numerical (potassium level of patient)
    • Drug: nominal (type of drug prescribed by the doctor; values: A, B, C, X, Y)
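    Loading one such record for a classification task might look like the following sketch (the row values and numeric encodings are made up for illustration, not taken from the dataset):

    ```python
    # Hypothetical row following the column dictionary above.
    row = {"Age": 47, "Sex": "F", "BP": "High",
           "Cholesterol": "High", "Na": 0.74, "K": 0.056, "Drug": "Y"}

    # Encode the nominal columns for a classifier (encodings are my own choice).
    BP_LEVELS = {"Low": 0, "Normal": 1, "High": 2}
    CHOL_LEVELS = {"Normal": 0, "High": 1}

    features = [row["Age"],
                1 if row["Sex"] == "F" else 0,
                BP_LEVELS[row["BP"]],
                CHOL_LEVELS[row["Cholesterol"]],
                row["Na"], row["K"]]
    label = row["Drug"]
    # features == [47, 1, 2, 1, 0.74, 0.056], label == "Y"
    ```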

  20. EEG and motion capture data set for a full-body/joystick rotation task

    • openneuro.org
    Updated Feb 2, 2023
    Cite
    K. Gramann; F.U. Hohlefeld; L. Gehrke; M Klug (2023). EEG and motion capture data set for a full-body/joystick rotation task [Dataset]. http://doi.org/10.18112/openneuro.ds004460.v1.0.0
    Explore at:
    Dataset updated
    Feb 2, 2023
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    K. Gramann; F.U. Hohlefeld; L. Gehrke; M Klug
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Overview

    This is the "Spot roration" dataset. It contains EEG and motion data collected from 20 subjects collected at the Berlin Mobile Brain-Body Imaging Lab, while they rotated their heading in physical space or on flat screen using a joystick. Detailed description of the paradigm can be found in the following reference:

    Gramann, K., Hohlefeld, F. U., Gehrke, L., and Klug, M. "Human cortical dynamics during full-body heading changes." Scientific Reports 11, 18186 (2021). https://doi.org/10.1038/s41598-021-97749-8

    Citing this dataset

    Please cite as follows:

    Gramann, K., Hohlefeld, F.U., Gehrke, L. et al. Human cortical dynamics during full-body heading changes. Sci Rep 11, 18186 (2021). https://doi.org/10.1038/s41598-021-97749-8

    For more information, see the dataset_description.json file.

    License

    This motion_spotrotation dataset is made available under the CC BY-NC 4.0 license.

    Human-readable license information can be found at:

    https://creativecommons.org/licenses/by-nc/4.0/

    Format

    The dataset is formatted according to the Brain Imaging Data Structure. See the dataset_description.json file for the specific version used.

    Generally, you can find data in the .tsv files and descriptions in the accompanying .json files.

    An important BIDS definition to consider is the "Inheritance Principle", which is described in the BIDS specification under the following link:

    https://bids-specification.rtfd.io/en/stable/02-common-principles.html#the-inheritance-principle

    The section states that:

    Any metadata file (such as .json, .bvec or .tsv) may be defined at any directory level, but no more than one applicable file may be defined at a given level [...] The values from the top level are inherited by all lower levels unless they are overridden by a file at the lower level.
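    In effect, the Inheritance Principle is a key-wise merge in which lower-level metadata files override higher-level ones. A minimal sketch (the file contents and key values are hypothetical):

    ```python
    def merge_inherited(*levels):
        """Apply the BIDS Inheritance Principle to metadata dicts:
        later (lower-level) files override keys from earlier
        (top-level) ones; all other keys are inherited."""
        merged = {}
        for level in levels:
            merged.update(level)
        return merged

    top = {"SamplingFrequency": 1000, "EEGReference": "FCz"}
    run = {"SamplingFrequency": 500}  # hypothetical lower-level override
    meta = merge_inherited(top, run)
    # meta == {"SamplingFrequency": 500, "EEGReference": "FCz"}
    ```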

    Details about the experiment

    For a detailed description of the task, see Gramann et al. (2021). What follows is a brief summary.

    Data were collected from 20 healthy adults (11 female) with a mean age of 30.25 years (SD = 7.68, range 20 to 46), who received 10 €/h or course credit as compensation. All participants reported normal or corrected-to-normal vision and no history of neurological disease. Eighteen participants reported being right-handed (two left-handed).

    To control for the effects of different reference frame proclivities on neural dynamics, the online version of the spatial reference frame proclivity test (RFPT [44, 45]) was administered prior to the experiment. Participants had to consistently use an egocentric or allocentric reference frame in at least 80% of their responses. Of the 20 participants, nine preferentially used an egocentric reference frame, nine used an allocentric reference frame, and two used a mixed strategy. One participant (egocentric reference frame) dropped out of the experiment after the first block due to motion sickness and was removed from further data analyses; the reported results are based on the remaining 19 participants. The experimental procedures were approved by the local ethics committee (Technische Universität Berlin, Germany) and the research was performed in accordance with the ethics guidelines. The study was conducted in accordance with the Declaration of Helsinki, and all participants signed written informed consent.

    Participants performed a spatial orientation task in a sparse virtual environment (WorldViz Vizard, Santa Barbara, USA) consisting of an infinite floor granulated in green and black. The experiment was self-paced and participants advanced the experiment by starting and ending each trial with a button press using the index finger of the dominant hand. A trial started with the onset of a red pole, which participants had to face and align with. Once the button was pressed the pole disappeared and was immediately replaced by a red sphere floating at eye level. The sphere automatically started to move around the participant along a circular trajectory at a fixed distance (30 m) with one of two different velocity profiles. Participants were asked to rotate on the spot and to follow the sphere, keeping it in the center of their visual field (outward rotation). The sphere stopped unpredictably at varying eccentricity between 30° and 150° and turned blue, which indicated that participants had to rotate back to the initial heading (backward rotation). When participants had reproduced their estimated initial heading, they confirmed their heading with a button press and the red pole reappeared for reorientation.

    The participants completed the experimental task twice, using (i) a traditional desktop 2D setup (visual flow controlled through joystick movement; “joyR”), and (ii) equipped with a MoBI setup (visual flow controlled through active physical rotation with the whole body; “physR”). The condition order was balanced across participants. To ensure the comparability of both rotation conditions, participants carried the full motion capture system at all times. In the joyR condition participants stood in the dimly lit experimental hall in front of a standard TV monitor (1.5 m viewing distance, HD resolution, 60 Hz refresh rate, 40″ diagonal size) and were instructed to move as little as possible. They followed the sphere by tilting the joystick and were thus only able to use visual flow information to complete the task. In the physical rotation condition participants were situated in a 3D virtual reality environment using a head mounted display (HTC Vive; 2 × 1080 × 1200 resolution, 90 Hz refresh rate, 110° field of view). Participants’ movements were unconstrained, i.e., in order to follow the sphere they physically rotated on the spot, thus enabling them to use motor and kinesthetic information (i.e., vestibular input and proprioception) in addition to the visual flow for completing the task. If participants diverged from the center position as determined through motion capture of the head position, the task automatically halted and participants were asked to regain center position, indicated by a yellow floating sphere, before continuing with the task. Each movement condition was preceded by recording a three-minute baseline, during which the participants were instructed to stand still and to look straight ahead.

    Data Recordings: EEG. EEG data were recorded from 157 active electrodes at a sampling rate of 1000 Hz and band-pass filtered from 0.016 Hz to 500 Hz (BrainAmp Move System, Brain Products, Gilching, Germany). Using an elastic cap with an equidistant design (EASYCAP, Herrsching, Germany), 129 electrodes were placed on the scalp, and 28 electrodes were placed around the neck using a custom neckband (EASYCAP, Herrsching, Germany) in order to record neck muscle activity. Data were referenced to an electrode located closest to the standard position FCz. Impedances were kept below 10 kΩ for standard locations on the scalp, and below 50 kΩ for the neckband. Electrode locations were digitized using an optical tracking system (Polaris Vicra, NDI, Waterloo, ON, Canada).

    Data Recordings: Motion Capture. Two different motion capture data sources were used: 19 red active light-emitting diodes (LEDs) were captured using 31 cameras of the Impulse X2 System (PhaseSpace Inc., San Leandro, CA, USA) with a sampling rate of 90 Hz. They were placed on the feet (2 x 4 LEDs), around the hips (5 LEDs), on the shoulders (4 LEDs), and on the HTC Vive (2 LEDs; to account for an offset in yaw angle between the PhaseSpace and the HTC Vive tracking). Except for the two LEDs on the HTC Vive, they were subsequently grouped together to form rigid body parts of feet, hip, and shoulders, enabling tracking with six degrees of freedom (x, y, and z position and roll, yaw, and pitch orientation) per body part. Head motion capture data (position and orientation) was acquired using the HTC Lighthouse tracking system with 90Hz sampling rate, since it was also used for the positional tracking of the virtual reality view.

    The original data were recorded in .xdf format using labstreaminglayer (https://github.com/sccn/labstreaminglayer) and are stored in the /sourcedata directory. To comply with the BIDS format, the .xdf files were converted to BrainVision format (see the .eeg file for binary EEG data, the .vhdr text header file containing metadata, and the .vmrk text file storing the EEG markers).

