CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset supports the study of task decomposition effects on time estimation: the role of future boundaries and thought focus, with supplementary materials. Previous research on how task decomposition affects time estimation often overlooked the role of time factors; for example, given the same decomposition, people subjectively set different time boundaries when facing difficult versus easy tasks. Accounting for the time factor should therefore refine and integrate the research conclusions on decomposition effects. On this basis, we studied the impact of task decomposition and future boundaries on time estimation. Experiment 1 used a 2 (task decomposition: decomposition/no decomposition) × 2 (future boundary: present/absent) between-subjects design, measuring participants' time estimates with the prospective paradigm. Experiment 2 further manipulated the time range of the future boundary, using a 2 (task decomposition: decomposition/no decomposition) × 3 (future boundary range: longer/medium/shorter) between-subjects design, again measuring time estimates with the prospective paradigm. Building on Experiment 2, Experiment 3 verified the mechanism by which the time range of the future boundary influences time estimation under decomposition conditions: in a single-factor between-subjects design, a thought-focus scale measured participants' thought focus under the longer and shorter boundary conditions. These experiments and measurements yielded the following dataset.
Experiment 1 table, column label meanings: Task decomposition is a grouping variable: 0 = decomposition; 1 = no decomposition. Future boundary is a grouping variable: 0 = present; 1 = absent. Zsco01: standard score of the estimated total task time. A logarithm: the logarithmic value of the estimated time for all tasks. Experiment 2 table, column label meanings: Future boundary is a grouping variable: 7 = shorter, 8 = medium, 9 = longer. The remaining labels are the same as in Experiment 1. Experiment 3 table, column label meanings: Zplan: standard score of the focus-on-plans score. Zbar: standard score of the focus-on-barriers score. Future boundary is a grouping variable: 0 = shorter, 1 = longer.
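The two derived columns (the logarithm of the estimated time and its standard score) can be illustrated with a short sketch; the input values below are hypothetical and are not taken from the dataset:

```python
import math

def log_and_zscore(estimates):
    """Log-transform raw time estimates, then standardize (z-score) them.

    Mirrors the two derived columns described above: the logarithm of the
    estimated total task time, and its standard score. Input values are
    hypothetical, not taken from the dataset.
    """
    logs = [math.log10(t) for t in estimates]
    mean = sum(logs) / len(logs)
    sd = (sum((x - mean) ** 2 for x in logs) / (len(logs) - 1)) ** 0.5
    return logs, [(x - mean) / sd for x in logs]

logs, z = log_and_zscore([10, 100, 1000])
# the middle value equals the mean of the logs, so its z-score is 0
```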
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset represents the results of the experimentation of a method for evaluating semantic similarity between concepts in a taxonomy. The method is based on the information-theoretic approach and allows senses of concepts in a given context to be considered. Relevance of senses is calculated in terms of semantic relatedness with the compared concepts. In a previous work [9], the adopted semantic relatedness method was the one described in [10], while in this work we also adopted the ones described in [11], [12], [13], [14], [15], and [16].
We applied our proposal by extending 7 methods for computing semantic similarity in a taxonomy, selected from the literature. The methods considered in the experiment are referred to as R[2], W&P[3], L[4], J&C[5], P&S[6], A[7], and A&M[8]
The experiment was run on the well-known Miller and Charles benchmark dataset [1] for assessing semantic similarity.
The results are organized in seven folders, each holding the results for one of the above semantic relatedness methods. Each folder contains a set of files, each referring to one pair of the Miller and Charles dataset. For each pair of concepts, each of the 28 pairs is considered as a possible different context.
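The folder layout described above can be sketched as follows; the folder and pair names are assumptions for illustration, and only the 7-method / 28-pair structure comes from the description:

```python
from itertools import product

# Folder names here are shorthand for the seven methods listed above;
# the actual folder names in the dataset may differ.
methods = ["R", "W&P", "L", "J&C", "P&S", "A", "A&M"]
pairs = [f"pair{i:02d}" for i in range(1, 29)]  # the 28 Miller & Charles pairs

# One result per (pair, context) combination, the context itself being
# any of the 28 pairs.
layout = {m: list(product(pairs, pairs)) for m in methods}
# each method folder covers 28 x 28 = 784 (pair, context) combinations
```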
REFERENCES [1] Miller G.A., Charles W.G. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes 6(1). [2] Resnik P. 1995. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. Int. Joint Conf. on Artificial Intelligence, Montreal. [3] Wu Z., Palmer M. 1994. Verb semantics and lexical selection. 32nd Annual Meeting of the Associations for Computational Linguistics. [4] Lin D. 1998. An Information-Theoretic Definition of Similarity. Int. Conf. on Machine Learning. [5] Jiang J.J., Conrath D.W. 1997. Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy. Inter. Conf. Research on Computational Linguistics. [6] Pirrò G. 2009. A Semantic Similarity Metric Combining Features and Intrinsic Information Content. Data Knowl. Eng, 68(11). [7] Adhikari A., Dutta B., Dutta A., Mondal D., Singh S. 2018. An intrinsic information content-based semantic similarity measure considering the disjoint common subsumers of concepts of an ontology. J. Assoc. Inf. Sci. Technol. 69(8). [8] Adhikari A., Singh S., Mondal D., Dutta B., Dutta A. 2016. A Novel Information Theoretic Framework for Finding Semantic Similarity in WordNet. CoRR, arXiv:1607.05422, abs/1607.05422. [9] Formica A., Taglino F. 2021. An Enriched Information-Theoretic Definition of Semantic Similarity in a Taxonomy. IEEE Access, vol. 9. [10] Information Content-based approach [Schuhmacher and Ponzetto, 2014]. [11] Linked Data Semantic Distance (LDSD) [Passant, 2010]. [12] Wikipedia Link-based Measure (WLM ) [Witten and Milne, 2008]; [13] Linked Open Data Description Overlap-based approach (LODDO) [Zhou et al. 2012] [14] Exclusivity-based [Hulpuş et al 2015] [15] ASRMP [El Vaigh et al. 2020] [16] LDSDGN [Piao and Breslin, 2016]
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An available dataset consists of a total of 77 female participants in two experiments (N = 34 and 43 for Experiments 1 and 2, respectively). Behavioral data include accuracy and reaction time during the face classification tasks, attractiveness rating scores, and self-assessment manikin (SAM) rating scores for the valence and arousal dimensions. Event-related potential data include mean amplitudes of the N170 (120-170 ms), P200 (200-230 ms), early posterior negativity (EPN, 240-280 ms), P300 (300-500 ms), and late positive potential (LPP, 500-1000 ms) components. Datasets for Experiment 2 (i.e., experiment2_behavioral_data.csv and experiment2_erp_data.csv) use abbreviated headings with the following meanings: "smk" means Makeup × One's own face condition; "snm" means No makeup × One's own face condition; "omk" means Makeup × Another female's face condition; "onm" means No makeup × Another female's face condition. The data were collected at the Shiseido Global Innovation Center under the approval of its ethical committee.
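The component windows listed above translate directly into mean-amplitude extraction; a minimal sketch using synthetic data (the sampling rate and flat waveform are assumptions, not properties of the published recordings):

```python
import numpy as np

# Component windows in ms, as described above.
WINDOWS = {"N170": (120, 170), "P200": (200, 230),
           "EPN": (240, 280), "P300": (300, 500), "LPP": (500, 1000)}

def mean_amplitudes(erp, times):
    """Average an ERP waveform inside each component window.

    `erp` is a 1-D array of amplitudes and `times` the matching sample
    times in ms; both are synthetic here.
    """
    out = {}
    for name, (lo, hi) in WINDOWS.items():
        mask = (times >= lo) & (times < hi)
        out[name] = float(erp[mask].mean())
    return out

times = np.arange(0, 1000)                    # 1 kHz sampling, assumed
erp = np.ones_like(times, dtype=float)        # flat synthetic waveform
amps = mean_amplitudes(erp, times)
# every window of a flat unit waveform averages to 1.0
```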
Tecplot (ASCII) and MATLAB files are posted here for the static pressure coefficient data sets. To download all of the data in either Tecplot or MATLAB format, you can go to https://c3.nasa.gov/dashlink/resources/485/ Please consult the documentation found on this page under Support/Documentation for information regarding variable definitions, data processing, etc.
This data collection contains all the data used in our learning question classification experiments, which has question class definitions, the training and testing question sets, examples of preprocessing the questions, feature definition scripts and examples of semantically related word features.
ABBR - 'abbreviation': expression abbreviated, etc. DESC - 'description and abstract concepts': manner of an action, description of sth., etc. ENTY - 'entities': animals, colors, events, food, etc. HUM - 'human beings': a group or organization of persons, an individual, etc. LOC - 'locations': cities, countries, etc. NUM - 'numeric values': postcodes, dates, speed, temperature, etc.
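The class labels above pair with the question text in the distributed question sets, whose lines take the form `COARSE:fine question ...`; a small parsing sketch, assuming that line format:

```python
# The six coarse classes listed above.
COARSE = {"ABBR", "DESC", "ENTY", "HUM", "LOC", "NUM"}

def parse_line(line):
    """Split one labeled question line into (coarse, fine, question).

    Assumes the `COARSE:fine question ...` layout used by the training
    and testing question sets.
    """
    label, question = line.split(" ", 1)
    coarse, fine = label.split(":", 1)
    if coarse not in COARSE:
        raise ValueError(f"unknown coarse class: {coarse}")
    return coarse, fine, question

coarse, fine, q = parse_line("NUM:date When did the Titanic sink ?")
```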
https://cogcomp.seas.upenn.edu/Data/QA/QC/ https://github.com/Tony607/Keras-Text-Transfer-Learning/blob/master/README.md
This is digital research data corresponding to the manuscript: Reinhart, K.O., Vermeire, L.T. Precipitation Manipulation Experiments May Be Confounded by Water Source. J Soil Sci Plant Nutr (2023). https://doi.org/10.1007/s42729-023-01298-0 Files for a 3x2x2 factorial field experiment and water quality data used to create Table 1. Data for the experiment were used for the statistical analysis and generation of summary statistics for Figure 2. Purpose: This study investigates the consequences of performing precipitation manipulation experiments with mineralized water in place of rainwater (i.e., demineralized water). Limited attention has been paid to the effects of water mineralization on plant and soil properties, even when the experiments are in a rainfed context. Methods: We conducted a 6-yr experiment with a gradient in spring rainfall (70, 100, and 130% of ambient). We tested effects of rainfall treatments on plant biomass and six soil properties and interpreted the confounding effects of dissolved solids in irrigation water. Results: Rainfall treatments affected all response variables. Sulfate was the most common dissolved solid in irrigation water and was 41 times more abundant in irrigated plots (i.e., 130% of ambient) than in other plots. Soils of irrigated plots also had elevated iron (16.5 vs 8.9 µg per 10 cm² per 60 d) and pH (7.0 vs 6.8). The rainfall gradient also had a nonlinear (hump-shaped) effect on plant-available phosphorus (P). Plant and microbial biomasses are often limited by and positively associated with available P, suggesting the predicted positive linear relationship between plant biomass and P was confounded by additions of mineralized water. In other words, the unexpected nonlinear relationship was likely driven by components of mineralized irrigation water (i.e., calcium, iron) and/or shifts in soil pH that immobilized P.
Conclusions: Our results suggest robust precipitation manipulation experiments should either capture rainwater when possible (or use demineralized water) or consider the confounding effects of mineralized water on plant and soil properties.

Resources in this dataset:

Resource Title: Readme file - Data dictionary. File Name: README.txt. Resource Description: File contains the data dictionary accompanying the data files for the research study.

Resource Title: 3x2x2 factorial dataset.csv. File Name: 3x2x2 factorial dataset.csv. Resource Description: Dataset for a 3x2x2 factorial field experiment (factors: rainfall variability, mowing seasons, mowing intensity) conducted in northern mixed-grass prairie vegetation in eastern Montana, USA. Data include activity of 5 plant-available nutrients, soil pH, and plant biomass metrics. Data from 2018.

Resource Title: water quality dataset.csv. File Name: water quality dataset.csv. Resource Description: Water properties (pH and common dissolved solids) of samples from the Yellowstone River collected near Miles City, Montana. Data extracted from Rinella MJ, Muscha JM, Reinhart KO, Petersen MK (2021) Water quality for livestock in northern Great Plains rangelands. Rangeland Ecol. Manage. 75: 29-34.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
This repository hosts the Testing Roads for Autonomous VEhicLes (TRAVEL) dataset. TRAVEL is an extensive collection of virtual roads that have been used for testing lane assist/keeping systems (i.e., driving agents), together with data from their execution in a state-of-the-art, physically accurate driving simulator called BeamNG.tech. Virtual roads consist of sequences of road points interpolated using cubic splines.
Along with the data, this repository contains instructions on how to install the tooling necessary to generate new data (i.e., test cases) and analyze them in the context of test regression. We focus on test selection and test prioritization, given their importance for developing high-quality software following the DevOps paradigms.
This dataset builds on top of our previous work in this area, including work on
test generation (e.g., AsFault, DeepJanus, and DeepHyperion) and the SBST CPS tool competition (SBST2021),
test selection: SDC-Scissor and related tool
test prioritization: automated test cases prioritization work for SDCs.
Dataset Overview
The TRAVEL dataset is available under the data folder and is organized as a set of experiments folders. Each of these folders is generated by running the test-generator (see below) and contains the configuration used for generating the data (experiment_description.csv), various statistics on generated tests (generation_stats.csv) and found faults (oob_stats.csv). Additionally, the folders contain the raw test cases generated and executed during each experiment (test..json).
The following sections describe what each of those files contains.
Experiment Description
The experiment_description.csv contains the settings used to generate the data, including:
Time budget. The overall generation budget in hours. This budget includes both the time to generate and execute the tests as driving simulations.
The size of the map. The size of the squared map defines the boundaries inside which the virtual roads develop in meters.
The test subject. The driving agent that implements the lane-keeping system under test. The TRAVEL dataset contains data generated testing the BeamNG.AI and the end-to-end Dave2 systems.
The test generator. The algorithm that generated the test cases. The TRAVEL dataset contains data obtained using various algorithms, ranging from naive and advanced random generators to complex evolutionary algorithms, for generating tests.
The speed limit. The maximum speed at which the driving agent under test can travel.
Out of Bound (OOB) tolerance. The test cases' oracle that defines the tolerable amount of the ego-car that can lie outside the lane boundaries. This parameter ranges between 0.0 and 1.0. In the former case, a test failure triggers as soon as any part of the ego-vehicle goes out of the lane boundary; in the latter case, a test failure triggers only if the entire body of the ego-car falls outside the lane.
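Under this reading, the oracle reduces to a threshold test on the out-of-bound fraction of the ego-car; a sketch of that interpretation (illustrative only, not the test generator's exact implementation):

```python
def oob_failure(oob_fraction, tolerance):
    """Oracle sketch: fail a test when the fraction of the ego-car lying
    outside the lane boundaries exceeds the configured OOB tolerance.
    Both values are in [0.0, 1.0]. This is one plausible reading of the
    parameter described above, not the exact code used by the generator.
    """
    return oob_fraction > tolerance

# tolerance 0.0: any part of the vehicle outside the lane fails the test
low_tol_fail = oob_failure(0.05, 0.0)
# a high tolerance (e.g. 0.95) lets most of the car leave the lane
high_tol_fail = oob_failure(0.5, 0.95)
```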
Experiment Statistics
The generation_stats.csv contains statistics about the test generation, including:
Total number of generated tests. The number of tests generated during an experiment. This number is broken down into the number of valid tests and invalid tests. Valid tests contain virtual roads that do not self-intersect and contain turns that are not too sharp.
Test outcome. The test outcome contains the number of passed tests, failed tests, and tests in error. Passed and failed tests are defined by the OOB tolerance and an additional (implicit) oracle that checks whether the ego-car is moving or standing still. Tests that did not pass because of other errors (e.g., the simulator crashed) are reported in a separate category.
The TRAVEL dataset also contains statistics about the failed tests, including the overall number of failed tests (total oob) and its breakdown into OOB that happened while driving left or right. Further statistics about the diversity (i.e., sparseness) of the failures are also reported.
Test Cases and Executions
Each test..json contains information about a test case and, if the test case is valid, the data observed during its execution as a driving simulation.
The data about the test case definition include:
The road points. The list of points in a 2D space that identifies the center of the virtual road, and their interpolation using cubic splines (interpolated_points)
The test ID. The unique identifier of the test in the experiment.
Validity flag and explanation. A flag that indicates whether the test is valid or not, and a brief message describing why the test is not considered valid (e.g., the road contains sharp turns or the road self intersects)
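The relationship between the road points and their interpolation can be sketched with a parametric cubic spline; the index-based parametrization, sample count, and example coordinates below are assumptions, only the cubic-spline technique comes from the description:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_road(road_points, samples=50):
    """Densify a polyline of 2-D road points with a parametric cubic
    spline, analogous to the interpolated_points field described above
    (the generator's exact parametrization may differ).
    """
    pts = np.asarray(road_points, dtype=float)
    t = np.arange(len(pts))                    # simple index parametrization
    spline = CubicSpline(t, pts, axis=0)       # one spline per coordinate
    tt = np.linspace(0, len(pts) - 1, samples)
    return spline(tt)

dense = interpolate_road([(0, 0), (50, 10), (100, 0), (150, -10)])
# the spline passes through the first and last road points exactly
```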
The test data are organized according to the following JSON Schema and can be interpreted as RoadTest objects provided by the tests_generation.py module.
{
  "type": "object",
  "properties": {
    "id": { "type": "integer" },
    "is_valid": { "type": "boolean" },
    "validation_message": { "type": "string" },
    "road_points": { "type": "array", "items": { "$ref": "schemas/pair" } },
    "interpolated_points": { "type": "array", "items": { "$ref": "schemas/pair" } },
    "test_outcome": { "type": "string" },
    "description": { "type": "string" },
    "execution_data": { "type": "array", "items": { "$ref": "schemas/simulationdata" } }
  },
  "required": [ "id", "is_valid", "validation_message", "road_points", "interpolated_points" ]
}
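A minimal loader for such files, standing in for the RoadTest objects from tests_generation.py, might look like the sketch below; the field names follow the schema above, while the sample payload is invented:

```python
import json

# Required fields from the JSON Schema above.
REQUIRED = ["id", "is_valid", "validation_message",
            "road_points", "interpolated_points"]

def load_road_test(text):
    """Parse one test-case JSON payload and check the schema's required
    keys. A minimal stand-in for the RoadTest objects provided by
    tests_generation.py.
    """
    test = json.loads(text)
    missing = [k for k in REQUIRED if k not in test]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return test

sample = json.dumps({
    "id": 1, "is_valid": False,
    "validation_message": "road self-intersects",   # invented example
    "road_points": [[0, 0], [10, 5]],
    "interpolated_points": [[0, 0], [5, 2.5], [10, 5]],
})
test = load_road_test(sample)
```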
Finally, the execution data contain a list of timestamped state information recorded by the driving simulation. State information is collected at constant frequency and includes absolute position, rotation, and velocity of the ego-car, its speed in km/h, and control inputs from the driving agent (steering, throttle, and braking). Additionally, execution data contain OOB-related data, such as the lateral distance between the car and the lane center and the OOB percentage (i.e., how much of the car is outside the lane).
The simulation data adhere to the following (simplified) JSON Schema and can be interpreted as Python objects using the simulation_data.py module.
{
  "$id": "schemas/simulationdata",
  "type": "object",
  "properties": {
    "timer": { "type": "number" },
    "pos": { "type": "array", "items": { "$ref": "schemas/triple" } },
    "vel": { "type": "array", "items": { "$ref": "schemas/triple" } },
    "vel_kmh": { "type": "number" },
    "steering": { "type": "number" },
    "brake": { "type": "number" },
    "throttle": { "type": "number" },
    "is_oob": { "type": "number" },
    "oob_percentage": { "type": "number" }
  },
  "required": [ "timer", "pos", "vel", "vel_kmh", "steering", "brake", "throttle", "is_oob", "oob_percentage" ]
}
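A short sketch of consuming such records: the field names come from the schema above, while the values (and the subset of fields shown) are synthetic:

```python
def summarize_execution(records):
    """Pull a few headline numbers out of a list of simulation-data
    records: top speed, worst out-of-bound fraction, and whether the
    ego-car ever left the lane.
    """
    max_speed = max(r["vel_kmh"] for r in records)
    max_oob = max(r["oob_percentage"] for r in records)
    went_oob = any(r["is_oob"] for r in records)
    return max_speed, max_oob, went_oob

# Synthetic records carrying a subset of the schema's fields.
records = [
    {"timer": 0.0, "vel_kmh": 0.0, "is_oob": 0, "oob_percentage": 0.0},
    {"timer": 0.1, "vel_kmh": 62.5, "is_oob": 0, "oob_percentage": 0.0},
    {"timer": 0.2, "vel_kmh": 64.0, "is_oob": 1, "oob_percentage": 0.31},
]
speed, oob, failed = summarize_execution(records)
```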
Dataset Content
The TRAVEL dataset is a lively initiative so the content of the dataset is subject to change. Currently, the dataset contains the data collected during the SBST CPS tool competition, and data collected in the context of our recent work on test selection (SDC-Scissor work and tool) and test prioritization (automated test cases prioritization work for SDCs).
SBST CPS Tool Competition Data
The data collected during the SBST CPS tool competition are stored inside data/competition.tar.gz. The file contains the test cases generated by Deeper, Frenetic, AdaFrenetic, and Swat, the open-source test generators submitted to the competition and executed against BeamNG.AI with an aggression factor of 0.7 (i.e., conservative driver).
Name | Map Size (m x m) | Max Speed (km/h) | Budget (h) | OOB Tolerance (%) | Test Subject
DEFAULT | 200 × 200 | 120 | 5 (real time) | 0.95 | BeamNG.AI - 0.7
SBST | 200 × 200 | 70 | 2 (real time) | 0.5 | BeamNG.AI - 0.7
Specifically, the TRAVEL dataset contains 8 repetitions for each of the above configurations for each test generator totaling 64 experiments.
SDC Scissor
With SDC-Scissor we collected data based on the Frenetic test generator. The data is stored inside data/sdc-scissor.tar.gz. The following table summarizes the used parameters.
Name | Map Size (m x m) | Max Speed (km/h) | Budget (h) | OOB Tolerance (%) | Test Subject
SDC-SCISSOR | 200 × 200 | 120 | 16 (real time) | 0.5 | BeamNG.AI - 1.5
The dataset contains 9 experiments with the above configuration. For generating your own data with SDC-Scissor follow the instructions in its repository.
Dataset Statistics
Here is an overview of the TRAVEL dataset: generated tests, executed tests, and faults found by all the test generators, grouped by experiment configuration. Some 25,845 test cases were generated by running 4 test generators 8 times in 2 configurations using the SBST CPS Tool Competition code pipeline (SBST in the table). We ran the test generators for 5 hours, allowing the ego-car a generous speed limit (120 km/h) and defining a high OOB tolerance (i.e., 0.95), and we also ran them with a smaller generation budget (i.e., 2 hours) and speed limit (i.e., 70 km/h) while setting the OOB tolerance to a lower value (i.e., 0.85). We also collected some 5,971 additional tests with SDC-Scissor (SDC-Scissor in the table) by running it 9 times for 16 hours using Frenetic as the test generator and defining a more realistic OOB tolerance (i.e., 0.50).
Generating new Data
Generating new data, i.e., test cases, can be done using the SBST CPS Tool Competition pipeline and the driving simulator BeamNG.tech.
Extensive instructions on how to install both pieces of software are reported in the SBST CPS Tool Competition pipeline documentation.
Open Government Licence 3.0 http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
License information was derived automatically
The Health Index is an Experimental Statistic to measure a broad definition of health, in a way that can be tracked over time and compared between different areas. These data are the provisional results of the Health Index for upper-tier local authorities in England, 2015 to 2018, to illustrate the type of analysis the Health Index can enable.
Local authority commissioners and health professionals can use these resources to track how many pregnant women, children and families in their local area have received health promoting reviews at particular points during pregnancy and childhood.
The data and commentaries also show variation at a local, regional and national level. This can help with planning, commissioning and improving local services.
The metrics cover health reviews for pregnant women, children and their families at several stages:
Public Health England (PHE) collects the data, which is submitted by local authorities on a voluntary basis.
See health visitor service delivery metrics in the child and maternal health statistics collection to access data for previous years.
Find guidance on using these statistics and other intelligence resources to help you make decisions about the planning and provision of child and maternal health services.
See health visitor service metrics and outcomes definitions from Community Services Dataset (CSDS).
This release is for quarters 1 to 4 of 2019 to 2020.
Since publication in November 2020, Lewisham and Leicestershire councils have identified errors in the new-birth-visits-within-14-days data they submitted to Public Health England (PHE) for 2019 to 2020. These errors caused a statistically significant change in the health visiting data for 2019 to 2020, so the Office for Health Improvement and Disparities (OHID) has updated and reissued the data in OHID’s Fingertips tool.
A correction notice has been added to the 2019 to 2020 annual statistical release and statistical commentary but the data has not been altered.
Please consult OHID’s Fingertips tool for corrected data for Lewisham and Leicestershire, the London and East Midlands region, and England.
Included are data from triaxial, single-inclined-fracture friction experiments. The experiments were performed with a slide-hold-slide protocol on Utah FORGE gneiss at elevated temperature. At ~10 MPa normal stress, temperature varies between experiments from room temperature up to 163 °C. Hold times vary within an experiment from ~10^1 to ~10^5 seconds. Measured are the frictional response upon reactivation after a hold period, active acoustic data (P-wave velocity and amplitude), and passive acoustic data (acoustic emission occurrence and amplitude). There are two types of data files: (1) files containing the friction data, including the temperature and the active acoustic data measured during the experiment (AEXX_Gneiss_Vp_mixref4); the suffix _Vp means the file includes P-wave velocity data, and _mixref means a mixed reference point is used for calculating the P-wave velocity. And (2) files containing the passive acoustic data, a catalog of the acoustic emissions (AEs) measured during the experiment (AEcatalog_AEXX_runX), where AEXX matches the experiment number and runX denotes the part of the experiment in which the data were collected, matching the times when active acoustic data were collected. AE catalogs are split in two when the file size exceeds 1 GB, to aid download/opening times.
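The AEXX/runX naming convention lets friction files and AE catalogs be grouped per experiment; a small sketch, with hypothetical file names and extensions (only the AEXX/runX pattern comes from the description above):

```python
import re

# Hypothetical file names following the naming scheme described above;
# the extensions and experiment numbers are assumptions.
files = [
    "AE01_Gneiss_Vp_mixref4.csv",
    "AEcatalog_AE01_run1.csv",
    "AEcatalog_AE01_run2.csv",
    "AE02_Gneiss_Vp_mixref4.csv",
    "AEcatalog_AE02_run1.csv",
]

def group_by_experiment(names):
    """Group friction files with their AE catalogs by experiment number
    (the digits following 'AE' in AEXX).
    """
    groups = {}
    for n in names:
        m = re.search(r"AE(\d+)", n)
        if m:
            groups.setdefault(m.group(1), []).append(n)
    return groups

groups = group_by_experiment(files)
```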
This item contains data and code used in experiments that produced the results for Sadler et al. (2022) (see below for full reference). We ran five experiments for the analysis: Experiment A, Experiment B, Experiment C, Experiment D, and Experiment AuxIn. Experiment A tested multi-task learning for predicting streamflow with 25 years of training data and a different model for each of 101 sites. Experiment B tested multi-task learning for predicting streamflow with 25 years of training data and a single model for all 101 sites. Experiment C tested multi-task learning for predicting streamflow with just 2 years of training data. Experiment D tested multi-task learning for predicting water temperature with over 25 years of training data. Experiment AuxIn used water temperature as an input variable for predicting streamflow. These experiments and their results are described in detail in the WRR paper. Data from a total of 101 sites across the US were used for the experiments. The model input data and streamflow data were from the Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) dataset (Newman et al. 2014; Addor et al. 2017). The water temperature data were gathered from the National Water Information System (NWIS) (U.S. Geological Survey, 2016). The contents of this item are broken into 13 files or groups of files aggregated into zip files:
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset was generated as part of the CSE Research Project course. The topic of this research was assessing whether the CHSH game quantum network application should be included in the first quantum network benchmark suite. Hence, the main research question was whether the CHSH game application is sensitive to errors in the overall quantum network system. To answer this question, multiple experiments were performed using the SquidASM software development kit to simulate different quantum networks. This dataset contains all the raw data obtained from these experiments, i.e., the input and output values of each CHSH game. Each experiment assumes a perfect quantum network apart from a single property that is used as the independent variable. Then, using the three performance metrics defined in the attached paper, three plots are created in the workbook file. For more details, the full paper of the study can be found in the data link section.
U.S. Government Works https://www.usa.gov/government-works
License information was derived automatically
The USDA-Agricultural Research Service Central Plains Experimental Range (CPER) is a Long-Term Agroecosystem Research (LTAR) network site located ~20 km northeast of Nunn, in north-central Colorado, USA. In 1939, scientists established the Long-term Grazing Intensity study (LTGI) with four replications of light, moderate, and heavy grazing. Each replication had three 129.5 ha pastures with the grazing intensity treatment randomly assigned. Today, one replication remains. Light grazing occurs in pasture 23W (9.3 Animal Unit Days (AUD)/ha, targeted for 20% utilization of peak growing-season biomass), moderate grazing in pasture 15E (12.5 AUD/ha, 40% utilization), and heavy grazing in pasture 23E (18.6 AUD/ha, 60% utilization). British- and continental-breed yearling cattle graze the pastures season-long from mid-May to October except when forage limitations shorten the grazing season. Individual raw data on cattle entry and exit weights, as well as weights every 28 days during the grazing season, are available from 2000 to 2019. Cattle entry and exit weights are included in this dataset. Weight outliers (± 2 SD) are flagged for calculating summary statistics or performing statistical analysis.

Resources in this dataset:

Resource Title: Data Dictionary for LTGI Cattle weights on CPER (2000-2019). File Name: LTGI_2000-2019_data_dictionary.csv. Resource Description: Data dictionary accompanying data from the USDA ARS Central Plains Experimental Range (CPER) near Nunn, CO: cattle weight gains managed with light, moderate, and heavy grazing intensities.

Resource Title: LTGI Cattle weights on CPER (2000-2019). File Name: LTGI_2000-2019_all_weights_published.csv. Resource Description: Data from the USDA ARS Central Plains Experimental Range (CPER) near Nunn, CO: cattle weight gains managed with light, moderate, and heavy grazing intensities.
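The ±2 SD outlier rule described above can be illustrated with a short sketch (illustrative implementation with made-up weights; the published files carry their own flag column):

```python
def flag_outliers(weights):
    """Flag values more than 2 sample standard deviations from the mean,
    mirroring the ±2 SD rule described above. The weights below are
    invented for illustration.
    """
    n = len(weights)
    mean = sum(weights) / n
    sd = (sum((w - mean) ** 2 for w in weights) / (n - 1)) ** 0.5
    return [abs(w - mean) > 2 * sd for w in weights]

# six plausible weights plus one obvious recording error
flags = flag_outliers([300, 310, 305, 295, 302, 298, 700])
```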
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Datasets used in our experiments.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
GX Dataset downsampled - Experiment 1
The GX Dataset is a dataset of combined tES, EEG, physiological, and behavioral signals from human subjects.
Here the GX Dataset for Experiment 1 is downsampled to 1 kHz and saved in .MAT format, which can be used in both MATLAB and Python.
Publication
A full data descriptor is published in Nature Scientific Data. Please cite this work as:
Gebodh, N., Esmaeilpour, Z., Datta, A. et al. Dataset of concurrent EEG, ECG, and behavior with multiple doses of transcranial electrical stimulation. Sci Data 8, 274 (2021). https://doi.org/10.1038/s41597-021-01046-y
Descriptions
A dataset combining high-density electroencephalography (EEG) with physiological and continuous behavioral metrics during transcranial electrical stimulation (tES). Data include within-subject application of nine High-Definition tES (HD-tES) types targeting three brain regions (frontal, motor, parietal) with three waveforms (DC, 5 Hz, 30 Hz), with more than 783 total stimulation trials over 62 sessions with EEG, physiological (ECG, EOG), and continuous behavioral vigilance/alertness metrics.
Acknowledgments
Portions of this study were funded by X (formerly Google X), the Moonshot Factory. The funding source had no influence on study conduction or result evaluation. MB is further supported by grants from the National Institutes of Health: R01NS101362, R01NS095123, R01NS112996, R01MH111896, R01MH109289, and (to NG) NIH-G-RISE T32GM136499.
Extras
Back to Full GX Dataset : https://doi.org/10.5281/zenodo.4456079
For downsampled data (1 kHz), please see (in .mat format):
Code used to import, process, and plot this dataset can be found here:
Additional figures for this project have been shared on Figshare. Trial-wise figures can be found here:
The full dataset is also provided in BIDS format here:
Data License
Creative Commons Attribution 4.0 (CC BY 4.0)
NOTE
Please email ngebodh01@citymail.cuny.edu with any questions.
Updates
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Independent variables characterising the performed task, derived from a study that qualified for further analysis.
https://www.nist.gov/open/license
The Intelligent Building Agents (IBA) project is part of the Embedded Intelligence in Buildings Program in the Engineering Laboratory at the National Institute of Standards and Technology (NIST). A key part of the IBA project is the IBA Laboratory (IBAL), a unique facility consisting of a mixed system of off-the-shelf equipment, including chillers and air handling units, controlled by a data acquisition system and capable of supporting building system optimization research under realistic and reproducible operating conditions. The database contains the values of approximately 300 sensors/actuators in the IBAL, including both sensor measurements and control actions, as well as approximately 850 process data points, which are typically related to control settings and decisions. Each sensor/actuator has associated metadata. The metadata, sensors/actuators, and process data are defined on the "metadata", "sensors", and "parameters" tabs in the definitions file. Data are collected every 10 s.
The database contains two dashboards: 1) Experiments - select data from individual experiments, and 2) Measurements - select individual sensor/actuator and parameter data.
The Experiments Dashboard contains three sections. The "Experiment Data Plot" shows plots of the sensor/actuator data selected in the second section, "Experiment/Metadata". There are plots of both scaled and raw data (see the metadata file for the conversion from raw to scaled data). Underneath the plots is a "Download CSV" button; select it and a CSV file of the plotted data is automatically generated. In "Experiment/Metadata", first select an "Experiment" from the options in the table on the left. A specific experiment or type of experiment can be found by entering terms in the search box; for example, searching for the word "Charge" will bring up experiments in which the ice thermal storage tank is charged. The table of experiments also includes the duration of each experiment in minutes. Once an experiment is selected, specific sensor/actuator data points can be selected from the "Measurements" table on the right. These data can be filtered by subsystem (e.g., primary loop, secondary loop, Chiller1) and/or measurement type (e.g., pressure, flow, temperature), and are then shown in the plots at the top. The final section, "Process", contains the process data, organized by subsystem. These data are not shown in the plots but can be downloaded by selecting the "Download CSV" button in the "Process" section.
The Measurements Dashboard also contains three sections. The "Date Range" section is used to select the time range of the data. The "All Measurements" section is used to select specific sensor/actuator data; as in the Experiments Dashboard, these data can be filtered by subsystem and/or measurement type. The scaled and raw values of the selected data are then plotted in the "Historical Data Plot" section. The "Download CSV" button underneath the plots downloads the selected data.
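A downloaded CSV can be post-processed with pandas. The snippet below is a minimal sketch: the column name and the linear raw-to-scaled conversion coefficients are hypothetical placeholders, not values from the definitions file, and the inline CSV stands in for an actual dashboard export.

```python
# Sketch of post-download processing for a CSV exported from the IBAL
# Experiments dashboard. The column name "CHW_supply_temp_raw" and the
# slope/offset below are assumptions; consult the definitions file for
# the actual metadata and conversion.
import io
import pandas as pd

csv_text = """timestamp,CHW_supply_temp_raw
2023-01-01 00:00:00,512
2023-01-01 00:00:10,515
2023-01-01 00:00:20,518
"""
df = pd.read_csv(io.StringIO(csv_text), parse_dates=["timestamp"])

# Hypothetical linear raw-to-scaled conversion taken from the metadata file:
slope, offset = 0.01, 0.0
df["CHW_supply_temp_scaled"] = slope * df["CHW_supply_temp_raw"] + offset

# Verify the 10 s sampling interval stated in the description.
interval = df["timestamp"].diff().dropna().dt.total_seconds().unique()
print(interval)  # [10.]
```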
This document presents the Concise Experiment Plan for NASA's Arctic-Boreal Vulnerability Experiment (ABoVE) to serve as a guide to the Program as it identifies the research to be conducted under this study. Research for ABoVE will link field-based, process-level studies with geospatial data products derived from airborne and satellite remote sensing, providing a foundation for improving the analysis and modeling capabilities needed to understand and predict ecosystem responses and societal implications. The ABoVE Concise Experiment Plan (ACEP) outlines the conceptual basis for the Field Campaign and expresses the compelling rationale explaining the scientific and societal importance of the study. It presents both the science questions driving ABoVE research as well as the top-level requirements for a study design to address them.
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
There are six datasets in this collection. Common symbols and abbreviations used in the datasets are defined in the resource titled "Symbols and Abbreviations for Bushland, TX, Weighing Lysimeter Datasets". Datasets consist of Excel (xlsx) files. Each xlsx file contains an Introductory tab that explains the other tabs, lists the authors, describes the conventions and symbols used, and lists any instruments used. The remaining tabs in a file consist of dictionary and data tabs. The six datasets are as follows:
Agronomic Calendars for the Bushland, Texas Winter Wheat Datasets
Growth and Yield Data for the Bushland, Texas Winter Wheat Datasets
Weighing Lysimeter Data for the Bushland, Texas Winter Wheat Datasets
Soil Water Content Data for the Bushland, Texas, Large Weighing Lysimeter Experiments
Evapotranspiration, Irrigation, Dew/frost - Water Balance Data for the Bushland, Texas Winter Wheat Datasets
Standard Quality Controlled Research Weather Data – USDA-ARS, Bushland, Texas
See the README for descriptions of each dataset. The soil is a Pullman series fine, mixed, superactive, thermic Torrertic Paleustoll. Soil properties are given in the resource titled "Soil Properties for the Bushland, TX, Weighing Lysimeter Datasets". The land slope in the lysimeter fields is
Resources in this dataset: Resource Title: Geographic Coordinates of Experimental Assets, Weighing Lysimeter Experiments, USDA, ARS, Bushland, Texas. File Name: Geographic Coordinates, USDA, ARS, Bushland, Texas.xlsx. Resource Description: The file gives the UTM latitude and longitude of important experimental assets of the Bushland, Texas, USDA, ARS, Conservation & Production Research Laboratory (CPRL). Locations include weather stations [Soil and Water Management Research Unit (SWMRU) and CPRL], large weighing lysimeters, and the corners of the fields within which each lysimeter was centered. There were four fields, designated NE, SE, NW, and SW, and a weighing lysimeter was centered in each field. The SWMRU weather station was adjacent to and immediately east of the NE and SE lysimeter fields.
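The tabbed xlsx layout described above can be loaded with pandas, which returns every tab at once when `sheet_name=None`. The snippet below builds a small stand-in workbook first; the sheet names, column names, and values are placeholders, not the actual Bushland file contents.

```python
# Sketch: loading a Bushland-style xlsx dataset and iterating its tabs.
# All sheet/column names and values below are illustrative placeholders;
# each real file documents its own tabs on the Introductory tab.
import pandas as pd

# Build a small stand-in workbook with an Introductory tab, a dictionary
# tab, and a data tab, mirroring the layout described above.
with pd.ExcelWriter("bushland_demo.xlsx") as writer:
    pd.DataFrame({"Note": ["Explains the other tabs"]}).to_excel(
        writer, sheet_name="Introductory", index=False)
    pd.DataFrame({"Column": ["ET_mm"],
                  "Meaning": ["Daily evapotranspiration, mm"]}).to_excel(
        writer, sheet_name="Dictionary", index=False)
    pd.DataFrame({"Date": ["1992-10-01"], "ET_mm": [3.2]}).to_excel(
        writer, sheet_name="Data", index=False)

# sheet_name=None loads every tab into a dict of DataFrames.
sheets = pd.read_excel("bushland_demo.xlsx", sheet_name=None)
print(sorted(sheets))  # ['Data', 'Dictionary', 'Introductory']
```

Pairing each data tab with its dictionary tab this way keeps column meanings and units alongside the values.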