74 datasets found
  1. Stimuli used in the experiment (see main text for the definition of the...

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Jan 21, 2014
    Cite
    Segev, Ronen; Schneidman, Elad; Tkačik, Gašper; Ghosh, Anandamohan (2014). Stimuli used in the experiment (see main text for the definition of the statistics). [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001226171
    Explore at:
    Dataset updated
    Jan 21, 2014
    Authors
    Segev, Ronen; Schneidman, Elad; Tkačik, Gašper; Ghosh, Anandamohan
    Description

    The shorthand symbol for each stimulus starts with C, S, or K (for contrast, skew, or kurtosis) and is followed by −, −−, +, or ++ (small magnitude and negative, large magnitude and negative, small magnitude and positive, large magnitude and positive); hence C+, C++, S−−, S−, S+, S++, K−−, K−, K+. Parameters in the table denoted in bold were varied in each of the three stimulus categories.

  2. Data from: Precipitation manipulation experiments may be confounded by water...

    • catalog.data.gov
    • datasets.ai
    • +2 more
    Updated Apr 21, 2025
    Cite
    Agricultural Research Service (2025). Data from: Precipitation manipulation experiments may be confounded by water source [Dataset]. https://catalog.data.gov/dataset/data-from-precipitation-manipulation-experiments-may-be-confounded-by-water-source-7d7bc
    Explore at:
    Dataset updated
    Apr 21, 2025
    Dataset provided by
    Agricultural Research Service (https://www.ars.usda.gov/)
    Description

    This is digital research data corresponding to the manuscript: Reinhart, K.O., Vermeire, L.T. Precipitation Manipulation Experiments May Be Confounded by Water Source. J Soil Sci Plant Nutr (2023). https://doi.org/10.1007/s42729-023-01298-0. It includes files for a 3x2x2 factorial field experiment and the water quality data used to create Table 1. Data from the experiment were used for the statistical analysis and generation of summary statistics for Figure 2.

    Purpose: This study aims to investigate the consequences of performing precipitation manipulation experiments with mineralized water in place of rainwater (i.e. demineralized water). Limited attention has been paid to the effects of water mineralization on plant and soil properties, even when the experiments are in a rainfed context.

    Methods: We conducted a 6-yr experiment with a gradient in spring rainfall (70, 100, and 130% of ambient). We tested effects of rainfall treatments on plant biomass and six soil properties and interpreted the confounding effects of dissolved solids in irrigation water.

    Results: Rainfall treatments affected all response variables. Sulfate was the most common dissolved solid in irrigation water and was 41 times more abundant in irrigated (i.e. 130% of ambient) than other plots. Soils of irrigated plots also had elevated iron (16.5 µg × 10 cm-2 × 60-d vs 8.9) and pH (7.0 vs 6.8). The rainfall gradient also had a nonlinear (hump-shaped) effect on plant available phosphorus (P). Plant and microbial biomasses are often limited by and positively associated with available P, suggesting the predicted positive linear relationship between plant biomass and P was confounded by additions of mineralized water. In other words, the unexpected nonlinear relationship was likely driven by components of mineralized irrigation water (i.e. calcium, iron) and/or shifts in soil pH that immobilized P.

    Conclusions: Our results suggest robust precipitation manipulation experiments should either capture rainwater when possible (or use demineralized water) or consider the confounding effects of mineralized water on plant and soil properties.

    Resources in this dataset:
    Resource Title: Readme file - Data dictionary. File Name: README.txt. Resource Description: File contains the data dictionary to accompany the data files for the research study.
    Resource Title: 3x2x2 factorial dataset. File Name: 3x2x2 factorial dataset.csv. Resource Description: Dataset for a 3x2x2 factorial field experiment (factors: rainfall variability, mowing season, mowing intensity) conducted in northern mixed-grass prairie vegetation in eastern Montana, USA. Data include activity of 5 plant-available nutrients, soil pH, and plant biomass metrics. Data from 2018.
    Resource Title: water quality dataset. File Name: water quality dataset.csv. Resource Description: Water properties (pH and common dissolved solids) of samples from the Yellowstone River collected near Miles City, Montana. Data extracted from Rinella MJ, Muscha JM, Reinhart KO, Petersen MK (2021) Water quality for livestock in northern Great Plains rangelands. Rangeland Ecol. Manage. 75: 29-34.

  3. Micro Terraforming Data

    • kaggle.com
    zip
    Updated Jul 15, 2023
    Cite
    Bechitra Kumar Paul (2023). Micro Terraforming Data [Dataset]. https://www.kaggle.com/datasets/bechitra/micro-terraforming-data
    Explore at:
    Available download formats: zip (141457141 bytes)
    Dataset updated
    Jul 15, 2023
    Authors
    Bechitra Kumar Paul
    Description

    This dataset simulates a series of experiments carried out in a controlled biosphere for the growth of a specific plant. Each row represents one day in a specific experiment.

    The dataset contains 19 columns:

    1. experiment_id: A unique identifier for each experiment. Integer values from 0 to 9999, as there are 10,000 unique experiments.

    2. day: The day of the experiment, from 0 to 99, since each experiment lasts 100 days.

    3. temperature: The average temperature of the biosphere for that day, measured in degrees Celsius. It's a normally distributed random variable with a mean of 20 and a standard deviation of 5.

    4. humidity: The average humidity of the biosphere for that day, measured in percentage. It's uniformly distributed between 30% and 80%.

    5. light_exposure: The number of hours of light exposure for the plants on that day. It's uniformly distributed between 0 and 12 hours.

    6. soil_composition: The type of soil used in the biosphere, categorized as either 'sandy', 'clay', or 'loamy'.

    7. soil_nutrients: The normalized concentration of nutrients in the soil, ranging between 0 and 1. It's uniformly distributed.

    8. control_temperature: The control applied to the temperature on that day, as a normalized change. It's a normally distributed random variable with a mean of 0 and a standard deviation of 1.

    9. control_humidity: The control applied to the humidity on that day, as a normalized change. It's a normally distributed random variable with a mean of 0 and a standard deviation of 1.

    10. control_light_exposure: The control applied to the light exposure on that day, as a normalized change. It's a normally distributed random variable with a mean of 0 and a standard deviation of 1.

    11. control_soil_nutrients: The control applied to the soil nutrients on that day, as a normalized change. It's a normally distributed random variable with a mean of 0 and a standard deviation of 1.

    12. water_used: The amount of water used on that day, as a normalized quantity. It's an absolute value of a normally distributed random variable with a mean of 0 and a standard deviation of 1.

    13. power_used: The amount of power used on that day, as a normalized quantity. It's an absolute value of a normally distributed random variable with a mean of 0 and a standard deviation of 1.

    14. interaction_temp_light: A feature representing the interaction between temperature and light_exposure.

    15. interaction_humidity_nutrients: A feature representing the interaction between humidity and soil_nutrients.

    16. nonlinear_control_temp: A feature representing the square of control_temperature, capturing non-linear effects of temperature control.

    17. nonlinear_control_light: A feature representing the square root of the absolute value of control_light_exposure, capturing non-linear effects of light control.

    18. plant_growth: The cumulative growth score of the plant on that day, as a normalized value ranging from 0 to 1. The growth score is a complex function of the environmental conditions, control methods, resources used, and their interactions.
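
    As a quick illustration of the generating process described above, the following sketch simulates one day of one experiment in Python/NumPy, using only the stated distributions. The product form of the interaction features and the omitted growth score are assumptions: the description says only "interaction" and calls the growth function "complex" without publishing it.

        # Minimal sketch of one simulated day, reconstructed from the column
        # descriptions; the real generator (notably plant_growth) is not published.
        import numpy as np

        rng = np.random.default_rng(0)

        row = {
            "experiment_id": 0,                          # 0..9999
            "day": 0,                                    # 0..99
            "temperature": rng.normal(20, 5),            # deg C, N(20, 5)
            "humidity": rng.uniform(30, 80),             # percent, U(30, 80)
            "light_exposure": rng.uniform(0, 12),        # hours, U(0, 12)
            "soil_composition": rng.choice(["sandy", "clay", "loamy"]),
            "soil_nutrients": rng.uniform(0, 1),         # normalized, U(0, 1)
            "control_temperature": rng.normal(0, 1),     # N(0, 1)
            "control_humidity": rng.normal(0, 1),
            "control_light_exposure": rng.normal(0, 1),
            "control_soil_nutrients": rng.normal(0, 1),
            "water_used": abs(rng.normal(0, 1)),         # |N(0, 1)|
            "power_used": abs(rng.normal(0, 1)),
        }
        # Derived features; a simple product is assumed for "interaction":
        row["interaction_temp_light"] = row["temperature"] * row["light_exposure"]
        row["interaction_humidity_nutrients"] = row["humidity"] * row["soil_nutrients"]
        row["nonlinear_control_temp"] = row["control_temperature"] ** 2
        row["nonlinear_control_light"] = abs(row["control_light_exposure"]) ** 0.5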

  4. Semantic Similarity with Concept Senses: new Experiment

    • data.mendeley.com
    Updated Oct 24, 2022
    + more versions
    Cite
    Francesco Taglino (2022). Semantic Similarity with Concept Senses: new Experiment [Dataset]. http://doi.org/10.17632/v2bwh7z8kj.1
    Explore at:
    Dataset updated
    Oct 24, 2022
    Authors
    Francesco Taglino
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset represents the results of the experimentation of a method for evaluating semantic similarity between concepts in a taxonomy. The method is based on the information-theoretic approach and allows senses of concepts in a given context to be considered. Relevance of senses is calculated in terms of semantic relatedness with the compared concepts. In a previous work [9], the adopted semantic relatedness method was the one described in [10], while in this work we also adopted the ones described in [11], [12], [13], [14], [15], and [16].

    We applied our proposal by extending 7 methods for computing semantic similarity in a taxonomy, selected from the literature. The methods considered in the experiment are referred to as R[2], W&P[3], L[4], J&C[5], P&S[6], A[7], and A&M[8].

    The experiment was run on the well-known Miller and Charles benchmark dataset [1] for assessing semantic similarity.

    The results are organized in seven folders, each with the results for one of the above semantic relatedness methods. Each folder contains a set of files, each referring to one pair of the Miller and Charles dataset; for each pair of concepts, all 28 pairs are considered as possible different contexts.
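
    For readers unfamiliar with the information-theoretic family of measures extended here, the sketch below shows the classic Resnik [2] and Lin [4] formulations, assuming information content (IC) values for the compared concepts and their least common subsumer (LCS) are already available; the IC numbers in the example are illustrative placeholders, not values from this dataset.

        # Classic IC-based similarity measures (Resnik [2], Lin [4]).
        def resnik(ic_lcs: float) -> float:
            # Resnik: similarity is the IC of the least common subsumer.
            return ic_lcs

        def lin(ic_a: float, ic_b: float, ic_lcs: float) -> float:
            # Lin: shared information, normalized by the concepts' own IC.
            return 2.0 * ic_lcs / (ic_a + ic_b)

        # Hypothetical IC values for a pair of concepts:
        print(resnik(ic_lcs=6.34))                    # -> 6.34
        print(lin(ic_a=7.2, ic_b=7.0, ic_lcs=6.34))   # -> ~0.89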

    REFERENCES
    [1] Miller G.A., Charles W.G. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes 6(1).
    [2] Resnik P. 1995. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. Int. Joint Conf. on Artificial Intelligence, Montreal.
    [3] Wu Z., Palmer M. 1994. Verb semantics and lexical selection. 32nd Annual Meeting of the Association for Computational Linguistics.
    [4] Lin D. 1998. An Information-Theoretic Definition of Similarity. Int. Conf. on Machine Learning.
    [5] Jiang J.J., Conrath D.W. 1997. Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy. Int. Conf. Research on Computational Linguistics.
    [6] Pirrò G. 2009. A Semantic Similarity Metric Combining Features and Intrinsic Information Content. Data Knowl. Eng. 68(11).
    [7] Adhikari A., Dutta B., Dutta A., Mondal D., Singh S. 2018. An intrinsic information content-based semantic similarity measure considering the disjoint common subsumers of concepts of an ontology. J. Assoc. Inf. Sci. Technol. 69(8).
    [8] Adhikari A., Singh S., Mondal D., Dutta B., Dutta A. 2016. A Novel Information Theoretic Framework for Finding Semantic Similarity in WordNet. CoRR, arXiv:1607.05422.
    [9] Formica A., Taglino F. 2021. An Enriched Information-Theoretic Definition of Semantic Similarity in a Taxonomy. IEEE Access, vol. 9.
    [10] Information Content-based approach [Schuhmacher and Ponzetto, 2014].
    [11] Linked Data Semantic Distance (LDSD) [Passant, 2010].
    [12] Wikipedia Link-based Measure (WLM) [Witten and Milne, 2008].
    [13] Linked Open Data Description Overlap-based approach (LODDO) [Zhou et al., 2012].
    [14] Exclusivity-based [Hulpuş et al., 2015].
    [15] ASRMP [El Vaigh et al., 2020].
    [16] LDSDGN [Piao and Breslin, 2016].

  5. Data set on Task unpacking effects in time estimation: The role of future...

    • scidb.cn
    Updated Dec 1, 2023
    Cite
    Shizifu; xia bi qi; Liu Xin (2023). Data set on Task unpacking effects in time estimation: The role of future boundaries and thought focus [Dataset]. http://doi.org/10.57760/sciencedb.j00052.00202
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Dec 1, 2023
    Dataset provided by
    Science Data Bank
    Authors
    Shizifu; xia bi qi; Liu Xin
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset supports the study "Task unpacking effects in time estimation: the role of future boundaries and thought focus" and its supplementary materials. Previous research on the impact of task unpacking on time estimation often overlooked the role of time factors: for example, given the same unpacking, people subjectively set different time boundaries when facing difficult versus easy tasks. Taking the time factor into account should therefore refine and integrate the conclusions of research on unpacking effects. On this basis, we studied the impact of task unpacking and future boundaries on time estimation.

    Experiment 1 used a 2 (task: unpacked/not unpacked) × 2 (future boundary: present/absent) between-subjects design and measured participants' time estimates with a prospective paradigm. Experiment 2 additionally manipulated the time range of the future boundary, using a 2 (task: unpacked/not unpacked) × 3 (future boundary range: longer/medium/shorter) between-subjects design, again measuring time estimates with a prospective paradigm. Building on Experiment 2, Experiment 3 verified the mechanism by which the time range of the future boundary influences time estimation under unpacking conditions: in a single-factor between-subjects design, a thought-focus scale measured participants' thought focus under longer and shorter boundary conditions. These experiments and measurements yielded the following dataset.

    Experiment 1 table, column label meanings: task unpacking is a grouping variable (0 = unpacked; 1 = not unpacked); future boundary is a grouping variable (0 = present; 1 = absent); Zsco01 is the standard score of the estimated total task time; "A logarithm" is the logarithm of the estimated time for all tasks.

    Experiment 2 table, column label meanings: future boundary is a grouping variable (7 = shorter, 8 = medium, 9 = longer); the remaining labels are the same as in Experiment 1.

    Experiment 3 table, column label meanings: Zplan is the standard score of the focus-on-plans score; Zbar is the standard score of the focus-on-barriers score; future boundary is a grouping variable (0 = shorter, 1 = longer).
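
    As a small usage illustration, the grouping codes above can be mapped to readable labels with pandas; the file and column names below are placeholders, not the dataset's actual names.

        # Hypothetical decode of the Experiment 1 grouping codes described above.
        import pandas as pd

        df = pd.read_csv("experiment1.csv")  # placeholder file name
        df["unpacking"] = df["unpacking"].map({0: "unpacked", 1: "not unpacked"})
        df["future_boundary"] = df["future_boundary"].map({0: "present", 1: "absent"})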

  6. First ISCCP Regional Experiment (FIRE) Atlantic Stratocumulus Transition...

    • data.nasa.gov
    Updated Apr 1, 2025
    + more versions
    Cite
    nasa.gov (2025). First ISCCP Regional Experiment (FIRE) Atlantic Stratocumulus Transition Experiment (ASTEX) ECMWF Mean Velocity Data - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/first-isccp-regional-experiment-fire-atlantic-stratocumulus-transition-experiment-astex-ec-2fcd9
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    A special set of analysis products for the Atlantic Stratocumulus Transition Experiment (ASTEX) region during June 1-28, 1992 was prepared by Ernst Klinker and Tony Hollingsworth of the European Centre for Medium-range Forecasting (ECMWF), and reformatted by Chris Bretherton of Univ. of Washington. These analyses, or more correctly initializations and very short range forecasts using the ECMWF T213L30 operational model, incorporate routine observations from the global network and special soundings from ASTEX that were sent to ECMWF during ASTEX via the GTS telecommunication system. About 650 special soundings were incorporated, including nearly all soundings from Santa Maria, Porto Santo, and the French ship Le Suroit, most of the soundings taken on the Valdivia and Malcolm Baldridge, and almost none of the soundings from the Oceanus. Surface reports from the research ships were also incorporated into the analyses after the first week of the experiment. Aircraft soundings were not included in the analyses. ECMWF has requested that anyone making use of this data set acknowledge them, and that those investigators publishing research that makes more than casual use of this data set contact Ernst Klinker or Tony Hollingsworth.

    The data have been decoded by Chris Bretherton into ASCII files, one for each horizontal field at a given level and base time. All data have the same horizontal resolution of 1.25 degrees in latitude and longitude and correspond to base (initialization) times of 00, 06, 12, or 18Z. Different fields have different lat/lon ranges and sets of available vertical levels, as tabulated below. Also, some fields are instantaneous (I) while others are accumulated (A) over the first 6 hours of a forecast initialized at the base time. This is tabulated in the 'Time range' column below. Instantaneous fields are best compared with data at the base time, while accumulated fields are best compared with data three hours after the base time.

    Data Set Name   ECMWF field abbrev.   ECMWF ID#   Time range   Field                    Units
    -------------   -------------------   ---------   ----------   ----------------------   -----
    MEANW           MVV                   232         A            Mean vertical velocity   Pa/s
    (lat/lon range: 85W to 15E, 70N to 10N)
    (levels: 1010, 1000, 975, 950, 925, 900, 875, 850, 825, 800, 775, 750, 700, 650, 600, 550, 500, 400, 300, 200, 100 hPa)

    The ECMWF field abbreviation, ID#, field description and units are taken directly from ECMWF Code Table 2, in case you ever need to consult with ECMWF about this data set.
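
    The grid specification above is enough to size an array for one field. As a sanity check only, assuming inclusive endpoints on a regular 1.25-degree grid:

        # Grid dimensions implied by the stated lat/lon range and resolution.
        lon_min, lon_max = -85.0, 15.0   # 85W to 15E
        lat_min, lat_max = 10.0, 70.0    # 10N to 70N
        step = 1.25

        n_lon = int((lon_max - lon_min) / step) + 1   # 81 longitude points
        n_lat = int((lat_max - lat_min) / step) + 1   # 49 latitude points
        n_lev = 21                                    # pressure levels listed above

        print(n_lat, n_lon, n_lev)  # -> 49 81 21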

  7. HIRENASD Experimental Data, Static Cp Plots and Data files

    • data.nasa.gov
    • s.cnmilf.com
    • +2 more
    Updated Mar 31, 2025
    Cite
    nasa.gov (2025). HIRENASD Experimental Data, Static Cp Plots and Data files [Dataset]. https://data.nasa.gov/dataset/hirenasd-experimental-data-static-cp-plots-and-data-files
    Explore at:
    Dataset updated
    Mar 31, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    Tecplot (ASCII) and MATLAB files are posted here for the static pressure coefficient data sets. To download all of the data in either Tecplot or MATLAB format, go to https://c3.nasa.gov/dashlink/resources/485/. Please consult the documentation found on this page under Support/Documentation for information regarding variable definitions, data processing, etc.

  8. Dataset on the moderating role of self-construal on the watching-eyes effect...

    • data.mendeley.com
    Updated Jan 4, 2020
    + more versions
    Cite
    Ziye Wang (2020). Dataset on the moderating role of self-construal on the watching-eyes effect in prosociality [Dataset]. http://doi.org/10.17632/nx84xryt7b.1
    Explore at:
    Dataset updated
    Jan 4, 2020
    Authors
    Ziye Wang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The data are composed of datasets from four experiments, a meta-analysis, and a subgroup analysis. The total sample size was 481 participants. There are six Excel workbooks of datasets, each of which consists of two worksheets, for the database and a statement, respectively (refer to the ZIP file in Appendix A). The first four sheets are for the four experiments, respectively. In the sheet for each experiment, each row represents a participant. Note that the sheet also contains data for excluded participants, which are marked with gray shading. Each column represents one of the experimental variables, including age, gender, cues, self-construal, allocation amount (i.e., indicator of prosociality), perceived anonymity, etc. The last two sheets are for the meta-analysis and the subgroup analysis, respectively. The meta-analysis and the subgroup analysis used the same participants recruited in the four prior experiments. For the meta-analysis (see "5 Meta-analysis" in Appendix A for the database), the mean, standard deviation, and sample size of each experiment were extracted and organized into a single Excel sheet for further calculation. The rows indicate the experiments and the columns indicate related summaries, including the experiment number, sample size, mean, and standard deviation for the experimental (eye) condition, and sample size, mean, and standard deviation for the control condition. For the subgroup analysis (see "6 Subgroup analysis" in Appendix A), the participants of each experiment were further segmented into an independence subgroup and an interdependence subgroup according to the measurement or manipulation of self-construal. The mean, standard deviation, and sample size were then extracted and organized into a single Excel sheet for further calculation. The rows indicate the subgroups and the columns indicate related summaries, including subgroup number, sample size, mean, and standard deviation for the experimental (eye) condition, sample size, mean, and standard deviation for the control condition, and the subgroup assignment (i.e., 1 = independent self-construal; 2 = interdependent self-construal).

  9. The mean number of components used for the experimental data sets.

    • plos.figshare.com
    xls
    Updated Jun 3, 2023
    Cite
    Anna Telaar; Kristian Hovde Liland; Dirk Repsilber; Gerd Nürnberg (2023). The mean number of components used for the experimental data sets. [Dataset]. http://doi.org/10.1371/journal.pone.0055267.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Anna Telaar; Kristian Hovde Liland; Dirk Repsilber; Gerd Nürnberg
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The mean number of components used for the experimental data sets.

  10. Data from: Multi-task Deep Learning for Water Temperature and Streamflow...

    • catalog.data.gov
    Updated Nov 11, 2025
    Cite
    U.S. Geological Survey (2025). Multi-task Deep Learning for Water Temperature and Streamflow Prediction (ver. 1.1, June 2022) [Dataset]. https://catalog.data.gov/dataset/multi-task-deep-learning-for-water-temperature-and-streamflow-prediction-ver-1-1-june-2022
    Explore at:
    Dataset updated
    Nov 11, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    This item contains data and code used in experiments that produced the results for Sadler et al. (2022) (see below for the full reference). We ran five experiments for the analysis: Experiment A, Experiment B, Experiment C, Experiment D, and Experiment AuxIn. Experiment A tested multi-task learning for predicting streamflow with 25 years of training data and a different model for each of 101 sites. Experiment B tested multi-task learning for predicting streamflow with 25 years of training data and a single model for all 101 sites. Experiment C tested multi-task learning for predicting streamflow with just 2 years of training data. Experiment D tested multi-task learning for predicting water temperature with over 25 years of training data. Experiment AuxIn used water temperature as an input variable for predicting streamflow. These experiments and their results are described in detail in the WRR paper. Data from a total of 101 sites across the US were used for the experiments. The model input data and streamflow data were from the Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) dataset (Newman et al. 2014, Addor et al. 2017). The water temperature data were gathered from the National Water Information System (NWIS) (U.S. Geological Survey, 2016). The contents of this item are broken into 13 files or groups of files aggregated into zip files:

    1. input_data_processing.zip: A zip file containing the scripts used to collate the observations, input weather drivers, and catchment attributes for the multi-task modeling experiments
    2. flow_observations.zip: A zip file containing collated daily streamflow data for the sites used in multi-task modeling experiments. The streamflow data were originally accessed from the CAMELs dataset. The data are stored in csv and Zarr formats.
    3. temperature_observations.zip: A zip file containing collated daily water temperature data for the sites used in multi-task modeling experiments. The data were originally accessed via NWIS. The data are stored in csv and Zarr formats.
    4. temperature_sites.geojson: Geojson file of the locations of the water temperature and streamflow sites used in the analysis.
    5. model_drivers.zip: A zip file containing the daily input weather driver data for the multi-task deep learning models. These data are from the Daymet drivers and were collated from the CAMELS dataset. The data are stored in csv and Zarr formats.
    6. catchment_attrs.csv: Catchment attributes collated from the CAMELS dataset. These data are used for the Random Forest modeling. For full metadata regarding these data, see the CAMELS dataset.
    7. experiment_workflow_files.zip: A zip file containing workflow definitions used to run multi-task deep learning experiments. These are Snakemake workflows. To run a given experiment, one would run (for experiment A) 'snakemake -s expA_Snakefile --configfile expA_config.yml'
    8. river-dl-paper_v0.zip: A zip file containing python code used to run multi-task deep learning experiments. This code was called by the Snakemake workflows contained in 'experiment_workflow_files.zip'.
    9. random_forest_scripts.zip: A zip file containing Python code and a Python Jupyter Notebook used to prepare data for, train, and visualize feature importance of a Random Forest model.
    10. plotting_code.zip: A zip file containing python code and Snakemake workflow used to produce figures showing the results of multi-task deep learning experiments.
    11. results.zip: A zip file containing results of multi-task deep learning experiments. The results are stored in csv and netcdf formats. The netcdf files were used by the plotting libraries in 'plotting_code.zip'. These files are for five experiments, 'A', 'B', 'C', 'D', and 'AuxIn'. These experiment names are shown in the file name.
    12. sample_scripts.zip: A zip file containing scripts for creating sample output to demonstrate how the modeling workflow was executed.
    13. sample_output.zip: A zip file containing sample output data. Similar files are created by running the sample scripts provided.
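
    As a rough illustration of working with these archives, the sketch below pulls the first csv out of flow_observations.zip with pandas. The internal file name and table layout are assumptions; consult the item's metadata for the real schema.

        # Hypothetical peek at the collated streamflow csv in flow_observations.zip.
        import zipfile
        import pandas as pd

        with zipfile.ZipFile("flow_observations.zip") as zf:
            name = next(n for n in zf.namelist() if n.endswith(".csv"))
            with zf.open(name) as f:
                flow = pd.read_csv(f)

        print(flow.head())
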
    A. Newman; K. Sampson; M. P. Clark; A. Bock; R. J. Viger; D. Blodgett, 2014. A large-sample watershed-scale hydrometeorological dataset for the contiguous USA. Boulder, CO: UCAR/NCAR. https://dx.doi.org/10.5065/D6MW2F4D

    N. Addor, A. Newman, M. Mizukami, and M. P. Clark, 2017. Catchment attributes for large-sample studies. Boulder, CO: UCAR/NCAR. https://doi.org/10.5065/D6G73C3Q

    Sadler, J. M., Appling, A. P., Read, J. S., Oliver, S. K., Jia, X., Zwart, J. A., & Kumar, V. (2022). Multi-Task Deep Learning of Daily Streamflow and Water Temperature. Water Resources Research, 58(4), e2021WR030138. https://doi.org/10.1029/2021WR030138

    U.S. Geological Survey, 2016, National Water Information System data available on the World Wide Web (USGS Water Data for the Nation), accessed Dec. 2020.

  11. Dataset for: Experiment for validation of fluid-structure interaction models...

    • wiley.figshare.com
    zip
    Updated May 31, 2023
    Cite
    Andreas Hessenthaler; N Gaddum; Ondrej Holub; Ralph Sinkus; Oliver Röhrle; David Nordsletten (2023). Dataset for: Experiment for validation of fluid-structure interaction models and algorithms [Dataset]. http://doi.org/10.6084/m9.figshare.4141836.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    May 31, 2023
    Dataset provided by
    Wiley (https://www.wiley.com/)
    Authors
    Andreas Hessenthaler; N Gaddum; Ondrej Holub; Ralph Sinkus; Oliver Röhrle; David Nordsletten
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    In this paper, a fluid-structure interaction (FSI) experiment is presented. The aim of this experiment is to provide a challenging yet easy-to-set-up FSI test case that addresses the need for rigorous testing of FSI algorithms and modeling frameworks. Steady-state and periodic steady-state test cases with constant and periodic inflow were established. The focus of the experiment is on biomedical engineering applications, with flow in the laminar regime at Reynolds numbers of 1283 and 651. Flow and solid domains were defined using CAD tools. The experimental design aimed at providing a straightforward boundary condition definition. Material parameters and mechanical response of a moderately viscous Newtonian fluid and a nonlinear incompressible solid were experimentally determined. A comprehensive data set was acquired by employing magnetic resonance imaging to record the interaction between the fluid and the solid, quantifying flow and solid motion.

  12. Data from: Data release for mean random reflectance for products of hydrous...

    • catalog.data.gov
    • data.usgs.gov
    • +1 more
    Updated Oct 29, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Data release for mean random reflectance for products of hydrous pyrolysis experiments on artificial rock mixtures of humic Wyodak-Anderson coal (2018) [Dataset]. https://catalog.data.gov/dataset/data-release-for-mean-random-reflectance-for-products-of-hydrous-pyrolysis-experiments-on-
    Explore at:
    Dataset updated
    Oct 29, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Wyodak
    Description

    Mean random vitrinite reflectance (Ro) is the most widely accepted method to determine thermal maturity of coal and other sedimentary rocks. However, oil-immersion Ro of polished rock or kerogen samples is commonly lower than Ro values measured in samples from adjacent vitrinite-rich coals that have undergone the same level of thermal stress. So-called suppressed Ro values have also been observed in hydrous pyrolysis experiments designed to simulate petroleum formation. Various hypotheses to explain Ro suppression, such as sorption of products generated from liptinite during maturation, diagenetic formation of perhydrous vitrinite or overpressure, remain controversial. To experimentally test for suppression of vitrinite reflectance, artificial rock was prepared using silica and a calcined blend of limestone and clay with various proportions of thermally immature vitrinite-rich Wyodak-Anderson coal and liptinite-rich kerogen isolated from the oil-prone Parachute Creek Member of the Green River Formation. The samples were subjected to hydrous pyrolysis for 72 hr. at isothermal temperatures of 300 °C, 330 °C, and 350 °C to simulate burial maturation. Compared to artificial rock that contains only coal, samples with different proportions of oil-prone kerogen show distinct suppression of calibrated Ro at 300 °C and 330 °C. The reflectance of solid bitumen generated during heating of the samples is lower than that of the associated vitrinite and does not interfere with the Ro measurements. These results provide the first experimental evidence that Ro suppression occurs in vitrinite mixed with liptinite-rich kerogen in a rock matrix. Although the precise chemical mechanism for Ro suppression by liptinite remains unclear, free radicals generated from solid bitumen and associated volatile products during maturation of liptinite may contribute to termination reactions that slow the aromatization and rearrangement of polyaromatic sheets in vitrinite, thus suppressing Ro. This mechanism does not preclude Ro suppression that might result from overpressure or differences in redox conditions during diagenesis.

  13. Intelligent Building Agents Project Data

    • catalog.data.gov
    • data.nist.gov
    Updated Sep 30, 2023
    Cite
    National Institute of Standards and Technology (2023). Intelligent Building Agents Project Data [Dataset]. https://catalog.data.gov/dataset/intelligent-building-agents-project-data
    Explore at:
    Dataset updated
    Sep 30, 2023
    Dataset provided by
    National Institute of Standards and Technology (http://www.nist.gov/)
    Description

    The Intelligent Building Agents (IBA) project is part of the Embedded Intelligence in Buildings Program in the Engineering Laboratory at the National Institute of Standards and Technology (NIST). A key part of the IBA project is the IBA Laboratory (IBAL), a unique facility consisting of a mixed system of off-the-shelf equipment, including chillers and air handling units, controlled by a data acquisition system and capable of supporting building system optimization research under realistic and reproducible operating conditions.

    The database contains the values of approximately 300 sensors/actuators in the IBAL, including both sensor measurements and control actions, as well as approximately 850 process data points, which are typically related to control settings and decisions. Each sensor/actuator has associated metadata. The metadata, sensors/actuators, and process data are defined on the "metadata", "sensors", and "parameters" tabs in the definitions file. Data are collected every 10 s.

    The database contains two dashboards: 1) Experiments, to select data from individual experiments, and 2) Measurements, to select individual sensor/actuator and parameter data.

    The Experiments dashboard contains three sections. The "Experiment Data Plot" shows plots of the sensor/actuator data selected in the second section, "Experiment/Metadata". There are plots of both scaled and raw data (see the metadata file for the conversion from raw to scaled data). Underneath the plots is a "Download CSV" button; select it and a csv file of the data in the plot is automatically generated. In "Experiment/Metadata", first select an "Experiment" from the options in the table on the left. A specific experiment or type of experiment can be found by entering terms in the search box; for example, searching for the word "Charge" will bring up experiments in which the ice thermal storage tank is charged. The table of experiments also includes the duration of each experiment in minutes. Once an experiment is selected, specific sensor/actuator data points can be selected from the "Measurements" table on the right. These data can be filtered by subsystem (e.g., primary loop, secondary loop, Chiller1) and/or measurement type (e.g., pressure, flow, temperature); they will then be shown in the plots at the top. The final section, "Process", contains the process data, shown by subsystem. These data are not shown in the plots but can be downloaded by selecting the "Download CSV" button in the "Process" section.

    The Measurements dashboard contains three sections. The "Date Range" section is used to select the time range of the data. The "All Measurements" section is used to select specific sensor/actuator data; as in the Experiments dashboard, these data can be filtered by subsystem and/or measurement type. The scaled and raw values of the selected data are then plotted in the "Historical Data Plot" section. The "Download CSV" button underneath the plots automatically downloads the selected data.

  14. First ISCCP Regional Experiment (FIRE) Atlantic Stratocumulus Transition...

    • access.uat.earthdata.nasa.gov
    • s.cnmilf.com
    • +3 more
    c
    Updated Jul 6, 2018
    + more versions
    Cite
    (2018). First ISCCP Regional Experiment (FIRE) Atlantic Stratocumulus Transition Experiment (ASTEX) ECMWF Mean Velocity Data [Dataset]. http://doi.org/10.5067/ASDC_DAAC/FIRE/0026
    Explore at:
    Available download formats: c
    Dataset updated
    Jul 6, 2018
    Time period covered
    Jun 1, 1992 - Jun 28, 1992
    Area covered
    Description

    A special set of analysis products for the Atlantic Stratocumulus Transition Experiment (ASTEX) region during June 1-28, 1992 was prepared by Ernst Klinker and Tony Hollingsworth of the European Centre for Medium-range Forecasting (ECMWF), and reformatted by Chris Bretherton of Univ. of Washington. These analyses, or more correctly initializations and very short range forecasts using the ECMWF T213L30 operational model, incorporate routine observations from the global network and special soundings from ASTEX that were sent to ECMWF during ASTEX via the GTS telecommunication system. About 650 special soundings were incorporated, including nearly all soundings from Santa Maria, Porto Santo, and the French ship Le Suroit, most of the soundings taken on the Valdivia and Malcolm Baldridge, and almost none of the soundings from the Oceanus. Surface reports from the research ships were also incorporated into the analyses after the first week of the experiment. Aircraft soundings were not included in the analyses. ECMWF has requested that anyone making use of this data set acknowledge them, and that those investigators publishing research that makes more than casual use of this data set contact Ernst Klinker or Tony Hollingsworth.

    The data have been decoded by Chris Bretherton into ASCII files, one for each horizontal field at a given level and base time. All data have the same horizontal resolution of 1.25 degrees in latitude and longitude and correspond to base (initialization) times of 00, 06, 12, or 18Z. Different fields have different lat/lon ranges and sets of available vertical levels, as tabulated below. Also, some fields are instantaneous (I) while others are accumulated (A) over the first 6 hours of a forecast initialized at the base time. This is tabulated in the 'Time range' column below. Instantaneous fields are best compared with data at the base time, while accumulated fields are best compared with data three hours after the base time.

    Data Set Name   ECMWF field abbrev.   ECMWF ID#   Time range   Field                    Units
    -------------   -------------------   ---------   ----------   ----------------------   -----
    MEANW           MVV                   232         A            Mean vertical velocity   Pa/s
    (lat/lon range: 85W to 15E, 70N to 10N)
    (levels: 1010, 1000, 975, 950, 925, 900, 875, 850, 825, 800, 775, 750, 700, 650, 600, 550, 500, 400, 300, 200, 100 hPa)

    The ECMWF field abbreviation, ID#, field description and units are taken directly from ECMWF Code Table 2, in case you ever need to consult with ECMWF about this data set.

  15. NASA Earthdata

    • earthdata.nasa.gov
    • search.dataone.org
    • +6 more
    Updated Oct 30, 2008
    + more versions
    Cite
    ORNL_CLOUD (2008). NASA Earthdata [Dataset]. http://doi.org/10.3334/ORNLDAAC/895
    Explore at:
    Dataset updated
    Oct 30, 2008
    Dataset authored and provided by
    ORNL_CLOUD
    Description

    The Atmospheric Tracer Transport Model Intercomparison Project (TransCom) was created to quantify and diagnose the uncertainty in inversion calculations of the global carbon budget that results from errors in simulated atmospheric transport, the choice of measured atmospheric carbon dioxide data used, and the inversion methodology employed. Under the third phase of TransCom (TransCom 3), surface-atmosphere CO2 fluxes were estimated from an intercomparison of 16 different atmospheric tracer transport models and model variants in order to assess the contribution of uncertainties in transport to the uncertainties in flux estimates for annual mean, seasonal cycle, and interannual inversions (referred to as Level 1, 2, and 3 experiments, respectively).

    This data set provides the model output and inversion results for the TransCom 3, Level 1 annual mean inversion experiments. Annual mean CO2 concentration data (GLOBALVIEW-CO2, 2000) were used to estimate CO2 sources. The annual average fluxes were estimated for the 1992-1996 period using each of the 16 transport models and a common inversion set-up (Gurney et al., 2002). Methodological choices for this control inversion were selected on the basis of knowledge gained from a wide range of sensitivity tests (Law et al., 2003). Gurney et al. (2003) present results from the control inversion for individual models as well as results from a number of sensitivity tests related to the specification of prior flux information.

    Additional information about the experimental protocol and results is provided in the companion files and the TransCom project web site (http://www.purdue.edu/transcom/index.php).

    The results of the Level 1 experiments presented here are grouped into two broad categories: forward simulation fields and response functions (model output) and estimated fluxes (inversion results).

  16. Experimental Data for Question Classification

    • kaggle.com
    zip
    Updated Jan 9, 2019
    Cite
    JunYu (2019). Experimental Data for Question Classification [Dataset]. https://www.kaggle.com/owen1226/textsdata
    Explore at:
    Available download formats: zip (127653 bytes)
    Dataset updated
    Jan 9, 2019
    Authors
    JunYu
    Description

    Context

    This data collection contains all the data used in our learning question classification experiments: question class definitions, the training and testing question sets, examples of preprocessing the questions, feature definition scripts, and examples of semantically related word features.

    Content

    ABBR - 'abbreviation': expression abbreviated, etc.
    DESC - 'description and abstract concepts': manner of an action, description of sth., etc.
    ENTY - 'entities': animals, colors, events, food, etc.
    HUM - 'human beings': a group or organization of persons, an individual, etc.
    LOC - 'locations': cities, countries, etc.
    NUM - 'numeric values': postcodes, dates, speed, temperature, etc.
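
    A minimal sketch of parsing these labels follows, assuming the TREC-style layout used by the linked cogcomp source, where each line reads "COARSE:fine question text"; that layout is an assumption, since the exact file format is not described above.

        # Hypothetical parse of a line like "NUM:date When was the telephone invented ?".
        coarse_labels = {"ABBR", "DESC", "ENTY", "HUM", "LOC", "NUM"}

        def parse_line(line: str):
            label, _, question = line.strip().partition(" ")
            coarse, _, fine = label.partition(":")
            assert coarse in coarse_labels, f"unknown class: {coarse}"
            return coarse, fine, question

        print(parse_line("NUM:date When was the telephone invented ?"))
        # -> ('NUM', 'date', 'When was the telephone invented ?')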

    Acknowledgements

    https://cogcomp.seas.upenn.edu/Data/QA/QC/ https://github.com/Tony607/Keras-Text-Transfer-Learning/blob/master/README.md

  17. Pallid sturgeon free embryo drift and dispersal experiment data from the...

    • datasets.ai
    • data.usgs.gov
    • +1 more
    55
    Updated Jun 1, 2023
    + more versions
    Cite
    Department of the Interior (2023). Pallid sturgeon free embryo drift and dispersal experiment data from the Upper Missouri River, Montana and North Dakota, 2019: Water temperature and discharge data [Dataset]. https://datasets.ai/datasets/pallid-sturgeon-free-embryo-drift-and-dispersal-experiment-data-from-the-upper-missouri-ri-1195e
    Explore at:
    Available download formats: 55
    Dataset updated
    Jun 1, 2023
    Dataset authored and provided by
    Department of the Interior
    Area covered
    Missouri River, North Dakota, Montana
    Description

    In 2019, an experimental release of nearly 1.0 million free embryos of the federally endangered pallid sturgeon (Scaphirhynchus albus) was conducted in the Missouri River of eastern Montana and western North Dakota. Dispersal of the free embryos and survival of the benthic larvae were assessed from 20190701 to 20190909 through 150 miles of the river. The data sets contain mean daily discharge and hourly water temperature data for the Missouri River, and mean daily discharge data for the Yellowstone River. Dates of the data extend from 20190701 to 20190909. Discharge data for the Missouri River were obtained from USGS gage 06177000 near Wolf Point, Montana. Discharge data for the Yellowstone River were obtained from USGS gage 06329500 near Sidney, Montana. Water temperature data for the Missouri River were recorded by water temperature loggers positioned near Poplar, Montana; Culbertson, Montana; the Montana-North Dakota state border; and downstream from the Yellowstone River confluence.

  18. Data from: Dataset of Experimental Investigations of a Full-Scale Louvre...

    • data.niaid.nih.gov
    Updated Jan 27, 2025
    Cite
    Bugenings, Laura Annabelle (2025). Dataset of Experimental Investigations of a Full-Scale Louvre Element [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_14614813
    Explore at:
    Dataset updated
    Jan 27, 2025
    Dataset provided by
    Aarhus University
    Authors
    Bugenings, Laura Annabelle
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the raw data and the processed results of the experimental investigations of a full-scale louvre element conducted in September 2024 in the laboratories of the Department of Civil and Architectural Engineering, Aarhus University.

    The dataset is structured as follows:

    Dataset
    ¦   Result summary.xlsx
    ¦
    +---WHS 3ACH
    ¦       WHS_3ACH_CS_velocity.csv
    ¦       WHS_3ACH_DL1_temperature.csv
    ¦       WHS_3ACH_DL2_temperature.csv
    ¦       WHS_3ACH_DL3_temperature.csv
    ¦       WHS_3ACH_DL4_temperature.csv
    ¦       WHS_3ACH_flow_meter.csv
    ¦       WHS_3ACH_VIVO_velocity.csv
    ¦
    +---WHS 5ACH
    ¦       WHS_5ACH_CS_velocity.csv
    ¦       WHS_5ACH_DL1_temperature.csv
    ¦       WHS_5ACH_DL2_temperature.csv
    ¦       WHS_5ACH_DL3_temperature.csv
    ¦       WHS_5ACH_DL4_temperature.csv
    ¦       WHS_5ACH_flow_meter.csv
    ¦       WHS_5ACH_VIVO_velocity.csv
    ¦
    +---WHS 7ACH
    ¦       WHS_7ACH_CS_velocity.csv
    ¦       WHS_7ACH_DL1_temperature.csv
    ¦       WHS_7ACH_DL2_temperature.csv
    ¦       WHS_7ACH_DL3_temperature.csv
    ¦       WHS_7ACH_DL4_temperature.csv
    ¦       WHS_7ACH_flow_meter.csv
    ¦       WHS_7ACH_VIVO_velocity.csv
    ¦
    +---WOHS 3ACH
    ¦       WOHS_3ACH_CS_velocity.csv
    ¦       WOHS_3ACH_DL1_temperature.csv
    ¦       WOHS_3ACH_DL2_temperature.csv
    ¦       WOHS_3ACH_DL3_temperature.csv
    ¦       WOHS_3ACH_DL4_temperature.csv
    ¦       WOHS_3ACH_flow_meter.csv
    ¦       WOHS_3ACH_VIVO_velocity.csv
    ¦
    +---WOHS 5ACH
    ¦       WOHS_5ACH_CS_velocity.csv
    ¦       WOHS_5ACH_DL1_temperature.csv
    ¦       WOHS_5ACH_DL2_temperature.csv
    ¦       WOHS_5ACH_DL3_temperature.csv
    ¦       WOHS_5ACH_DL4_temperature.csv
    ¦       WOHS_5ACH_flow_meter.csv
    ¦       WOHS_5ACH_VIVO_velocity.csv
    ¦
    +---WOHS 7ACH
            WOHS_7ACH_CS_velocity.csv
            WOHS_7ACH_DL1_temperature.csv
            WOHS_7ACH_DL2_temperature.csv
            WOHS_7ACH_DL3_temperature.csv
            WOHS_7ACH_DL4_temperature.csv
            WOHS_7ACH_flow_meter.csv
            WOHS_7ACH_VIVO_velocity.csv

    The result summary contains 8 sheets with the following information:

    Overview:
    Measurement cases with target flow rate and heat source presence.
    The date of the experiment and the time period over which the data were averaged for the processed results.
    The allocation of the thermocouples to the dataloggers.
    The sensor locations on the stands (temperature and velocity).
    The sensor locations on the surfaces (temperature).
    The sensors used for the ice point references.
    The sensors used in the anteroom.
    The sensors used for the heat source.
    Graphical representation of the sensor locations and the room.

    Calibration curves:
    Calibration curves for all thermocouples according to datalogger.

    WOHS/WHS:
    Mean temperature according to sensor, datalogger, location, height.
    Standard deviation of temperature according to sensor, datalogger, location, height.
    Mean velocity according to sensor, datalogger, location, height.
    Standard deviation of velocity according to sensor, datalogger, location, height.
    Turbulence intensity.
    u_u0: mean velocity at sensor / mean velocity at flow meter.
    Mean temperature at flow meter.
    Mean velocity at flow meter.
    Mean flow rate at flow meter.

    Files with the ending _temperature.csv contain the following:
    Column 1 (datetime): date and time in ISO 8601 format (YYYY-MM-DDThh:mm:ssZ)
    Column 2 (sensorname): temperature at sensor in °C

    Files with the ending _flow_meter contain the following:
    Column 1 (datetime): date and time in ISO 8601 format (YYYY-MM-DDThh:mm:ssZ)
    Column 2 (velocity): velocity at flow meter in m/s
    Column 3 (exhaust_temperature): temperature at flow meter in °C
    Column 4 (flow_rate): flow rate at flow meter in m3/h

    Files with the ending _CS_velocity.csv (CS stands for the comfort sense sensors) contain the following:
    Column 1 (datetime): date and time in ISO 8601 format (YYYY-MM-DDThh:mm:ssZ)
    Columns 2-17 (sensorname): velocity at sensor in m/s

    Files with the ending _VIVO_velocity.csv contain the following:
    Column 1 (datetime): date and time in ISO 8601 format (YYYY-MM-DDThh:mm:ssZ)
    Columns 2-7 (sensorname): velocity at sensor in m/s

    Note that for the VIVO system, each sensor logged its results individually, which means measurements do not share the same time stamps. This leads to NA entries.
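
    Given the column layouts above, a minimal pandas read of one temperature file and one VIVO velocity file might look like the sketch below. The file names come from the tree above; the presence of a header row is an assumption.

        # Minimal read of the per-case csv files described above.
        import pandas as pd

        temp = pd.read_csv("WHS_3ACH_DL1_temperature.csv",
                           parse_dates=["datetime"])   # temperature in deg C

        vivo = pd.read_csv("WHS_3ACH_VIVO_velocity.csv",
                           parse_dates=["datetime"])   # velocities in m/s
        # VIVO sensors log on independent time stamps (hence the NA entries);
        # resampling onto a common time base is one way to align them.
        vivo_1s = vivo.set_index("datetime").resample("1s").mean()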

  19. Eye movement data set of Internet fraud, Shandong Normal University,...

    • scidb.cn
    Updated Apr 17, 2025
    Cite
    Shang Yuxi; Li Hugo; Su Wei; Lin Jiayu; Zhang Kaihua (2025). Eye movement data set of Internet fraud, Shandong Normal University, 2022-2025 [Dataset]. http://doi.org/10.57760/sciencedb.psych.00598
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Apr 17, 2025
    Dataset provided by
    Science Data Bank
    Authors
    Shang Yuxi; Li Hugo; Su Wei; Lin Jiayu; Zhang Kaihua
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Area covered
    Shandong
    Description

    Behavioral data were collected while the subjects performed the eye movement experiment. The experiment was split into four sessions, counterbalanced with a Latin square, so the behavioral data are likewise split into four parts. Eye movements were recorded with an EyeLink 1000 (SR Research, Ontario, Canada) eye tracker at a sampling frequency of 1000 Hz and a spatial resolution of 0.2 degrees RMS. The screen resolution of the test machine was 1024 × 768 pixels, and the distance between the subjects' eyes and the screen was 60 cm. Experiment Builder 2.2.1 was used to program the experimental flow.

    The experiment adopted a three-factor mixed design: 2 (subject category: victims of Internet fraud, those who have not been subjected to Internet fraud) × 2 (time pressure: with, without) × 2 (cognitive load: with, without). The independent variables were subject category (between subjects), time pressure, and cognitive load (both within subjects); the dependent variables were eye movement indices.

    The experiment comprised a pre-experiment and a formal experiment. The pre-experiment determined the time required to complete the risk decision options; the mean response time and its standard deviation were used to set the time limit for the time pressure condition. The formal experiment required subjects to complete the risk decision task under four conditions: no time pressure without cognitive load, no time pressure with cognitive load, time pressure without cognitive load, and time pressure with cognitive load. The four conditions were run in four sessions, one week apart. Each session contained 4 tasks (i.e., 4 material presentation sequences), and each task consisted of 32 trials, divided into two blocks, with rest between tasks and between blocks.

    Without time pressure, participants were asked to press a button as quickly as possible to select their preferred option in the risk decision task. Under time pressure, the procedure was the same, except that the presentation time of the risk decision picture was limited to the pre-experiment's mean response time minus one standard deviation, and subjects were told in the instructions that the presentation time was limited. Without cognitive load, subjects performed only the risk decision task. Under cognitive load, subjects performed the risk decision task and a cognitive load task together: they had to remember five numbers shown on the screen before the decision task started; after the decision task, a tone sounded, and if the tone was low-frequency they reported the three smallest numbers from memory, while if it was high-frequency they reported the three largest. The order of the risk decision tasks across the four conditions was counterbalanced between subjects using a Latin square.

    The eye movement data set was produced as follows: (1) Each subject became familiar with the environment after entering the laboratory and then performed eye calibration (nine-point calibration) to ensure that the instrument accurately recorded the subject's gaze. (2) The instructions were presented: "Here is a decision-making task, and the decision scenario is buying a lottery ticket. The actual results of each lottery purchase are automatically loaded into your virtual account, and the total amount in your account is related to how much you are paid for the task. Two options, A and B, will appear on the screen, each containing an amount of money and the probability of losing during the gamble. If you select A, press F; if you select B, press J. There are no right or wrong answers and no time limit for choosing. Please answer carefully." (3) After ensuring that the subject understood the instructions, the instrument was calibrated; only the right eye's movements were recorded. (4) After calibration, subjects practiced. (5) To ensure the validity of the experimental data, calibration was performed twice before each experiment, with a maximum allowable deviation of 0.2°. (6) The formal experiment began, with the experimenter monitoring the subject's gaze in real time. Before each stimulus, a plus sign appeared in the middle of the screen; subjects pressed the space bar while fixating the plus sign to present the stimulus options, and the next stimulus was presented automatically after a selection (under time pressure, the next stimulus was also presented automatically if no choice was made by the deadline). The whole experiment lasted about 40 minutes.

    Key eye-tracking metrics collected include reaction time, mean fixation time, total fixation count, mean saccade distance, saccade regressions, and mean pupil size. SPSS 26 was used for subsequent data analysis, including descriptive statistics of the behavioral and eye movement data, independent-samples t-tests, and three-factor repeated-measures ANOVA. Finally, the correlation between behavioral data and eye movement data was calculated for each group. The significance level α was set to 0.05 for all statistical analyses.

  20. Data from USDA ARS Central Plains Experimental Range (CPER) near Nunn, CO:...

    • agdatacommons.nal.usda.gov
    • catalog.data.gov
    application/csv
    Updated Nov 21, 2025
    Cite
    Justin D Derner; Mary Ashby; David J. Augustine; Melissa Johnston; Tamarah (Tami) Jorns; Matt Mortenson; Jake Thomas; Jeff Thomas (2025). Data from USDA ARS Central Plains Experimental Range (CPER) near Nunn, CO: Cattle weight gains managed with light, moderate and heavy grazing intensities [Dataset]. http://doi.org/10.15482/USDA.ADC/1528520
    Explore at:
    Available download formats: application/csv
    Dataset updated
    Nov 21, 2025
    Dataset provided by
    Ag Data Commons
    Authors
    Justin D Derner; Mary Ashby; David J. Augustine; Melissa Johnston; Tamarah (Tami) Jorns; Matt Mortenson; Jake Thomas; Jeff Thomas
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Area covered
    Colorado, Nunn
    Description

    The USDA-Agricultural Research Service Central Plains Experimental Range (CPER) is a Long-Term Agroecosystem Research (LTAR) network site located ~20 km northeast of Nunn, in north-central Colorado, USA. In 1939, scientists established the Long-term Grazing Intensity study (LTGI) with four replications of light, moderate, and heavy grazing. Each replication had three 129.5 ha pastures with the grazing intensity treatment randomly assigned. Today, one replication remains. Light grazing occurs in pasture 23W (9.3 Animal Unit Days (AUD)/ha, targeted for 20% utilization of peak growing-season biomass), moderate grazing in pasture 15E (12.5 AUD/ha, 40% utilization), and heavy grazing in pasture 23E (18.6 AUD/ha, 60% utilization). British- and continental-breed yearling cattle graze the pastures season-long from mid-May to October, except when forage limitations shorten the grazing season. Individual raw data on cattle entry and exit weights, as well as weights every 28 days during the grazing season, are available from 2000 to 2019. Cattle entry and exit weights are included in this dataset. Weight outliers (± 2 SD) are flagged for calculating summary statistics or performing statistical analysis.

    Resources in this dataset:
    Resource Title: Data Dictionary for LTGI Cattle weights on CPER (2000-2019). File Name: LTGI_2000-2019_data_dictionary.csv. Resource Description: Data dictionary for data from USDA ARS Central Plains Experimental Range (CPER) near Nunn, CO: cattle weight gains managed with light, moderate and heavy grazing intensities.
    Resource Title: LTGI Cattle weights on CPER (2000-2019). File Name: LTGI_2000-2019_all_weights_published.csv. Resource Description: Data from USDA ARS Central Plains Experimental Range (CPER) near Nunn, CO: cattle weight gains managed with light, moderate and heavy grazing intensities.
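
    Since outliers are flagged rather than removed, a rough pandas sketch of reproducing a ± 2 SD flag is shown below; the weight column name is a placeholder (check LTGI_2000-2019_data_dictionary.csv for the published schema).

        # Hypothetical +/- 2 SD outlier flag for cattle weights.
        import pandas as pd

        df = pd.read_csv("LTGI_2000-2019_all_weights_published.csv")

        mean, sd = df["weight"].mean(), df["weight"].std()  # column name assumed
        df["outlier_flag"] = (df["weight"] - mean).abs() > 2 * sd

        # Summary statistics on non-flagged weights only:
        print(df.loc[~df["outlier_flag"], "weight"].describe())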
