4 datasets found
  1. Human and Rat Behavioral Data in Auditory Parametric Working Memory Tasks

    • figshare.com
    txt
    Updated May 31, 2023
    Cite
    Athena Akrami; Carlos D. Brody (2023). Human and Rat Behavioral Data in Auditory Parametric Working Memory Tasks [Dataset]. http://doi.org/10.6084/m9.figshare.12213671.v1
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Athena Akrami; Carlos D. Brody
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Large behavioral dataset (>2 million trials) from the first-ever rodent Auditory Parametric Working Memory task, initially presented in Akrami et al. 2018 [1]. The dataset contains behavior for 19 rats across a combined 2,540,006 trials, and 11 human subjects across 9,507 trials.

    README for human auditory behavioral data

    The accompanying file human_auditory.csv contains all behavioral data from the human auditory task, first presented in Akrami et al. 2018 [1]. The data set contains behavior for 11 human subjects across a combined 9,507 trials. Data was collected from each subject within a single session. Relevant portions from the Methods section of the paper ("Human subjects (auditory)" and "Human auditory behavior") are reproduced below:

    "11 human subjects (8 males and 3 females, aged 22–40) were tested and all gave their informed consent. Participants were paid to be part of the study and were naive to the main conclusions of the study. The consent procedure and the rest of the protocol were approved by the Princeton University Institutional Review Board."

    "In this experiment, subjects received, in each trial, a pair of sounds played from ear-surrounding noise-cancelling headphones (brand 233621-H501). The subject self-initiated each trial by pressing the space bar on the keyboard. The first sound was then presented together with a green square on the left side of a computer monitor in front of the subject. This was followed by a delay period, indicated by ‘WAIT!’ on the screen, then the second sound was presented together with a red square on the right side of the screen. At the end of the second stimulus and after the go cue, subjects were required to compare the two sounds and decide which one was louder, then indicate their choice by pressing the ‘k’ key with their right hand (second was louder) or the ‘s’ key with their left hand (first was louder). Written feedback about the correctness of their response was provided on the screen, for individual trials as well as the average performance updated every ten trials."

    Data description, by column:
    1) subject_id: The ID of the human subject (11 total subjects, #1-11)
    2) trial: The trial number
    3) stim_pair: The ID of the stimulus pair used in the trial. There are 10 different pairs used. Pairs 1-5 have stimulus A > B, and so are rewarded on the right side; pairs 6-10 have stimulus A < B, and so are rewarded on the left side. The values of stimuli A and B corresponding to each pair can be inferred from the "s_a" and "s_b" columns, and are also written explicitly below (see Extended Data Figures 1E and 3B [1])
    4) s_a: The loudness of stimulus A (pink noise), in decibels (dB)
    5) s_b: The loudness of stimulus B (pink noise), in decibels (dB)
    6) choice: The choice made by the subject, where Left=0 and Right=1
    7) correct_side: The rewarded (correct) side, where Left=0 and Right=1
    8) reward: If the subject was rewarded (made the correct choice), where No Reward=0 and Reward=1
    9) delay: The delay, in seconds, between the presentation of stimuli A and B (either 2, 4, 6, or 8 seconds)

    Stimulus Pairs:
    1: (62.7, 60); 2: (65.4, 62.7); 3: (68.1, 65.4); 4: (70.8, 68.1); 5: (73.5, 70.8);
    6: (60, 62.7); 7: (62.7, 65.4); 8: (65.4, 68.1); 9: (68.1, 70.8); 10: (70.8, 73.5)

    README for rat behavioral data

    The accompanying file rat_behavior.csv contains all behavioral data from the rat auditory task, first presented in Akrami et al. 2018 [1]. The data set contains behavior for 19 rats across a combined 2,540,006 trials. Data from rats trained on "delay" intervals longer than 8 seconds are omitted, as this is an area of continuing research. Relevant portions from the Methods section of the paper ("Rat subjects" and "Rat behavior") are reproduced below:

    "A total of 33 male Long–Evans rats (Rattus norvegicus) between the ages of 6 and 24 months were used for this study. Of these, 25 rats were used for behavioural assessments ... Animal use procedures were approved by the Princeton University Institutional Animal Care and Use Committee and carried out in accordance with National Institutes of Health standards."

    Data description, by column:
    1) subject_id: The name of the rat (19 total subjects)
    2) session: The session number, where 1 is the first session of training
    3) trial: The trial number within a session, where 1 is the first trial
    4) s_a: The loudness of stimulus A (pink noise) in decibels (dB) (NaN during training_stage 1)
    5) s_b: The loudness of stimulus B (pink noise) in decibels (dB) (NaN during training_stage 1)
    6) choice: The choice made by the rat, where Left=0 and Right=1 (and Mistrial=NaN)
    7) correct_side: The correct side, where Left=0 and Right=1
    8) hit: If the rat made the correct choice, where Incorrect=0 and Correct=1 (and Mistrial=NaN)
    9) delay: The duration of the delay between the end of Tone A and the start of Tone B, in seconds. See the example trial timeline below for more information.
    10) training_stage: Index indicating the stage of training (1-4, see below)

    Timeline of an example trial during the final stage (4) of training:
    1) A light in the center port indicates that a new trial can be initiated, at which point the rat can nose-poke in the center port
    2) After the start of the nose-poke, there is a 0.25 sec delay before the start of Tone A
    3) Tone A plays for 0.4 sec
    4) There is a variable delay (recorded in the "delay" column in the CSV file)
    5) Tone B plays for 0.4 sec
    6) After the end of Tone B, there is a 0.25 sec delay before the Go cue is played
    7) A Go cue plays for 0.2 sec

    The rat is free to withdraw from the center port and make a choice at the start of the Go cue. The total duration of the center nose-poke on each trial is thus 0.25 + 0.4 + "delay" + 0.4 + 0.25, or 1.3 + "delay" seconds. Any break of center nose-poke during this time (steps 2-6) results in a Mistrial. During training_stage 1 (ShapingStage), no tone is played.

    training_stage descriptions:
    1) ShapingStage: chasing lights, no tones, building center nose-poke; s_a and s_b are NaN
    2) ImmediateReward: s_a and s_b are played, and reward is delivered according to the rule (s_a > s_b -> Right, s_b > s_a -> Left) independent of the animal's choice; after a wrong choice, the animal can immediately collect the reward from the other side.
    3) DelayedReward: the standard task, except the wrong side will still deliver a reward after a 1-5 sec delay
    4) NoReward: the standard task

    Notes:
    - Two types of trials were omitted from the dataset due to anomalies: (1) 14 trials were omitted for having anomalous stimulus values, and (2) 26 trials were omitted for having inconsistent correct_side values. These omissions are reflected in the data set as gaps in the "trial" count.
    - There are 256,691 trials that are Mistrials, where both "choice" and "hit" are NaN. These are trials where the rat did not complete the trial for various reasons (e.g. it broke center fixation early). Also, at some point after session 100 (differs for each rat), mistrials are no longer included in the data.
    - For some rats, there are a few sessions late in training where training_stage reverts back to 1. These are rats that suddenly started to do poorly and had their training temporarily reverted back to the initial shaping stages.

    [1] Akrami, A., Kopec, C.D., Diamond, M.E. and Brody, C.D., 2018. Posterior parietal cortex represents sensory history and mediates its effects on behaviour. Nature, 554(7692), pp.368-372.
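    As a quick-start sketch (not part of the deposit), the two CSVs described above can be loaded and summarised with pandas. The column names follow the README; the local file paths are assumptions, so download the files from the figshare DOI above first:

    ```python
    import pandas as pd

    # Assumed local paths; column names are taken from the README above.
    human = pd.read_csv("human_auditory.csv")
    rat = pd.read_csv("rat_behavior.csv")

    # Human task: fraction of rewarded (correct) trials per delay (2, 4, 6, or 8 s).
    print(human.groupby("delay")["reward"].mean())

    # Rat task: drop Mistrials (choice and hit are NaN) and keep the final
    # training stage (4, NoReward), then compute per-rat accuracy.
    completed = rat.dropna(subset=["choice", "hit"])
    stage4 = completed[completed["training_stage"] == 4]
    print(stage4.groupby("subject_id")["hit"].mean())

    # Per the trial timeline above, total center nose-poke time is 1.3 s + delay.
    stage4 = stage4.assign(nose_poke_s=1.3 + stage4["delay"])
    ```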

  2. Data from: Approach-induced biases in human information sampling

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Jan 5, 2017
    Cite
    Laurence T. Hunt; Robb B. Rutledge; W. M. Nishantha Malalasekera; Steven W. Kennerley; Raymond J. Dolan (2017). Approach-induced biases in human information sampling [Dataset]. http://doi.org/10.5061/dryad.nb41c
    Dataset provided by
    University College London
    Authors
    Laurence T. Hunt; Robb B. Rutledge; W. M. Nishantha Malalasekera; Steven W. Kennerley; Raymond J. Dolan
    License

    CC0 1.0: https://spdx.org/licenses/CC0-1.0.html

    Description

    Information sampling is often biased towards seeking evidence that confirms one’s prior beliefs. Despite such biases being a pervasive feature of human behavior, their underlying causes remain unclear. Many accounts of these biases appeal to limitations of human hypothesis testing and cognition, de facto evoking notions of bounded rationality, but neglect more basic aspects of behavioral control. Here, we investigated a potential role for Pavlovian approach in biasing which information humans will choose to sample. We collected a large novel dataset from 32,445 human subjects who made over 3 million decisions while playing a gambling task designed to measure the latent causes and extent of information-sampling biases. We identified three novel approach-related biases, formalized by comparing subject behavior to a dynamic programming model of optimal information gathering. These biases reflected the amount of information sampled (“positive evidence approach”), the selection of which information to sample (“sampling the favorite”), and the interaction between information sampling and subsequent choices (“rejecting unsampled options”). The prevalence of all three biases was related to a Pavlovian approach-avoid parameter quantified within an entirely independent economic decision task. Our large dataset also revealed that individual differences in the amount of information gathered are a stable trait across multiple gameplays and can be related to demographic measures, including age and educational attainment. As well as revealing limitations in cognitive processing, our findings suggest information sampling biases reflect the expression of primitive, yet potentially ecologically adaptive, behavioral repertoires. One such behavior is sampling from options that will eventually be chosen, even when other sources of information are more pertinent for guiding future action.
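    To make the modelling benchmark concrete, here is a minimal, self-contained dynamic-programming sketch of optimal information gathering, in the spirit of (but not identical to) the model the authors compare behavior against. The sample reliability Q, the per-sample cost COST, and the Bernoulli belief-update rule are all illustrative assumptions, not the paper's actual task parameters:

    ```python
    # Illustrative only: backward-induction value of sampling in a two-hypothesis
    # task. Each sample costs COST and is Bernoulli(Q) under H1, Bernoulli(1-Q)
    # under H0; stopping yields the probability of guessing correctly.
    from functools import lru_cache

    Q = 0.7      # sample reliability (assumed)
    COST = 0.02  # cost per sample (assumed)

    def posterior(p, heads):
        """Bayesian update of the belief p that H1 is true after one sample."""
        like1 = Q if heads else 1 - Q
        like0 = (1 - Q) if heads else Q
        return like1 * p / (like1 * p + like0 * (1 - p))

    @lru_cache(maxsize=None)
    def value(p, samples_left):
        stop = max(p, 1 - p)  # expected reward for guessing the likelier hypothesis now
        if samples_left == 0:
            return stop
        p_heads = Q * p + (1 - Q) * (1 - p)  # marginal probability of a "heads" sample
        sample = -COST + (
            p_heads * value(round(posterior(p, True), 6), samples_left - 1)
            + (1 - p_heads) * value(round(posterior(p, False), 6), samples_left - 1)
        )
        return max(stop, sample)  # the optimal agent samples only when it pays

    print(value(0.5, 10))  # value of acting optimally with a budget of 10 samples
    ```

    Comparing where human subjects stop sampling against such a value function is one standard way to quantify over- or under-sampling biases of the kind reported above.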

  3. EgoSchema Dataset

    • paperswithcode.com
    Updated Nov 21, 2024
    Cite
    (2024). EgoSchema Dataset [Dataset]. https://paperswithcode.com/dataset/egoschema
    Description

    EgoSchema is a very long-form video question-answering dataset and a benchmark for evaluating the long-video understanding capabilities of modern vision and language systems. Derived from Ego4D, EgoSchema consists of over 5,000 human-curated multiple-choice question-answer pairs, spanning over 250 hours of real video data and covering a very broad range of natural human activity and behavior.

  4. Human social learning biases in virtual environments

    • figshare.com
    zip
    Updated Feb 18, 2022
    Cite
    Carrie Easter (2022). Human social learning biases in virtual environments [Dataset]. http://doi.org/10.6084/m9.figshare.19196600.v1
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Carrie Easter
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is from a study on human social learning biases conducted by C. Easter (University of Leeds) as part of her PhD thesis.

    This data was collected using a novel research tool, "Virtual Environments for Research into Social Evolution" (VERSE), which uses gaming technology (Unity3D) to study human social learning behaviour within realistic, open-world environments. VERSE aims to tackle some of the limitations of previous lab-based experiments, which are restricted by the use of abstract tasks, unrealistic social information sources and extremely localised spatial scales.

    In this study, 143 undergraduate students from the University of Leeds were asked to solve a series of novel tasks within a set of virtual environments. Participants were divided into two groups:
    -- "Same Rewards": Rewards are equal in the environment.
    -- "Different Rewards": Rewards vary in the environment. One demonstrator in each demonstrator condition always displays a more profitable option than the alternative demonstrator.

    The tasks were as follows: "Container" task, deposit a token into one of two containers over ten rounds; "Route Choice" task, find the shortest route to a fixed end point; "Foraging" task, navigate a large, open environment to collect food items.

    For each task, participants were subjected to 6 demonstrator conditions:
    -- "Asocial": No demonstrators present; the participant plays alone.
    -- "SocVsAsoc": One demonstrator present; all other options are undemonstrated.
    -- "Dominance": Two demonstrators present, a dominant AI and a subordinate AI, distinguished by physical appearance and behavioural differences. The dominant AI displays one option (in the 'Different Rewards' group, always the more profitable option) while the subordinate AI displays an alternative.
    -- "Frequency": Four demonstrators present; three AIs display one option (in the 'Different Rewards' group, always the more profitable option) while one AI displays an alternative.
    -- "Gender": Two demonstrators present, a male and a female. The male AI displays one option (in the 'Different Rewards' group, always the more profitable option) while the female AI displays an alternative.
    -- "Size": Two demonstrators present, a large AI and a small AI. The large AI displays one option (in the 'Different Rewards' group, always the more profitable option) while the small AI displays an alternative.

    The data is arranged as follows. In the root of the "HumanLearningVERSE" folder:

    Three R code files:
    -- "_Dataset_Generation_Code": Generates the 'Diff_' and 'Same_' datasets in the root folder.
    -- "_GLM_Analysis_Code": Conducts the GLM analyses in the main paper.
    -- "_Graphs_Additional_Analyses_Code": Creates the graphs in the main manuscript and in the supplementary material. Also conducts some additional analyses, e.g. correlations in social information usage.

    Datasets:
    -- A dataset called "ParticipantData", which gives each participant's answers to a series of questions asked after the study. These answers are used as individual variables for each participant during the analysis. These include: gender, age, a series of answers to Bryant and Smith's (2001) questionnaire, how often they play video games, and how easy they found it to follow the instructions given / play the game during the experiment.
    -- A series of datasets beginning with "Same_" and "Diff_". These datasets give the proportion of times each demonstrator (or no demonstrator) was copied by each participant, during each demonstrator condition, for each task. Files are labelled with the task type (Container, Route, Foraging) and the reward group (Different Rewards, Same Rewards) the participant was placed in. Files ending in "ILV" are the main datasets, giving a summary of all the choices made by each participant. Files ending in "InitialChoice" give only the initial choices made by each participant, at the beginning of each demonstrator condition.

    The HumanLearningVERSE folder contains two additional folders, "DiffRewards" and "SameRewards", which contain the raw data collected from VERSE during the experiment. "DiffRewards" contains data for participants in the Different Rewards group and "SameRewards" for the Same Rewards group. In these folders are a series of folders, named with the participant's reference number (these numbers match the data in the ParticipantData csv file). In each participant's folder are the data for each of the three tasks, again placed into their own individual folders. The name of each data file is descriptive and gives details of the replicate in question like so: "Ref_participantReferenceNumber_DataTaskName_NumberOfGameLevel/Replicate_SceneName(IncludingRewardGroupAndDemonstratorCondition)_DataType.csv"

    For the Container task, there are two types of data per participant:
    -- "InteractionsData": All interactions with 'interactable objects', including which character interacted (participant = "player"; demonstrators are labelled by their names, e.g. "AI (Large)"), which object they interacted with, and when it occurred. 'ContainerY' and 'ContainerB' refer to the yellow and blue containers.
    -- "FoodCollectionScore": The final value of the player's food collection score and the potential amount they could have collected.

    For the Foraging task, four data types are collected:
    -- "FoodPatchVisits": Reports which character visited which food patch and when.
    -- "PlayerFoodEatenData": Reports which food items were collected by the player and when, plus the nutritional value of each food item and a cumulative nutrition score.
    -- "FoodCollected": The final value of the player's food collection score and the potential amount they could have collected.
    -- A "PositionData" dataset for each character: The x,y,z coordinates for a particular character at each timestep. The character is stated in the filename.

    For the Route Choice task, two data types were collected:
    -- A "PositionData" dataset for each character: The x,y,z coordinates for a particular character at each timestep. The character is stated in the filename.
    -- "RemainingEnergy": The final energy value of the player at the end of the 'level'/replicate.
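    As an illustrative aid (not part of the deposit), the sketch below walks the folder layout described above and indexes every raw CSV by participant, task, replicate, scene, and data type. The root path and the exact regex fields are assumptions inferred from the documented naming convention, so they may need adjusting against the real files:

    ```python
    # Hypothetical helper: index the VERSE raw data files by parsing the
    # documented filename convention. ROOT and the regex fields are assumptions.
    import re
    from pathlib import Path

    ROOT = Path("HumanLearningVERSE")  # assumed local download location
    PATTERN = re.compile(
        r"Ref_(?P<participant>[^_]+)"   # participant reference number
        r"_Data(?P<task>[^_]+)"         # task name (Container, Route, Foraging)
        r"_(?P<replicate>\d+)"          # game level / replicate number
        r"_(?P<scene>.+)"               # scene name (reward group + demonstrator condition)
        r"_(?P<data_type>[^_]+)\.csv$"  # data type (e.g. InteractionsData, PositionData)
    )

    records = []
    for group in ("DiffRewards", "SameRewards"):
        for csv_path in (ROOT / group).rglob("*.csv"):
            match = PATTERN.match(csv_path.name)
            if match:
                records.append({"group": group, "path": str(csv_path), **match.groupdict()})

    print(f"Indexed {len(records)} raw data files")
    ```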
