72 datasets found
  1. Data from: The influence of active video game play upon physical activity...

    • catalog.data.gov
    • datasets.ai
    • +2 more
    Updated Jun 5, 2025
    + more versions
    Cite
    Agricultural Research Service (2025). Data from: The influence of active video game play upon physical activity and screen-based activities in sedentary children [Dataset]. https://catalog.data.gov/dataset/data-from-the-influence-of-active-video-game-play-upon-physical-activity-and-screen-based--33694
    Explore at:
    Dataset updated
    Jun 5, 2025
    Dataset provided by
    Agricultural Research Service
    Description

    Includes 24-hour recall data that children were instructed to fill out describing the previous day’s activities at baseline, weeks 2 and 4 of the intervention, after the intervention (6 weeks), and after washout (10 weeks). Includes accelerometer data from an ActiGraph used to assess usual physical and sedentary activity at baseline, 6 weeks, and 10 weeks. Includes demographic data such as weight, height, gender, race, ethnicity, and birth year. Includes relative reinforcing value data showing how children rated how much they would want to perform both physical and sedentary activities on a scale of 1-10 at baseline, week 6, and week 10. Includes questionnaire data on exercise self-efficacy using the Children’s Self-Perceptions of Adequacy in and Predilection of Physical Activity Scale (CSAPPA), motivation for physical activity using the Behavioral Regulations in Exercise Questionnaire, 2nd edition (BREQ-2), motivation for active and for sedentary video games using BREQ-2 questions modified to refer to those behaviors rather than physical activity, and physical activity-related parenting behaviors using The Activity Support Scale for Multiple Groups (ACTS-MG).

    Resources in this dataset:

    • 24 Hour Recall Data (24 hour recalldata.xlsx): children's reports of the previous day's activities at baseline, weeks 2 and 4 of the intervention, after the intervention (6 weeks), and after washout (10 weeks).
    • Actigraph activity data (actigraph activity data.xlsx): ActiGraph accelerometer measures of usual physical and sedentary activity at baseline, 6 weeks, and 10 weeks.
    • Liking Data (liking data.xlsx): relative reinforcing value ratings (1-10) of physical and sedentary activities at baseline, week 6, and week 10.
    • Demographics (Demographics (Birthdate-Year).xlsx): weight, height, gender, race, ethnicity, and year of birth.
    • Questionnaires (questionnaires.xlsx): CSAPPA, BREQ-2, the modified BREQ-2 items for active and sedentary video games, and ACTS-MG responses.
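    The resources above are plain Excel workbooks, so they can be pulled into a single analysis session with pandas. A minimal sketch, assuming the files have been downloaded to the working directory (column names are not documented here, so inspect each table before use):

    ```python
    import pandas as pd

    # File names as listed in the resource descriptions above.
    files = {
        "recall": "24 hour recalldata.xlsx",
        "actigraph": "actigraph activity data.xlsx",
        "liking": "liking data.xlsx",
        "demographics": "Demographics (Birthdate-Year).xlsx",
        "questionnaires": "questionnaires.xlsx",
    }

    # Load every workbook and print a quick structural overview.
    tables = {name: pd.read_excel(path) for name, path in files.items()}
    for name, df in tables.items():
        print(f"{name}: {df.shape[0]} rows x {df.shape[1]} cols -> {list(df.columns)[:5]}")
    ```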

  2. Data from: Cardiopulmonary Resuscitation Performance: Video, Demographic and...

    • beta.ukdataservice.ac.uk
    Updated 2024
    Cite
    datacite (2024). Cardiopulmonary Resuscitation Performance: Video, Demographic and Evaluation Data, 2023 [Dataset]. http://doi.org/10.5255/ukda-sn-857038
    Explore at:
    Dataset updated
    2024
    Dataset provided by
    DataCite (https://www.datacite.org/)
    UK Data Service (https://ukdataservice.ac.uk/)
    Description

    This project aimed to establish a video database of cardiopulmonary resuscitation performance that demonstrates a range of expertise. The original data set contains 54 examples of participants who range in expertise and experience with performing CPR. Each example was recorded from 6 angles with a checkerboard in view to allow for 3D reconstruction. Participants were asked to perform 4 sets of 30 chest compressions with a short pause in between to rest. The faces of each participant have been blurred to reduce the likelihood of identification.

    The CPR performances are accompanied by the demographics of the participants and the evaluation data. The evaluation data consists of independent ratings by two expert raters who teach Basic Life Support at a UK university, together with their agreed rating.

    Participants were able to elect for their data to be included in the available database or restricted to the research team only. Consent was given for video data and evaluation data separately. Thus, this data contains video data from 41 participants, and evaluation data from 42 participants.

    This dataset is intended to be used to further understanding of expertise in CPR and facilitate the development of technology that can track movement and evaluate healthcare professional skills.
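    Because every performance was captured from six angles with a checkerboard in view, per-camera calibration is the natural first step toward the 3D reconstruction the authors mention. A minimal OpenCV sketch, assuming a 9×6 inner-corner board with 25 mm squares (the actual board geometry is not stated above):

    ```python
    import cv2
    import numpy as np

    PATTERN = (9, 6)     # inner corners per row/column -- assumption, check the videos
    SQUARE_MM = 25.0     # square edge length in mm -- assumption

    # 3D board points in the board's own coordinate frame.
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

    def calibrate_camera(frames):
        """Estimate intrinsics for one camera angle from frames showing the board."""
        obj_pts, img_pts, size = [], [], None
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
        for frame in frames:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            size = gray.shape[::-1]
            found, corners = cv2.findChessboardCorners(gray, PATTERN)
            if found:
                corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
                obj_pts.append(objp)
                img_pts.append(corners)
        rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
        return rms, K, dist
    ```

    With per-view intrinsics (and relative poses from, e.g., cv2.stereoCalibrate or cv2.solvePnP), corresponding points across the six angles can then be triangulated.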

  3. THVD (Talking Head Video Dataset)

    • data.mendeley.com
    Updated Apr 29, 2025
    + more versions
    Cite
    Mario Peedor (2025). THVD (Talking Head Video Dataset) [Dataset]. http://doi.org/10.17632/ykhw8r7bfx.2
    Explore at:
    Dataset updated
    Apr 29, 2025
    Authors
    Mario Peedor
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0), https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    About

    We provide a comprehensive talking-head video dataset with over 50,000 videos, totaling more than 500 hours of footage and featuring 20,841 unique identities from around the world.

    Distribution

    Detailing the format, size, and structure of the dataset:

    Data Volume:

    • Total Size: 2.7 TB
    • Total Videos: 47,547
    • Identities Covered: 20,841
    • Resolution: 60% 4K (1980), 33% Full HD (1080)
    • Formats: MP4
    • Full-length videos with visible mouth movements in every frame
    • Minimum face size of 400 pixels
    • Video durations range from 20 seconds to 5 minutes
    • Faces have not been cropped out; videos are full-frame and include backgrounds

    Usage

    This dataset is ideal for a variety of applications:

    Face Recognition & Verification: Training and benchmarking facial recognition models.

    Action Recognition: Identifying human activities and behaviors.

    Re-Identification (Re-ID): Tracking identities across different videos and environments.

    Deepfake Detection: Developing methods to detect manipulated videos.

    Generative AI: Training high-resolution video generation models.

    Lip Syncing Applications: Enhancing AI-driven lip-syncing models for dubbing and virtual avatars.

    Background AI Applications: Developing AI models for automated background replacement, segmentation, and enhancement.

    Coverage

    Explaining the scope and coverage of the dataset:

    Geographic Coverage: Worldwide

    Time Range: Time range and size of the videos have been noted in the CSV file.

    Demographics: Includes information about age, gender, ethnicity, format, resolution, and file size.

    Languages Covered (Videos):

    English: 23,038 videos

    Portuguese: 1,346 videos

    Spanish: 677 videos

    Norwegian: 1,266 videos

    Swedish: 1,056 videos

    Korean: 848 videos

    Polish: 1,807 videos

    Indonesian: 1,163 videos

    French: 1,102 videos

    German: 1,276 videos

    Japanese: 1,433 videos

    Dutch: 1,666 videos

    Indian: 1,163 videos

    Czech: 590 videos

    Chinese: 685 videos

    Italian: 975 videos

    Filipino: 920 videos

    Bulgarian: 340 videos

    Romanian: 1,144 videos

    Arabic: 1,691 videos

    Who Can Use It

    List examples of intended users and their use cases:

    Data Scientists: Training machine learning models for video-based AI applications.

    Researchers: Studying human behavior, facial analysis, or video AI advancements.

    Businesses: Developing facial recognition systems, video analytics, or AI-driven media applications.

    Additional Notes

    Ensure ethical usage and compliance with privacy regulations. The dataset’s quality and scale make it valuable for high-performance AI training. Preprocessing (cropping, downsampling) may be needed for some use cases. The dataset is not yet complete and expands daily; please contact us for the most up-to-date CSV file. The dataset has been divided into 100 GB zipped files and is hosted on a private server (with the option to upload to the cloud if needed). To verify the dataset's quality, please contact us for the full CSV file.
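    Since the accompanying CSV file is the index to the footage (language, demographics, resolution, duration, file size), a quick way to carve out a training subset is to filter that metadata first. A minimal sketch; the file name and column names below are assumptions, so check the actual CSV header:

    ```python
    import pandas as pd

    meta = pd.read_csv("thvd_metadata.csv")  # hypothetical file name

    # Example: 4K English-language clips at least 60 seconds long.
    subset = meta[
        (meta["language"] == "English")
        & (meta["resolution"] == "4k")
        & (meta["duration_sec"] >= 60)
    ]
    print(len(subset), "clips selected")
    print(subset.groupby("gender")["identity_id"].nunique())  # unique identities per gender
    ```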

  4. Cardiopulmonary Resuscitation Performance: Video, Demographic and Evaluation...

    • b2find.eudat.eu
    Updated Dec 5, 2024
    Cite
    (2024). Cardiopulmonary Resuscitation Performance: Video, Demographic and Evaluation Data, 2023 - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/d81d38aa-2afb-5507-b8cd-6528f13a345b
    Explore at:
    Dataset updated
    Dec 5, 2024
    Description

    This project aimed to establish a video database of cardiopulmonary resuscitation performance that demonstrates a range of expertise. The original data set contains 54 examples of participants who range in expertise and experience with performing CPR. Each example was recorded from 6 angles with a checkerboard in view to allow for 3D reconstruction. Participants were asked to perform 4 sets of 30 chest compressions with a short pause in between to rest. The faces of each participant have been blurred to reduce the likelihood of identification.

    The CPR performances are accompanied by the demographics of the participant and the evaluation data. The evaluation data consists of evaluation by two expert raters who teach Basic Life Support at a UK university, and their agreed rating.

    Participants were able to elect for their data to be included in the available database or restricted to the research team only. Consent was given for video data and evaluation data separately. Thus, this data contains video data from 41 participants, and evaluation data from 42 participants.

    This dataset is intended to be used to further understanding of expertise in CPR and facilitate the development of technology that can track movement and evaluate healthcare professional skills.

    Participants provided informed consent (see supplied Information Sheet, Consent form, and Debrief). Each person was recorded from 6 angles while performing 4 sets of 30 chest compressions on a manikin. Participants were recruited from the Department of Nursing and Midwifery and were either university staff or students. Demographics of the participants are provided. Two experts rated each set from each participant along an evaluative checklist (supplied). An overall rating for each participant was also provided. The raters initially rated alone and then resolved any discrepancies to provide an agreed rating. A more thorough description of methods has been supplied (Readme file). Participants evaluated their confidence in performing CPR (very confident – very unconfident), and the frequency with which they practised CPR (very frequently – very infrequently) along a 5-point Likert scale.

  5. Data from: Vimeo Dataset

    • brightdata.com
    .json, .csv, .xlsx
    Updated May 29, 2024
    Cite
    Bright Data (2024). Vimeo Dataset [Dataset]. https://brightdata.com/products/datasets/vimeo
    Explore at:
    .json, .csv, .xlsx (available download formats)
    Dataset updated
    May 29, 2024
    Dataset authored and provided by
    Bright Data (https://brightdata.com/)
    License

    https://brightdata.com/license

    Area covered
    Worldwide
    Description

    We'll customize a Vimeo dataset to align with your unique requirements, incorporating data on video titles, creator names, categories, view counts, likes, comments, demographic insights, and other relevant metrics. Leverage our Vimeo datasets for various applications to strengthen strategic planning and market analysis. Examining these datasets enables organizations to understand viewer preferences and streaming trends, facilitating refined content offerings and optimized marketing strategies. Tailor your access to the complete dataset or specific subsets according to your business needs. Popular use cases include optimizing video selections based on viewer insights, refining marketing strategies through targeted viewer segmentation, and identifying and predicting trends to maintain a competitive edge in the video streaming market.

  6. Data from: Quantifying the Size and Geographic Extent of CCTV's Impact on...

    • icpsr.umich.edu
    • datasets.ai
    • +1 more
    Updated Aug 25, 2017
    Cite
    Ratcliffe, Jerry; Groff, Elizabeth (2017). Quantifying the Size and Geographic Extent of CCTV's Impact on Reducing Crime in Philadelphia, Pennsylvania, 2003-2013 [Dataset]. http://doi.org/10.3886/ICPSR35514.v1
    Explore at:
    Dataset updated
    Aug 25, 2017
    Dataset provided by
    Inter-university Consortium for Political and Social Research (https://www.icpsr.umich.edu/web/pages/)
    Authors
    Ratcliffe, Jerry; Groff, Elizabeth
    License

    https://www.icpsr.umich.edu/web/ICPSR/studies/35514/terms

    Time period covered
    Jan 2003 - Dec 2013
    Area covered
    Pennsylvania, Philadelphia
    Description

    These data are part of NACJD's Fast Track Release and are distributed as they were received from the data depositor. The files have been zipped by NACJD for release, but not checked or processed except for the removal of direct identifiers. Users should refer to the accompanying readme file for a brief description of the files available with this collection and consult the investigator(s) if further information is needed. This study was designed to investigate whether the presence of CCTV cameras can reduce crime by studying the cameras and crime statistics of a controlled area. The viewsheds of over 100 CCTV cameras within the city of Philadelphia, Pennsylvania were defined and grouped into 13 clusters, and camera locations were digitally mapped. Crime data from 2003-2013 was collected from areas that were visible to the selected cameras, as well as data from control and displacement areas using an incident reporting database that records the location of crime events. Demographic information was also collected from the mapped areas, such as population density, household information, and data on the specific camera(s) in the area. This study also investigated the perception of CCTV cameras, and interviewed members of the public regarding topics such as what they thought the camera could see, who was watching the camera feed, and if they were concerned about being filmed.

  7. Video game players and their environmental perception and behaviors

    • osf.io
    Updated Apr 1, 2022
    Cite
    Manh-Toan Ho; Minh-Hoang Nguyen; Viet-Phuong La; Thanh-Hang Pham; Quan-Hoang Vuong (2022). Video game players and their environmental perception and behaviors [Dataset]. http://doi.org/10.17605/OSF.IO/P8U9C
    Explore at:
    Dataset updated
    Apr 1, 2022
    Dataset provided by
    Center For Open Science
    Authors
    Manh-Toan Ho; Minh-Hoang Nguyen; Viet-Phuong La; Thanh-Hang Pham; Quan-Hoang Vuong
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Video gaming has been rising rapidly to become one of the primary entertainment media, especially during the COVID-19 pandemic. Playing video games has been reported to be associated with many psychological and behavioral traits. However, little is known about the connections between game players' behaviors in the virtual environment and environmental perceptions. Thus, the current data set offers valuable resources regarding environmental worldviews and behaviors in the virtual world of 640 Animal Crossing: New Horizons (ACNH) game players from 29 countries around the globe. The data set consists of six major categories: 1) socio-demographic profile, 2) COVID-19 concern, 3) environmental perception, 4) game-playing habit, 5) in-game behavior, and 6) game-playing feeling. By making this data set open, we aim to provide policymakers, game producers, and researchers with valuable resources for understanding the interactions between behaviors in the virtual world and environmental perceptions, which could help produce video games in compliance with the United Nations (UN) Sustainable Development Goals.

    See more: https://doi.org/10.1162/dint_a_00111

    Other repository: Quan-Hoang Vuong; Manh-Toan Ho; Viet-Phuong La; Tam-Tri Le; Thanh Huyen T. Nguyen; Minh-Hoang Nguyen. A multinational dataset of game players’ behaviors in a virtual world and environmental perceptions(V1). 2021. Science Data Bank. 2021-10-09. cstr:31253.11.sciencedb.j00104.00098; https://datapid.cn/31253.11.sciencedb.j00104.00098

  8. Data from: A multimodal dataset of spontaneous speech and movement...

    • springernature.figshare.com
    xls
    Updated May 30, 2023
    Cite
    Argiro Vatakis; Katerina Pastra (2023). A multimodal dataset of spontaneous speech and movement production on object affordances [Dataset]. http://doi.org/10.6084/m9.figshare.1463378
    Explore at:
    xls (available download formats)
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Argiro Vatakis; Katerina Pastra
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This is the description of the multimodal dataset of spontaneous speech and movement production on object affordances. All the data can be found in the .rar files for each experiment conducted (refer to paper). This resource contains an excel file (Experimental_Information.xls) with information on: a) the participant’s assigned number and experiment (e.g., PN#_E#, where PN corresponds to the participant number and E to the experiment), which serves as a guide to the corresponding video, audio, and transcription files, b) basic demographic information (e.g., gender, age), and c) the available data files for each participant, details regarding their size (in MB) and duration (in secs), and potential problems with these files. These problems are mostly due to dropped frames in one of the cameras and, in some rare cases, missing files. The excel file is composed of three different sheets that correspond to the three different experiments conducted (refer to Methods section of paper).

    The audiovisual videos (.mp4), audio files (.aac), and transcription files (.trs) are organized by experiment and participant. Each participant file contains the frontal (F) and profile (P) video recordings (e.g., PN1_E1_F, which refers to participant 1, experiment 1, frontal view) and the transcribed file along with the audio file. The videos are also labelled according to the condition, with ‘NH’ when the object is in isolation, ‘H’ when the object is held by an agent, and ‘T’ when the actual, physical object is presented (e.g., PN1_E1_F_H.mp4, which refers to participant 1, experiment 1, frontal view, object held by an agent). These files are compressed in a .rar format.
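    The naming scheme described above (e.g., PN1_E1_F_H.mp4) can be parsed programmatically when indexing the extracted .rar contents. A small sketch; it assumes the audio and transcription files follow the same pattern, which should be verified against the archive:

    ```python
    import re
    from pathlib import Path

    # PN<participant>_E<experiment>[_<view>][_<condition>].<ext>
    # view: F = frontal, P = profile
    # condition: NH = object in isolation, H = object held by an agent, T = physical object
    NAME_RE = re.compile(
        r"PN(?P<participant>\d+)_E(?P<experiment>\d+)"
        r"(?:_(?P<view>[FP]))?(?:_(?P<condition>NH|H|T))?\.(?P<ext>mp4|aac|trs)$"
    )

    def parse_name(path):
        m = NAME_RE.match(Path(path).name)
        return m.groupdict() if m else None

    print(parse_name("PN1_E1_F_H.mp4"))
    # {'participant': '1', 'experiment': '1', 'view': 'F', 'condition': 'H', 'ext': 'mp4'}
    ```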

  9. Ly & Weary survey dataset.

    • plos.figshare.com
    txt
    Updated Jun 11, 2023
    Cite
    Lexis H. Ly; Daniel M. Weary (2023). Ly & Weary survey dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0247808.s004
    Explore at:
    txt (available download formats)
    Dataset updated
    Jun 11, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Lexis H. Ly; Daniel M. Weary
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data describing demographic factors of participants. (CSV)

  10. Survey Data Set Part 1 - Attitudes Towards Videos as a Documentation Option...

    • zenodo.org
    Updated Jul 22, 2024
    + more versions
    Cite
    Oliver Karras (2024). Survey Data Set Part 1 - Attitudes Towards Videos as a Documentation Option for Communication in Requirements Engineering [Dataset]. http://doi.org/10.5281/zenodo.3245770
    Explore at:
    Dataset updated
    Jul 22, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Oliver Karras
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In 2017, we conducted an online survey to explore software professionals' attitudes towards videos as a documentation option for communication in requirements engineering. The survey covered the following topics:

    • Demographics
    • Attitude towards videos as a medium in RE including its strengths, weaknesses, opportunities, and threats
    • Current production and use of videos in RE, as well as the obstacles that prevent their production and use

    64 out of 106 software professionals from industry and academia completed the survey. The survey was implemented in LimeSurvey and distributed across several communication channels such as LinkedIn, ResearchGate, and a mailing list of a German RE professionals group.

    This dataset includes the following files:

    This survey was designed, conducted, and analyzed by Oliver Karras (@KarrasOliver).

  11. Toadstool

    • kaggle.com
    Updated May 7, 2021
    Cite
    Hugo Hammer (2021). Toadstool [Dataset]. https://www.kaggle.com/hugohammer/toadstool/code
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    May 7, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Hugo Hammer
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    About

    We present a dataset called Toadstool that aims to contribute to the field of reinforcement learning, multimodal data fusion, and the possibility of exploring emotionally aware machine learning algorithms. Furthermore, the dataset can also be useful to researchers interested in facial expressions, biometric sensors, sentiment analysis, and game studies. The dataset consists of video, sensor, and demographic data collected from ten participants playing Super Mario Bros. The sensor data is collected through an Empatica E4 wristband, which provides high-quality measurements and is graded as a medical device. In addition to the dataset, we also present a set of baseline experiments which show that sensory input can be used to train fully autonomous agents, which, in this case, play a video game. We think that the presented dataset can be interesting for a manifold of researchers to explore different exciting questions.
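    The exact file layout of the release is not described above, so the snippet below only illustrates one way to line up a wristband signal with video frames; the file name, column layout, sampling rate, and frame rate are all assumptions:

    ```python
    import pandas as pd

    SENSOR_HZ = 4     # assumed sampling rate of the skin-conductance channel
    VIDEO_FPS = 30    # assumed frame rate of the recorded video

    eda = pd.read_csv("participant_0/eda.csv", names=["eda"])  # hypothetical path/layout
    eda.index = pd.timedelta_range(0, periods=len(eda),
                                   freq=pd.Timedelta(seconds=1 / SENSOR_HZ))

    duration_s = len(eda) / SENSOR_HZ
    frame_times = pd.timedelta_range(0, periods=int(duration_s * VIDEO_FPS),
                                     freq=pd.Timedelta(seconds=1 / VIDEO_FPS))

    # Nearest-neighbour alignment of the sensor stream to each video frame.
    eda_per_frame = eda.reindex(frame_times, method="nearest")
    print(eda_per_frame.head())
    ```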

    Terms of use

    The data is released fully open for research and educational purposes. The use of the dataset for purposes such as competitions and commercial purposes needs prior written permission. In all documents and papers that use or refer to the dataset or report experimental results based on Toadstool, a reference to the related article needs to be added: PREPRINT: https://osf.io/4v9mp.

    Ethics approval

    In this study, we used fully anonymized data approved by Privacy Data Protection Authority. Furthermore, we confirm that all experiments were performed in accordance with the relevant guidelines and regulations of the Regional Committee for Medical and Health Research Ethics - South East Norway, and the GDPR.

    Contact

    Email michael (at) simula (dot) no if you have any questions about the dataset and our research activities. We always welcome collaboration and joint research!

  12. Community Embedded Robotics: Vid2Real An Online Video Dataset about...

    • dataverse.tdl.org
    mp4, pdf, png, tsv
    Updated Feb 14, 2024
    Cite
    Yao-Cheng Chan; Sadanand Modak; Elliott Hauser; Joydeep Biswas; Justin Hart (2024). Community Embedded Robotics: Vid2Real An Online Video Dataset about Perceived Social Intelligence in Human Robot Encounters [Dataset]. http://doi.org/10.18738/T8/KAHJIB
    Explore at:
    mp4(12502479), png(3025116), png(3362601), pdf(49688), mp4(12229075), mp4(21185325), mp4(20896794), tsv(143673), pdf(98624) (available download formats)
    Dataset updated
    Feb 14, 2024
    Dataset provided by
    Texas Data Repository
    Authors
    Yao-Cheng Chan; Sadanand Modak; Elliott Hauser; Joydeep Biswas; Justin Hart
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Dataset funded by
    National Science Foundation
    Description

    Introduction

    This dataset was gathered during the Vid2Real online video-based study, which investigates humans’ perception of robots' intelligence in the context of an incidental human-robot encounter. The dataset contains participants' questionnaire responses to four video study conditions, namely Baseline, Verbal, Body language, and Body language + Verbal. The videos depict a scenario where a pedestrian incidentally encounters a quadruped robot trying to enter a building. The robot uses verbal commands or body language to try to ask for help from the pedestrian in different study conditions. The differences in the conditions were manipulated using the robot’s verbal and expressive movement functionalities.

    Dataset Purpose

    The dataset includes the responses of human subjects about the robot's social intelligence, used to validate the hypothesis that robot social intelligence is positively correlated with human compliance in an incidental human-robot encounter context. The video-based dataset was also developed to obtain empirical evidence that can be used to design future real-world HRI studies.

    Dataset Contents

    • Four videos, each corresponding to a study condition.
    • Four sets of Perceived Social Intelligence Scale data; each set corresponds to one study condition.
    • Four sets of compliance likelihood questions; each set includes one Likert question and one free-form question.
    • One set of Godspeed questionnaire data.
    • One set of Anthropomorphism questionnaire data.
    • A csv file containing the participants' demographic data, Likert scale data, and text responses.
    • A data dictionary explaining the meaning of each of the fields in the csv file.

    Study Conditions

    There are 4 videos (i.e. study conditions); the video scenarios are as follows.

    • Baseline: The robot walks up to the entrance and waits for the pedestrian to open the door without any additional behaviors. This is also the "control" condition.
    • Verbal: The robot walks up to the entrance, and says ”Can you please open the door for me” to the pedestrian while facing the same direction, then waits for the pedestrian to open the door.
    • Body Language: The robot walks up to the entrance, turns its head to look at the pedestrian, then turns its head to face the door, and waits for the pedestrian to open the door.
    • Body Language + Verbal: The robot walks up to the entrance, turns its head to look at the pedestrian, and says ”Can you open the door for me” to the pedestrian, then waits for the pedestrian to open the door.

    (Images showing the Verbal and Body Language conditions accompany the dataset.)

    A within-subject design was adopted, and all participants experienced all conditions. The order of the videos, as well as the PSI scales, was randomized. After receiving consent from the participants, they were presented with one video, followed by the PSI questions and the two exploratory questions (compliance likelihood) described above. This set was repeated 4 times, after which the participants answered their general perceptions of the robot with the Godspeed and AMPH questionnaires. Each video was around 20 seconds and the total study time was around 10 minutes.

    Video as a Study Method

    A video-based study in human-robot interaction research is a common method for data collection. Videos can easily be distributed via online participant recruiting platforms, and can reach a larger sample than in-person/lab-based studies. Therefore, it is a fast and easy method for data collection for research aiming to obtain empirical evidence.

    Video Filming

    The videos were filmed with a first-person point-of-view in order to maximize the alignment of video and real-world settings. The device used for the recording was an iPhone 12 Pro, and the videos were shot in 4K at 60 fps. For better accessibility, the videos have been converted to lower resolutions.

    Instruments

    The questionnaires used in the study include the Perceived Social Intelligence Scale (PSI), Godspeed Questionnaire, and Anthropomorphism Questionnaire (AMPH). In addition to these questionnaires, a 5-point Likert question and a free-text response measuring human compliance were added for the purpose of the video-based study. Participant demographic data was also collected. Questionnaire items are attached as part of this dataset.

    Human Subjects

    For the purpose of this project, the participants were recruited through Prolific; they are therefore users of Prolific. Additionally, they were restricted to people who are currently living in the United States, fluent in English, and have no hearing or visual impairments. No other restrictions were imposed. Among the 385 participants, 194 identified as female and 191 as male; ages ranged from 19 to 75 (M = 38.53, SD = 12.86). Human subjects remained anonymous. Participants were compensated with $4 upon submission approval. This study was reviewed and approved by the UT Austin Institutional Review Board.

    Robot

    The dataset contains data about humans’ perceived...
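    The released .tsv of questionnaire responses can be summarised with a few lines of pandas. The column names below are hypothetical, so consult the supplied data dictionary for the real field names:

    ```python
    import pandas as pd

    df = pd.read_csv("vid2real_responses.tsv", sep="\t")  # hypothetical file name

    conditions = ["baseline", "verbal", "body_language", "body_language_verbal"]

    # Mean Perceived Social Intelligence score and compliance-likelihood rating per
    # condition (within-subject design: every participant saw all four videos).
    for cond in conditions:
        psi = df[f"psi_{cond}"].mean()
        comply = df[f"comply_{cond}"].mean()
        print(f"{cond:>22}: PSI={psi:.2f}  compliance={comply:.2f}")
    ```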

  13. Nottinghamer Korpus Deutscher YouTube-Sprache (The NottDeuYTSch Corpus)...

    • b2find.eudat.eu
    Updated Jul 27, 2022
    Cite
    (2022). Nottinghamer Korpus Deutscher YouTube-Sprache (The NottDeuYTSch Corpus) (2022-07-27) - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/01aa923f-1901-5f6d-b327-30621ad28dc5
    Explore at:
    Dataset updated
    Jul 27, 2022
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Area covered
    YouTube
    Description

    The NottDeuYTSch corpus contains over 33 million words taken from approximately 3 million YouTube comments from videos published between 2008 and 2018, targeted at a young, German-speaking demographic, and represents an authentic language snapshot of young German speakers. The corpus was proportionally sampled based on video category and year from a database of 112 popular German-speaking YouTube channels in the DACH region for optimal representativeness and balance, and contains a considerable amount of associated metadata for each comment that enables further longitudinal and cross-sectional analyses.

  14. Dataset for Vertical Jump Height Estimation from Depth Camera and Wearable...

    • researchdata.tuwien.ac.at
    zip
    Updated Feb 6, 2025
    Cite
    Florian Wolling; Florian Wolling; Christoff Kügler; Christoff Kügler; Patrick Trollmann; Patrick Trollmann (2025). Dataset for Vertical Jump Height Estimation from Depth Camera and Wearable Accelerometer Motion Data [Dataset]. http://doi.org/10.48436/c0584-yqb91
    Explore at:
    zip (available download formats)
    Dataset updated
    Feb 6, 2025
    Dataset provided by
    TU Wien
    Authors
    Florian Wolling; Florian Wolling; Christoff Kügler; Christoff Kügler; Patrick Trollmann; Patrick Trollmann
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset for Vertical Jump Height Estimation from Depth Camera and Wearable Accelerometer Motion Data

    While the training of vertical jumps offers benefits for agility and performance across various amateur sports, the objective measurement of jump height remains a challenge compared to simpler assessments like the broad jump distance in a sand pit. Aiming at the estimation of the vertical jump height with easy-to-use and cost-efficient devices, we recorded a comprehensive dataset with an off-the-shelf depth camera and cost-efficient wearable motion sensors, equipped with an onboard three-axis accelerometer sensor. In our publication (https://doi.org/10.1145/3701571.3701607), we assessed the accuracy achievable at diverse fiducial positions, which are 7 skeletal joints from the depth camera and 10 wearing positions of the sensing devices. The user study was conducted with 44 subjects (33 male, 11 female, 23.1 ± 2.2 years) performing five countermovement jumps each. In order to gather ground truth information, a conventional digital camera was used to document the jumps and the vertical hip displacement along a measuring tape. Thales’ theorem on proportionality was then applied to rectify the perspective displacement of the manual readings from the video footage.

    Context and Methodology

    • Dataset for research on the estimation of vertical jump height and in adjacent fields such as human activity recognition etc.
    • The dataset provides recordings from two easy-to-use and cost-efficient sensing modalities, the off-the-shelf depth camera Microsoft Azure Kinect and 10 wearable 3-axis accelerometers
    • The ground truth information is manually determined and rectified
    • The dataset was used in a publication (https://doi.org/10.1145/3701571.3701607) showing that the most accurate estimates from the depth camera were obtained from the pelvis and thoracic spine joints (error of -15.8 ± 23.3 mm and 24.2 ± 35.1 mm), while the best estimates from the wearable motion sensor data were obtained from the neck position and the ankles (error of 18.8 ± 29.0 mm and -4.8 ± 35.2 mm)
    • We encourage researchers to improve on our findings, e.g., by applying advanced machine learning techniques on the provided dataset

    Technical Details

    • A total of 220 recordings of countermovement jumps: 44 subjects, 5 jumps, two sensing modalities, manually determined and rectified ground truth
    • 44 subjects: 33 male and 11 female with an average age of 23.1 ± 2.2 years
    • Subjects gave written consent to provide the measurements for research purposes and publication (video recordings were deleted after ground truth determination)
    • Depth camera: Microsoft Azure Kinect, 3d-coordinates (x, y, and z) for all 32 joint positions recorded along with the accompanied timestamp, frame rate of 30 Hz
    • Wearable motion sensors: 10 wearing positions (lower neck, chest (sternum), hips (left and right), thighs (left and right), ankles (left and right), and wrists (left and right)), 3-axis accelerometer sensor data (x, y, and z) along with a timestamp for each sample, sampling rate of 100 Hz
    • Two folders 'depthcamera' and 'wearables', each containing 44 subfolders labeled with the subject ids '01' to '44'
    • Every subject's folder again contains 5 subfolders labeled with the jump ids 'j1' to 'j5' that contain the files of the countermovement jump recordings
    • The recordings are provided in both Python pickle files *.p as well as comma-separated value *.csv (';' as separator) files with the file names composed of subject id and jump id for *.p files, e.g., 's05_j3.p', as well as the joint or wearing position for the *.csv files, e.g., 's05_j3_hip_left.csv'
    • The pickle files have been tested successfully with Python 3.13.1 and NumPy 2.2.2
    • Unfortunately, for subject 02, the accelerometer data of neck and chest were not successfully recorded and, hence, the associated lists in the *.p and *.csv files are empty
    • Demographic information: a summary of the individual subjects' demographic information is available in the files 'subjects.p' or 'subjects.csv', providing the gender as female or male, age in years, height in cm, weight in kg, if they were conveyed by the lecture, if they were a student and, if so, of which degree
    • Ground truth: the manually determined and rectified ground truth information is provided in the files 'groundtruth.p' or 'groundtruth.csv', associated with the individual subject ids, providing jump number, jump height, and whether the jump is considered an overall well-executed jump
    • The dataset description at hand is also provided in the 'README.txt' file of the dataset's *.zip file
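    As a starting point for working with the per-jump recordings, the sketch below loads one wearable CSV (';' separator, as noted above) and applies the generic flight-time model h = g·t²/8. This is only an illustrative baseline, not the estimation method from the publication, and the column names and units are assumptions:

    ```python
    import numpy as np
    import pandas as pd

    G = 9.81  # m/s^2

    # Assumed layout: columns 'timestamp' (seconds) and 'x', 'y', 'z' (m/s^2);
    # check the README and the CSV headers for the actual names and units.
    df = pd.read_csv("s05_j3_hip_left.csv", sep=";")
    t = df["timestamp"].to_numpy()
    mag = np.linalg.norm(df[["x", "y", "z"]].to_numpy(), axis=1)

    # During free flight the measured (proper) acceleration drops towards zero,
    # so flag samples well below 1 g as airborne -- a deliberately crude detector.
    airborne = np.flatnonzero(mag < 0.3 * G)
    flight_time = t[airborne[-1]] - t[airborne[0]]

    height_m = G * flight_time ** 2 / 8.0  # flight-time model: h = g * t_f^2 / 8
    print(f"estimated jump height: {height_m * 100:.1f} cm")
    ```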
  15. Data from: Longitudinal Analysis of Historical Demographic Data

    • search.gesis.org
    • openicpsr.org
    • +1 more
    Updated May 1, 2021
    Cite
    GESIS search (2021). Longitudinal Analysis of Historical Demographic Data [Dataset]. http://doi.org/10.3886/E34554V1
    Explore at:
    Dataset updated
    May 1, 2021
    Dataset provided by
    GESIS search
    ICPSR - Interuniversity Consortium for Political and Social Research
    License

    https://search.gesis.org/research_data/datasearch-httpwww-da-ra-deoaip--oaioai-da-ra-de452467

    Description

    Abstract (en): This study contains teaching materials developed over a period of years for a four-week workshop, Longitudinal Analysis of Historical Demographic Data (LAHDD), offered through the ICPSR Summer Program in 2006, 2007, 2009, 2011 and 2013, with one-day alumni workshops in 2010, 2012, and 2014. Instructors in the workshops are listed below. Funding was provided by The Eunice Kennedy Shriver National Institute of Child Health and Human Development, grants R25-HD040525 and R25-HD-049479, the ICPSR Summer Program, and the ICPSR Director. The course was designed to teach students the theories, methods, and practices of historical demography and to give them first-hand experience working with historical data. This training is valuable not only to those interested in the analysis of historical data: the techniques of historical demography rest on methodological insights that can be applied to many problems in population studies and other social sciences. While historical demography remains a flourishing research area with publications in key journals like Demography, Population Studies, and Population, practitioners were dispersed, and training was not available at any of the population research centers in the U.S. or elsewhere. One hundred and ten participants from around the globe took part in the workshops, and have gone on to establish courses of their own or teach in other workshops. We offer these materials here in the hope that others will find them useful in developing courses on historical demography and/or longitudinal data analysis.

    The workshop was organized in three tracks: a brief tour of historical demography, event-history analysis, and data management for longitudinal data using Stata and Microsoft Access. The data management track includes 13 exercises designed for hands-on learning and reinforcement. Included in this project are the syllabi and reading lists for the three tracks, datasets used in the exercises, documents setting out each exercise, a file with the expected results, and, for many of the exercises, an explanation. Video tutorials helpful with the Access exercises are accessible from ICPSR's YouTube channel: https://www.youtube.com/playlist?list=PLqC9lrhW1Vvb9M1QpQH23z9UlPYxHbUMF. Users are encouraged to use these materials to develop their own courses and workshops in any of the topics covered. Please acknowledge NICHD R25-HD040525 and R25-HD-049479 whenever appropriate.

    Historical demography instructors: Myron P. Gutmann, University of Colorado Boulder; Cameron Campbell, Hong Kong University of Science and Technology; J. David Hacker, University of Minnesota; Satomi Kurosu, Reitaku University; Katherine A. Lynch, Carnegie Mellon University.

    Event history instructors: Cameron Campbell, Hong Kong University of Science and Technology; Glenn Deane, State University of New York at Albany; Ken R. Smith, Huntsman Cancer Institute and University of Utah.

    Database management instructors: George Alter, University of Michigan; Susan Hautaniemi Leonard, University of Michigan.

    Teaching Assistants: Mathew Creighton, University of Massachusetts Boston; Emily Merchant, University of Michigan; Luciana Quaranta, Lund University; Kristine Witkowski, University of Michigan.

    Project Manager: Susan Hautaniemi Leonard, University of Michigan.

    Funding institution(s): United States Department of Health and Human Services. National Institutes of Health. Eunice Kennedy Shriver National Institute of Child Health and Human Development (R25 HD040525).

  16. Most used devices for digital videos in Finland 2023

    • statista.com
    • ai-chatbox.pro
    Updated Jul 9, 2025
    + more versions
    Cite
    Statista (2025). Most used devices for digital videos in Finland 2023 [Dataset]. https://www.statista.com/forecasts/1188063/most-used-devices-for-digital-videos-in-finland
    Explore at:
    Dataset updated
    Jul 9, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Jan 2023 - Dec 2023
    Area covered
    Finland
    Description

    ** percent of Finnish respondents answer our survey on "Most used devices for digital videos" with "Smartphone". The survey was conducted in 2023, among ***** consumers. Find this and more survey data on most used devices for digital videos in our Consumer Insights tool. Filter by countless demographics, drill down to your own, hand-tailored target audience, and compare results across countries worldwide.

  17. Demographic information on children in the collected home videos.

    • plos.figshare.com
    xls
    Updated May 31, 2023
    Cite
    Qandeel Tariq; Jena Daniels; Jessey Nicole Schwartz; Peter Washington; Haik Kalantarian; Dennis Paul Wall (2023). Demographic information on children in the collected home videos. [Dataset]. http://doi.org/10.1371/journal.pmed.1002705.t002
    Explore at:
    xls (available download formats)
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS Medicine
    Authors
    Qandeel Tariq; Jena Daniels; Jessey Nicole Schwartz; Peter Washington; Haik Kalantarian; Dennis Paul Wall
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We collected N = 193 (119 ASD, 74 non-ASD) home videos for analysis. We excluded 31 videos because of inadequate labeling or video quality. We used a randomly chosen 25 autism and 25 non-autism videos to empirically define an optimal number of raters. Video feature tagging for machine learning was then done on 162 home videos.

  18. Media Analysis (MA 86) - Dataset - B2FIND

    • b2find.eudat.eu
    Updated Oct 19, 2023
    Cite
    (2023). Media Analysis (MA 86) - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/898fff32-b0c9-5647-afa5-2910092b1c2a
    Explore at:
    Dataset updated
    Oct 19, 2023
    Description

    Media usage of the population in 1985.

    Topics: size of circle of friends and acquaintances; frequency of conducting selected leisure activities; detailed determination of degree of familiarity and frequency of use of newspapers, magazines and inserts with radio and television schedule; detailed determination of use of radio and television regarding point in time, weekday and broadcaster; frequency and time interval of last trip to the movies; knowledge and use of reading circles; subscribing to a reading circle; place of reading publications from a reading circle; possession of a telephone; number of telephones and main extensions in the household; possession of bicycle; number of cars available to the household; number of cars with car radio; car radios with cassette player and radio traffic service decoder; number of television sets as well as type and features of the equipment; type of antenna connection; presence of a cable connection in the house or apartment; possibility to receive video and BTX; presence of personal computer, BTX keyboard, video games and video disc players; possession or planned acquisition of one or several video recorders; possession of a video camera; number of empty and recorded video cassettes in the household; number of video cassettes recorded oneself or purchased pre-recorded; place and frequency of renting video cassettes; video system; possession of stereo or hi-fi equipment and number of individual devices; number of selected devices of entertainment electronics in the household; possession of durable economic goods; having a yard; pets; public transportation close to home; residential status and number of renters in the building; building age; length of residence in the building; performing do-it-yourself and repair activities; attendance at sporting events; party preference; shopping habits in purchase of food and beverages; preferred type of business.

    Demography: age; sex; marital status; year of marriage and number of years of marriage; religious denomination; school education; vocational training; occupational position; employment; monthly net income; monthly net household income; income recipients in household; size of household; respondent is person managing household; respondent is head of household; characteristics of head of household; characteristics of person managing household; detailed demographic information on children in household; possession of driver's license.

    Interviewer rating: interest in survey topic and willingness of respondent to cooperate; length of interview; weekday of interview.

  19. Google's Diversity Annual Report Data

    • console.cloud.google.com
    Updated Mar 30, 2023
    Cite
    BigQuery Public Datasets Program (2023). Google's Diversity Annual Report Data [Dataset]. https://console.cloud.google.com/marketplace/product/bigquery-public-datasets/google-diversity-annual-report
    Explore at:
    Dataset updated
    Mar 30, 2023
    Dataset provided by
    Google (http://google.com/)
    BigQuery (https://cloud.google.com/bigquery)
    Description

    This dataset contains current and historical demographic data on Google's workforce since the company began publishing diversity data in 2014. It includes data collected for government reporting and voluntary employee self-identification globally, relating to hiring, retention, and representation categorized by race, gender, sexual orientation, gender identity, disability status, and military status. In some instances, the data is limited due to various government policies around the world and the desire to protect Googler confidentiality. All data in this dataset will be updated yearly upon publication of Google’s Diversity Annual Report. Google uses this data to inform its diversity, equity, and inclusion work; more information on the methodology can be found in the Diversity Annual Report. This public dataset is hosted in Google BigQuery and is included in BigQuery's 1 TB/mo of free tier processing: each user receives 1 TB of free BigQuery processing every month, which can be used to run queries on this public dataset.
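    A minimal sketch of querying the hosted tables with the google-cloud-bigquery client; the table id below is a placeholder, so look up the actual table names for this dataset in the BigQuery console:

    ```python
    from google.cloud import bigquery

    client = bigquery.Client()  # needs a GCP project; queries count against the free tier

    query = """
        SELECT *
        FROM `bigquery-public-data.google_dei.dar_intersectional_representation`  -- placeholder table id
        LIMIT 100
    """
    df = client.query(query).to_dataframe()
    print(df.head())
    ```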

  20. Postnatal Affective MRI Dataset

    • openneuro.org
    Updated Sep 12, 2020
    Cite
    PhD Heidemarie Laurent; Megan K. Finnegan; Katherine Haigler (2020). Postnatal Affective MRI Dataset [Dataset]. http://doi.org/10.18112/openneuro.ds003136.v1.0.0
    Explore at:
    Dataset updated
    Sep 12, 2020
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    PhD Heidemarie Laurent; Megan K. Finnegan; Katherine Haigler
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Postnatal Affective MRI Dataset

    Authors Heidemarie Laurent, Megan K. Finnegan, and Katherine Haigler

    The Postnatal Affective MRI Dataset (PAMD) includes MRI and psych data from 25 mothers at three months postnatal, with additional psych data collected at three additional timepoints (six, twelve, and eighteen months postnatal). Mother-infant dyad psychosocial tasks and cortisol samples were also collected at all four timepoints, but this data is not included in this dataset. In-scanner tasks involved viewing own- and other-infant affective videos and viewing and labeling adult affective faces. This repository includes de-identified MRI, in-scanner task, demographic, and psych data from this study.

    Citation Laurent, H., Finnegan, M. K., & Haigler, K. (2020). Postnatal Affective MRI Dataset. OpenNeuro. Retrieved from OpenNeuro.org.

    Acknowledgments Saumya Agrawal was instrumental in getting the PAMD dataset into a BIDS-compliant structure.

    Funding This work was supported by the Society for Research in Child Development Victoria Levin Award "Early Calibration of Stress Systems: Defining Family Influences and Health Outcomes" to Heidemarie Laurent and by the University of Oregon College of Arts and Sciences

    Contact For questions about this dataset or to request access to alcohol- and tobacco-related psych data, please contact Dr. Heidemarie Laurent, hlaurent@illinois.edu.

    References Laurent, H. K., Wright, D., & Finnegan, M. K. (2018). Mindfulness-related differences in neural response to own-infant negative versus positive emotion contexts. Developmental Cognitive Neuroscience 30: 70-76. https://doi.org/10.1016/j.dcn.2018.01.002.

    Finnegan, M. K., Kane, S., Heller, W., & Laurent, H. (2020). Mothers' neural response to valenced infant interactions predicts postnatal depression and anxiety. PLoS One (under review).

    MRI Acquisition The PAMD dataset was acquired in 2015 at the University of Oregon Robert and Beverly Lewis Center for Neuroimaging with a 3T Siemens Allegra 3 magnet. A standard 32-channel phased-array birdcage coil was used to acquire data from the whole brain. Sessions began with a shimming routine to optimize signal-to-noise ratio, followed by a fast localizer scan (FISP) and Siemens Autoalign routine, a field map, then the 4 functional runs and anatomical scan.

    Anatomical: T1-weighted 3D MPRAGE sequence, TI=1100 ms, TR=2500 ms, TE=3.41 ms, flip angle=7°, 176 sagittal slices, 1.0 mm thick, 256×176 matrix, FOV=256 mm.

    Fieldmap: gradient echo sequence TR=.4ms, TE=.00738 ms, deltaTE=2.46 ms, 4mm thick, 64x64x32x2 matrix.

    Task: T2-weighted gradient echo sequence, TR=2000 ms, TE=30 ms, flip angle=90°, 32 contiguous slices acquired ascending and interleaved, 4 mm thick, 64×64 voxel matrix, 226 vols per run.

    Participants Mothers (n=25) of 3-month-old infants were recruited from the Women, Infants, and Children program and other community agencies serving low-income women in a midsize Pacific Northwest city. Mothers' ages ranged from 19 to 33 (M=26.4, SD=3.8). Most mothers were Caucasian (72%, 12% Latina, 8% Asian American, 8% other) and married or living with a romantic partner (88%). Although most reported some education past high school (84%), only 24% had completed college or received a graduate degree, and their median household income was between $20,000 and $29,999. For more than half of the mothers (56%), this was their first child (36% second child, 8% third child). Most infants were born on time (4% before 37 weeks and 8% after 41 weeks of pregnancy), and none had serious health problems. A vaginal delivery was reported by 56% of mothers, with 88% breastfeeding and 67% bed-sharing with their infant at the time of assessment. Over half of the mothers (52%) reported having engaged in some form of contemplative practice (mostly yoga and only 8% indicated some form of meditation), and 31% reported currently engaging in that practice. All women gave informed consent prior to participation, and all study procedures were approved by the University of Oregon Institutional Review Board. Due to a task malfunction, participant 178's scanning session was split over two days, with the anatomical acquired in ses-01, and the field maps and tasks acquired in ses-02.

    Study overview Mothers visited the lab to complete assessments at four postnatal timepoints: the first session occurred when mothers were approximately three months postnatal (T1), the second at approximately six months (T2), the third at approximately twelve months (T3), and the fourth and last at approximately eighteen months (T4). MRI scans were acquired shortly after the first session (T1).

    Assessment data Assessments collected during sessions include demographic, relationship, attachment, mental health, and infant-related questionnaires. For a full list of included measures and the timepoints at which they were acquired, please refer to PAMD_codebook.tsv in the phenotype folder. Data have been made available in the phenotype folder as 'PAMD_T1_psychdata', 'PAMD_T2_psychdata', 'PAMD_T3_psychdata', and 'PAMD_T4_psychdata'. To protect participants' privacy, all identifiers and questions relating to drugs or alcohol have been removed. If you would like access to drug- and alcohol-related questions, please contact the principal investigator, Dr. Heidemarie Laurent, to request access. Assessment data will be uploaded shortly.
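
    For working with the phenotype tables named above, the sketch below reads the four timepoint files and stacks them into one long-format table. It assumes BIDS-style tab-separated files keyed by a participant_id column; check PAMD_codebook.tsv and the actual file headers, since the extension and column names here are assumptions.

        from pathlib import Path
        import pandas as pd

        pheno_dir = Path("phenotype")  # relative to the BIDS dataset root

        # Timepoint files named in the text; assumed to be tab-separated and
        # keyed by a BIDS-style participant_id column.
        frames = []
        for tp in ["T1", "T2", "T3", "T4"]:
            df = pd.read_csv(pheno_dir / f"PAMD_{tp}_psychdata.tsv", sep="\t")
            df["timepoint"] = tp
            frames.append(df)

        # Long format: one row per participant per timepoint.
        psych = pd.concat(frames, ignore_index=True)
        print(psych.groupby("timepoint")["participant_id"].nunique())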

    Post-scan ratings After the scan session, mothers watched all of the infant videos and rated the infant's and their own emotional valence and intensity for each video. For valence, mothers were asked "In this video clip, how positive or negative is your baby's emotion?" and "While watching this video clip, how positive or negative is your emotion?" on a scale of -100 (negative) to +100 (positive). For emotional intensity, mothers were asked "In this video clip, how intense is your baby's emotion?" and "While watching this video clip, how intense is your emotion?" on a scale of 0 (no intensity) to 100 (maximum intensity). Post-scan ratings are available in the phenotype folder as "PAMD_Post-ScanRatings."
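
    Because the post-scan ratings use two different scales (valence from -100 to +100, intensity from 0 to 100), a quick range check on import can catch scale mix-ups. The column names below are hypothetical; consult the actual header of PAMD_Post-ScanRatings.

        import pandas as pd

        ratings = pd.read_csv("phenotype/PAMD_Post-ScanRatings.tsv", sep="\t")

        # Hypothetical column names for the four rating questions.
        valence_cols = ["infant_valence", "own_valence"]        # expected range -100..100
        intensity_cols = ["infant_intensity", "own_intensity"]  # expected range 0..100

        for col in valence_cols:
            assert ratings[col].between(-100, 100).all(), f"{col} out of range"
        for col in intensity_cols:
            assert ratings[col].between(0, 100).all(), f"{col} out of range"

        # Mean rating per video condition (grouping column also hypothetical).
        print(ratings.groupby("condition")[valence_cols + intensity_cols].mean())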

    MRI Tasks

    Neural Reactivity to Own- and Other-Infant Affect

    File Name: task-infant 
    

    Approximately three months postnatal, a graduate research assistant visited mothers’ homes to conduct a structured clinical interview and video-record the mother interacting with her infant during a peekaboo and arm-restraint task, designed to elicit positive and negative emotions, respectively. The mother and infant were face-to-face for both tasks. For the peekaboo task, the mother covered her face with her hands and said "baby," then opened her hands and said "peekaboo" (Montague and Walker-Andrews, 2001). This continued for three minutes, or until the infant showed expressions of joy. For the arm-restraint task, the mother changed her baby's diaper and then held the infant's arms to their side for up to two minutes (Moscardino and Axia, 2006). The mother was told to keep her face neutral and not talk to her infant during this task. This procedure was repeated with a mother-infant dyad that was not included in the rest of the study to generate other-infant videos. Videos were edited to 15-second clips that showed maximum positive and negative affect. Presentation® software (Version 14.7, Neurobehavioral Systems, Inc., Berkeley, CA, www.neurobs.com) was used to present positive and negative own- and other-infant clips and rest blocks in counterbalanced order during two 7.5-minute runs. Participants were instructed to watch the videos and respond as they normally would, without additional task demands. To protect participants' and their infants' privacy, infant videos will not be made publicly available. However, the mothers' post-scan ratings of their infant's, the other infant's, and their own emotional valence and intensity can be found in the phenotype folder as "PAMD_Post-ScanRatings."
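
    For analysis, each run's block structure (15-second own- and other-infant clips in positive and negative conditions, alternating with rest) would typically be encoded in a BIDS events.tsv file. The block order and rest duration in this sketch are illustrative placeholders, not the actual counterbalanced sequences used in the study.

        import pandas as pd

        CLIP_SECONDS = 15.0  # video clip length stated above
        REST_SECONDS = 15.0  # assumed rest-block length (not specified in the text)

        # Illustrative order; real runs were counterbalanced across participants.
        blocks = ["own_positive", "rest", "other_negative", "rest",
                  "own_negative", "rest", "other_positive", "rest"]

        rows, onset = [], 0.0
        for condition in blocks:
            duration = REST_SECONDS if condition == "rest" else CLIP_SECONDS
            rows.append({"onset": onset, "duration": duration, "trial_type": condition})
            onset += duration

        events = pd.DataFrame(rows, columns=["onset", "duration", "trial_type"])
        # Hypothetical output path following BIDS naming conventions.
        events.to_csv("sub-001_task-infant_run-01_events.tsv", sep="\t", index=False)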

    Observing and Labeling Affective Faces

    File Name: task-affect 
    

    Face stimuli were selected from a standardized set of images (Tottenham, Borscheid, Ellersten, Markus, & Nelson, 2002). Presentation software (version 14.7, Neurobehavioral Systems, Inc., Berkeley, CA, www.neurobs.com) was used to show participants race-matched adult target faces displaying emotional expressions (positive: three happy faces; negative: one fearful, one sad, one angry; two faces from each category were open-mouthed and one close-mouthed); participants were instructed to "observe" or to choose the correct affect label for the target image. In the observe task, subjects viewed an emotionally evocative face without making a response. During the affect-labeling task, subjects chose the correct affect label (e.g., "scared," "angry," "happy," "surprised") from a pair of words shown at the bottom of the screen (Lieberman et al., 2007). Each block was preceded by a 3-second instruction screen cueing participants for the current task ("observe" or "affect labeling") and consisted of five affective faces presented for 5 seconds each, with a 1- to 3-second jittered fixation cross between stimuli. Each run consisted of twelve blocks (six observe; six label), counterbalanced within the run, with a semi-random order of trials within blocks (no more than four in a row of positive or negative faces and, in the affect-labeling task, no more than four in a row of the correct label on the right or left side).
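
    The trial-ordering constraint described above (no more than four consecutive positive or negative faces) can be implemented as a simple rejection-sampling shuffle. This is a generic sketch of that constraint, not the study's actual randomization code.

        import random

        def constrained_order(trials, max_run=4, seed=None):
            """Shuffle trials until no value repeats more than max_run times in a row."""
            rng = random.Random(seed)
            trials = list(trials)
            while True:
                rng.shuffle(trials)
                longest, current = 1, 1
                for prev, cur in zip(trials, trials[1:]):
                    current = current + 1 if cur == prev else 1
                    longest = max(longest, current)
                if longest <= max_run:
                    return trials

        # Example: a run's worth of positive and negative face trials.
        order = constrained_order(["positive"] * 30 + ["negative"] * 30, max_run=4, seed=1)
        print(order[:10])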

    .nii to BIDS

    The raw DICOMs were anonymized and converted to BIDS format using the following procedure (for more details, see https://github.com/Haigler/PAMD_BIDS/).

    1. Deidentifying DICOMs: Batch anonymization of the DICOMs using DicomBrowser (https://nrg.wustl.edu/software/dicom-browser/)

    2. Conversion to .nii and BIDS structure: Anonymized DICOMs were converted to NIfTI (.nii) files and organized into the BIDS directory structure (see the linked repository for the conversion scripts).
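
    Step 2 is typically performed with a DICOM-to-NIfTI converter; the sketch below uses dcm2niix with a BIDS-style output layout as an example. The actual tool, paths, and naming scheme used for PAMD are documented in the linked GitHub repository, so treat everything here as an assumption.

        import subprocess
        from pathlib import Path

        dicom_dir = Path("anonymized_dicoms/sub-001/task_infant_run1")  # hypothetical input
        out_dir = Path("bids/sub-001/func")
        out_dir.mkdir(parents=True, exist_ok=True)

        # dcm2niix: -b y writes a BIDS JSON sidecar, -z y gzips the NIfTI output,
        # -f sets the output filename pattern, -o sets the output directory.
        subprocess.run(
            [
                "dcm2niix",
                "-b", "y",
                "-z", "y",
                "-f", "sub-001_task-infant_run-01_bold",
                "-o", str(out_dir),
                str(dicom_dir),
            ],
            check=True,
        )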
