12 datasets found
  1. A subset of CMU motion capture database

    • figshare.com
    bin
    Updated Jun 5, 2023
    Cite
    Qinkun Xiao (2023). A subset of CMU motion capture database [Dataset]. http://doi.org/10.6084/m9.figshare.3773109.v2
    Available download formats: bin
    Dataset updated
    Jun 5, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Qinkun Xiao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is an index of a subset of the CMU motion capture database. The corresponding ‘*.bvh’ files can be downloaded from http://mocap.cs.cmu.edu/. Please cite this paper: Qinkun Xiao, Junfang Li, Qinhan Xiao. “Human Motion Capture Data Retrieval Based on Quaternion and EMD”. International Conference on Intelligent Human-Machine Systems and Cybernetics, vol. 1, 2013, pp. 517–520.

  2. CMU MoCap Dataset as used in BeatGAN

    • kaggle.com
    zip
    Updated Jan 3, 2022
    Cite
    MaximDolg (2022). CMU MoCap Dataset as used in BeatGAN [Dataset]. https://www.kaggle.com/datasets/maximdolg/cmu-mocap-dataset-as-used-in-beatgan
    Available download formats: zip (271,080 bytes)
    Dataset updated
    Jan 3, 2022
    Authors
    MaximDolg
    Description

    This is a raw CSV version of the CMU MoCap dataset subset used in [Zhou et al., 2019]. No windowing, striding, or normalisation has been applied to the data. For more information on the structure of the data, please see the BeatGAN repository.

    Structure

    All of the data was concatenated into a single CSV file, data.csv. The accompanying labels.csv provides the label for each data sample.

    Labels

    Zhou et al. use three classes for their dataset. The first class, walking (labelled 0), is considered the normal class; jogging (labelled 1) and jumping (labelled 2) are considered abnormal classes.
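    A minimal loading sketch in Python (pandas assumed; header layout and column positions are assumptions inferred from the description above):

    import pandas as pd

    # data.csv holds the concatenated raw samples; labels.csv holds one label
    # per sample (0 = walking/normal, 1 = jogging, 2 = jumping).
    data = pd.read_csv("data.csv")
    labels = pd.read_csv("labels.csv").iloc[:, 0]

    print(data.shape, labels.shape)
    print(labels.value_counts())        # class balance of normal vs. abnormal
    walking_only = data[labels == 0]    # keep only the normal class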

    License Notes from the original dataset authors:

    Original authors: This data is free for use in research projects. You may include this data in commercially-sold products, but you may not resell this data directly, even in converted form. If you publish results obtained using this data, we would appreciate it if you would send the citation to your published paper to jkh+mocap@cs.cmu.edu, and also would add this text to your acknowledgments section: The data used in this project was obtained from mocap.cs.cmu.edu. The database was created with funding from NSF EIA-0196217.

  3. Extended CMU MoCap dataset for BeatGAN

    • kaggle.com
    zip
    Updated Jan 12, 2022
    Cite
    MaximDolg (2022). Extended CMU MoCap dataset for BeatGAN [Dataset]. https://www.kaggle.com/maximdolg/extended-cmu-mocap-dataset-for-beatgan
    Available download formats: zip (488,903 bytes)
    Dataset updated
    Jan 12, 2022
    Authors
    MaximDolg
    Description

    This is a raw CSV version of the CMU MoCap dataset subset used in [Zhou et al., 2019]. No windowing, striding, or normalisation has been applied to the data. For more information on the structure of the data, please see the BeatGAN repository. The dataset is also extended with a new class, dancing. For the unextended dataset, please see here. The additional data can be found here. For a list of the files actually processed, please see processed_files.txt.

    Structure

    All of the data was concatenated into a single CSV file, data.csv. The accompanying labels.csv provides the label for each data sample.

    Labels

    Zhou et al. use three classes for their dataset; this extension adds a fourth. The first class, walking (labelled 0), is considered the normal class; jogging (labelled 1), jumping (labelled 2), and dancing (labelled 3) are considered abnormal classes.
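    A quick sanity check in Python (same assumptions as the sketch for the unextended dataset above), recovering the three-class subset by dropping the dancing class:

    import pandas as pd

    data = pd.read_csv("data.csv")
    labels = pd.read_csv("labels.csv").iloc[:, 0]

    print(labels.value_counts())        # expect four classes: 0, 1, 2, 3
    mask = labels != 3                  # drop the added 'dancing' class
    data_3class, labels_3class = data[mask], labels[mask]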

    License Notes from the original dataset authors:

    Original authors: This data is free for use in research projects. You may include this data in commercially-sold products, but you may not resell this data directly, even in converted form. If you publish results obtained using this data, we would appreciate it if you would send the citation to your published paper to jkh+mocap@cs.cmu.edu, and also would add this text to your acknowledgments section: The data used in this project was obtained from mocap.cs.cmu.edu. The database was created with funding from NSF EIA-0196217.

  4. CMU Mocap

    • kaggle.com
    zip
    Updated Apr 30, 2019
    Cite
    K Scott Mader (2019). CMU Mocap [Dataset]. https://www.kaggle.com/kmader/cmu-mocap
    Available download formats: zip (8,506,581,814 bytes)
    Dataset updated
    Apr 30, 2019
    Authors
    K Scott Mader
    License

    Attribution-NoDerivs 4.0 (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/
    License information was derived automatically

    Description

    Acknowledgements

    http://mocap.cs.cmu.edu/

    This data is free for use in research projects. You may include this data in commercially-sold products, but you may not resell this data directly, even in converted form. If you publish results obtained using this data, we would appreciate it if you would send the citation to your published paper to jkh+mocap@cs.cmu.edu, and also would add this text to your acknowledgments section: The data used in this project was obtained from mocap.cs.cmu.edu. The database was created with funding from NSF EIA-0196217.


  5. CMU Motion Capture Walking Database (Vicon)

    • kaggle.com
    zip
    Updated Jan 16, 2020
    Cite
    dasmehdixtr (2020). CMU Motion Capture Walking Database (Vicon) [Dataset]. https://www.kaggle.com/dasmehdixtr/cmu-motion-capture-walking-database
    Available download formats: zip (11,897,363 bytes)
    Dataset updated
    Jan 16, 2020
    Authors
    dasmehdixtr
    License

    Database Contents License (DbCL) v1.0: http://opendatacommons.org/licenses/dbcl/1.0/

    Description

    This database is part of the CMU Graphics Lab Motion Capture Database. This subset includes only the "walking" motions, for use in motion analysis. The data was collected with a Vicon marker system, and the file formats are C3D and ASF/AMC. For more information, please visit the link below.

    I uploaded this data to show how to parse C3D/AMC/ASF files and to run some experiments on it.

    Here is the link to the database.
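    As a rough illustration of the kind of parsing mentioned above, a sketch using the third-party py-c3d package (pip install c3d); the file name is a placeholder:

    import c3d

    # Iterate over marker frames in one of the C3D files (placeholder name).
    with open("some_walking_trial.c3d", "rb") as handle:
        reader = c3d.Reader(handle)
        for frame_no, points, analog in reader.read_frames():
            # points has one row per marker: x, y, z, residual, camera mask
            print(frame_no, points.shape)
            if frame_no >= 5:
                break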

  6. Cmu_mocap

    • kaggle.com
    zip
    Updated Aug 28, 2020
    Cite
    Cristhian Kaori Valencia Marin (2020). Cmu_mocap [Dataset]. https://www.kaggle.com/cristhiankvalencia/cmu-mocap
    Available download formats: zip (1,172,845,557 bytes)
    Dataset updated
    Aug 28, 2020
    Authors
    Cristhian Kaori Valencia Marin
    Description

    Dataset

    This dataset was created by Cristhian Kaori Valencia Marin.

  7. rlu_locomotion

    • tensorflow.org
    Updated Nov 23, 2022
    + more versions
    Cite
    (2022). rlu_locomotion [Dataset]. https://www.tensorflow.org/datasets/catalog/rlu_locomotion
    Dataset updated
    Nov 23, 2022
    Description

    RL Unplugged is a suite of benchmarks for offline reinforcement learning. RL Unplugged is designed around the following considerations: to facilitate ease of use, we provide the datasets with a unified API, which makes it easy for the practitioner to work with all of the data in the suite once a general pipeline has been established.

    The datasets follow the RLDS format to represent steps and episodes.

    These tasks are made up of the corridor locomotion tasks involving the CMU Humanoid, for which prior efforts have either used motion capture data (Merel et al., 2019a; Merel et al., 2019b) or training from scratch (Song et al., 2020). In addition, the DM Locomotion repository contains a set of tasks adapted to be suited to a virtual rodent (Merel et al., 2020). We emphasize that the DM Locomotion tasks feature the combination of challenging high-DoF continuous control along with perception from rich egocentric observations. For details on how the dataset was generated, please refer to the paper.

    If you are interested in a very challenging offline RL dataset with a continuous action space, we recommend trying offline RL methods on the DeepMind Locomotion dataset.

    To use this dataset:

    import tensorflow_datasets as tfds

    # Load the locomotion benchmark and inspect a few episodes.
    ds = tfds.load('rlu_locomotion', split='train')
    for ex in ds.take(4):
        print(ex)


    See the guide for more information on tensorflow_datasets.
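    Because the data follows the RLDS format, each example is an episode whose steps field is itself a nested dataset. A sketch of iterating over individual steps (field names follow the usual RLDS convention, so verify them against the dataset spec):

    import tensorflow_datasets as tfds

    ds = tfds.load('rlu_locomotion', split='train')
    for episode in ds.take(1):
        # episode['steps'] is a nested tf.data.Dataset of per-timestep dicts.
        for step in episode['steps'].take(3):
            print(step['action'].shape, float(step['reward']), bool(step['is_terminal']))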

  8. Expert-Level Understanding of Social Scenes Requires Early Visual...

    • data.mendeley.com
    Updated Apr 25, 2025
    Cite
    ehud zohary (2025). Expert-Level Understanding of Social Scenes Requires Early Visual Experience. Naveh et al. [Dataset]. http://doi.org/10.17632/m448rzky7y.1
    Dataset updated
    Apr 25, 2025
    Authors
    ehud zohary
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We studied 28 late-sighted Ethiopian children who were born with bilateral cataracts and remained nearly blind for years, recovering pattern vision only in late childhood. This "natural experiment" offers a rare opportunity to assess the causal effect of early visual experience on later function acquisition. Here, we focus on vision-based understanding of human social interactions. The late-sighted were poorer than typically-developing peers (albeit better than chance) at categorizing observed social scenes (as friendly or aggressive), irrespective of the display format (i.e. full-body videos, still images, or point-light displays). They were also impaired in recognizing single-person attributes that are useful for understanding human interactions (such as judging heading direction from biological-motion cues, or emotional states from body-posture gestures). Thus, comprehension of visually observed, socially relevant actions and body gestures is impaired in the late-sighted. We conclude that early visual experience is necessary for developing the skills required to use visual cues for social scene understanding.

    The dataset consists of the stimuli used for the experiments in the paper “Expert-Level Understanding of Social Scenes Requires Early Visual Experience”.

    The stimuli for experiments 1, 2, and 3 were animated from motion capture data obtained from three databases: mocap.cs.cmu.edu (created with funding from NSF EIA-0196217), motekentertainment.com, and the PLAViMoP database (https://plavimop.prd.fr/en/motions). Recorded trajectories were processed and retargeted onto different combinations of six avatar models downloaded from the Mixamo dataset (https://www.mixamo.com), using the commercial software Autodesk MotionBuilder (http://usa.autodesk.com).

    The stimuli for experiment 4 are based on images from the BESST body posture picture set (www.rub.de/neuropsy/BESST.html). The face in each image was obscured by a grey ellipse.

  9. amp-dataset

    • huggingface.co
    Updated Jun 3, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Artificial Mechanical Intelligence (2025). amp-dataset [Dataset]. https://huggingface.co/datasets/ami-iit/amp-dataset
    Dataset updated
    Jun 3, 2025
    Dataset authored and provided by
    Artificial Mechanical Intelligence
    License

    Other license: https://choosealicense.com/licenses/other/

    Description

    Retargeted Robot Motion Dataset

    This dataset provides retargeted motion capture sequences for a variety of robotic platforms. The motion data is derived from the CMU Motion Capture Database and includes a wide range of motion types beyond locomotion — such as gestures, interactions, and full-body activities. The data has been adapted to match the kinematic structure of specific robots, enabling its use in tasks such as:

    Imitation learning
    Reinforcement learning
    Motion analysis
    … See the full description on the dataset page: https://huggingface.co/datasets/ami-iit/amp-dataset.
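    A minimal way to fetch the files locally with the huggingface_hub client (the repository's internal file layout is not described here, so inspect the snapshot after download):

    from huggingface_hub import snapshot_download

    # Downloads the whole dataset repository into the local Hugging Face cache.
    local_dir = snapshot_download(repo_id="ami-iit/amp-dataset", repo_type="dataset")
    print(local_dir)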

  10. Tongue Mocap dataset

    • opendatalab.com
    zip
    Updated Mar 23, 2023
    Cite
    Carnegie Mellon University (2023). Tongue Mocap dataset [Dataset]. https://opendatalab.com/OpenDataLab/Tongue_Mocap_dataset
    Available download formats: zip
    Dataset updated
    Mar 23, 2023
    Dataset provided by
    Carnegie Mellon University
    Queen’s University
    Haskins Laboratories
    Epic Games
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    We collected a large-scale speech-to-tongue mocap dataset that focuses on capturing tongue, jaw, and lip motion during speech. This dataset enables research on data-driven techniques for realistic inner-mouth animation. We present a method that leverages recent deep-learning-based audio feature representations to build a robust and generalizable speech-to-animation pipeline.

  11. Data from: Kuopio gait dataset: motion capture, inertial measurement and...

    • data.niaid.nih.gov
    Updated Dec 16, 2024
    Cite
    Lavikainen, Jere; Vartiainen, Paavo; Stenroth, Lauri; Karjalainen, Pasi; Korhonen, Rami; Liukkonen, Mimmi; Mononen, Mika (2024). Kuopio gait dataset: motion capture, inertial measurement and video-based sagittal-plane keypoint data from walking trials [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10559503
    Dataset updated
    Dec 16, 2024
    Dataset provided by
    University of Eastern Finland
    Kuopio University Hospital
    Authors
    Lavikainen, Jere; Vartiainen, Paavo; Stenroth, Lauri; Karjalainen, Pasi; Korhonen, Rami; Liukkonen, Mimmi; Mononen, Mika
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Kuopio
    Description

    This dataset contains motion capture (3D marker trajectories, ground reaction forces and moments), inertial measurement unit (wearable Movella Xsens MTw Awinda sensors on the pelvis, both thighs, both shanks, and both feet), and sagittal-plane video (anatomical keypoints identified with the OpenPose human pose estimation algorithm) data. The data is from 51 willing participants and was collected in the HUMEA laboratory at the University of Eastern Finland, Kuopio, Finland, between 2022 and 2023. All trials were conducted barefoot.

    The file structure contains an Excel file with information about the participants, data folders under each subject (numbered 01 to 51), and a MATLAB script.

    The Excel file has the following data for the participants (a Python loading sketch follows the list):

    ID: ID of the participants from 1 to 51

    Age: age of the participant in years

    Gender: biological sex as M for male, F for female

    Leg: the participant's dominant leg, identified by asking which foot the participant would use to kick a football; R for right, L for left

    Height: height of the participant in centimeters

    Invalid_trials: list of invalid trials in the motion capture (MOCAP) data, usually classified as such because the participant did not properly step on the middle force plate

    IAD: inter-asis distance in millimeters, the distance between palpated left and right anterior superior iliac spine, measured with a caliper

    Left_knee_width: width of the left knee from medial epicondyle to lateral epicondyle in millimeters, palpated and measured with a caliper

    Right_knee_width: same as above for the right knee

    Left_ankle_width: width of the left ankle from medial malleolus to lateral malleolus in millimeters, palpated and measured with a caliper

    Right_ankle_width: same as above for the right ankle

    Left_thigh_length: the distance between the greater trochanter of the left femur and the lateral epicondyle of the left femur in millimeters, palpated and measured with a measuring tape

    Right_thigh_length: same as above for the right thigh

    Left_shank_length: the distance between the medial epicondyle of the femur and the medial malleolus of the tibia in millimeters, palpated and measured with a measuring tape

    Right_shank_length: same as above for the right shank

    Mass: mass in kilograms, measured on a force plate just before the walking measurements

    ICD: inter-condylar distance of the knee of the dominant leg, measured from low-field MRI

    Left_knee_width_mocap: distance between reflective MOCAP markers on the medial and lateral epicondyles of the knee in millimeters, measured from a static standing trial; -1 for missing (subject did not have those markers)

    Right_knee_width_mocap: same as above for the right knee
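    A minimal pandas sketch for reading the participant table (the Excel file name is not given above, so it is a placeholder; column names follow the field descriptions, but their exact spelling may differ in the file):

    import pandas as pd

    # Placeholder file name; use the actual Excel file in the dataset root.
    info = pd.read_excel("participant_info.xlsx")

    print(info[["ID", "Age", "Height", "Mass"]].describe())
    print(info.set_index("ID")["Invalid_trials"].dropna())   # flagged MOCAP trials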

    The folders under each subject (folders numbered 01 to 51) are as follows:

    imu: "Raw" inertial measurement unit (IMU) data files that can be read with the Xsens Device API (included in Xsens MT Manager 4.6, which may no longer be available). You will not need this folder if you use the data in the imu_extracted folder.

    imu_extracted: IMU data extracted from those data files using the Xsens Device API, so you don't have to.

    The data is saved as MATLAB structs whose fields are named after the sensor IDs (e.g., "B42D48"); a Python loading sketch follows the field descriptions below. The sensor IDs and their corresponding IMU locations are as follows:

    pelvis IMU: B42DA3

    right femur IMU: B42DA2

    left femur IMU: B42D4D

    right tibia IMU: B42DAE

    left tibia IMU: B42D53

    right foot IMU: B42D48

    left foot IMU: B42D51 (except for subjects 01 and 02, where left foot IMU has the ID B42D4E)

    Some of the fields contain only zeros because they could not be read from those sensors, but under each sensor the fields "calibratedAcceleration", "freeAcceleration", "time", "rotationMatrix", and "quaternion" contain usable data.

    time: Contains the time stamp of each frame, recorded at 100 Hz; subtract the first value from the time vector and divide by 100 to get the time in seconds from the beginning of the walking trial.

    calibratedAcceleration and freeAcceleration: Contain triaxial acceleration data from the accelerometers of the IMU. freeAcceleration is calibratedAcceleration with the effect of Earth's gravitational acceleration removed.

    rotationMatrix: Orientations of the IMU as rotation matrices.

    quaternion: Orientations of the IMU as quaternions.
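    For Python users, a rough equivalent of reading one of these structs (this assumes the .mat files are in a scipy-readable format; the file path and top-level variable name are placeholders):

    import numpy as np
    import scipy.io as sio

    # Placeholder path and variable name; adjust to the files in imu_extracted.
    mat = sio.loadmat("01/imu_extracted/walking_trial.mat", simplify_cells=True)
    pelvis = mat["imu"]["B42DA3"]            # pelvis IMU, per the sensor ID table

    t = np.asarray(pelvis["time"], dtype=float)
    seconds = (t - t[0]) / 100.0             # 100 Hz sampling, per the description
    acc = np.asarray(pelvis["freeAcceleration"])   # triaxial, gravity removed
    quat = np.asarray(pelvis["quaternion"])        # orientation as quaternions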

    openpose: Trajectories of the keypoints identified from the sagittal-plane video frames, saved as JSON files.

    The keypoints are from the BODY_25 model of OpenPose (https://cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/md_doc_02_output.html).

    Each frame in the video has its own JSON file.

    You can use the function in the script "OpenPose_to_keypoint_table.m" in the root folder to read the keypoint trajectories and confidences of all frames in a walking trial into MATLAB tables. The function takes as its argument the path to the folder containing the JSON files of the walking trial.
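    For Python users, a rough equivalent of the MATLAB reader; the per-frame layout below follows the standard OpenPose JSON output (people -> pose_keypoints_2d as flat x, y, confidence triplets), and the folder path is a placeholder:

    import json
    from pathlib import Path

    frames = []
    for path in sorted(Path("01/openpose/some_trial").glob("*.json")):
        with open(path) as f:
            doc = json.load(f)
        if doc["people"]:
            kp = doc["people"][0]["pose_keypoints_2d"]   # 25 keypoints * (x, y, conf)
            frames.append([kp[i:i + 3] for i in range(0, len(kp), 3)])
        else:
            frames.append(None)   # no person detected in this frame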

    Note that some subjects (11, 14, 37, 49) do not have keypoint and IMU data.

    The folders under each subject are divided into three ZIP archives with 17 subjects each.

    The script "OpenPose_to_keypoint_table.m" is a MATLAB script for extracting keypoint trajectories and confidences from JSON files into tables in MATLAB.

    Publication in Data in Brief: https://doi.org/10.1016/j.dib.2024.110841

    Contact: Jere Lavikainen, jere.lavikainen@uef.fi

  12. Data from: Neural Policy Style Transfer with Twin-Delayed DDPG (NPST3)

    • produccioncientifica.ucm.es
    Updated 2021
    Cite
    Fernandez-Fernandez, Raul; Aggravi, Marco; Victores, Juan G.; Giordano, Paolo Robuffo; Pacchierotti, Claudio (2021). Neural Policy Style Transfer with Twin-Delayed DDPG (NPST3) [Dataset]. https://produccioncientifica.ucm.es/documentos/668fc443b9e7c03b01bd8162
    Dataset updated
    2021
    Authors
    Fernandez-Fernandez, Raul; Aggravi, Marco; Victores, Juan G.; Giordano, Paolo Robuffo; Pacchierotti, Claudio
    Description

    Neural Policy Style Transfer with Twin-Delayed DDPG (NPST3) dataset. The research leading to these results has received funding from: RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub, S2018/NMT-4331, funded by “Programas de Actividades I+D en la Comunidad de Madrid” and cofunded by Structural Funds of the EU; ROBOASSET, “Sistemas robóticos inteligentes de diagnóstico y rehabilitación de terapias de miembro superior”, PID2020-113508RB-I00, funded by AGENCIA ESTATAL DE INVESTIGACION (AEI); and “Programa propio de investigación convocatoria de movilidad 2020” from Universidad Carlos III de Madrid. The original data used in this project was obtained from mocap.cs.cmu.edu. The original data was created with funding from NSF EIA-0196217.

