Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is an index of a subset of the CMU motion capture database. The corresponding ‘*.bvh’ files can be downloaded from http://mocap.cs.cmu.edu/. Please cite this paper: Qinkun Xiao, Junfang Li, Qinhan Xiao. “Human Motion Capture Data Retrieval Based on Quaternion and EMD”. International Conference on Intelligent Human-Machine Systems and Cybernetics, vol. 1, 2013, pp. 517–520.
This is a CSV raw version of the CMU MoCap dataset subset used in [Zhou et al., 2019]. No windowing, striding, or normalisation has been applied to the data. For more information concerning the structure of the data, please see the BeatGAN repo.
All of the data was concatenated into a single CSV file, data.csv. The accompanying labels.csv contains the label for each data sample.
Zhou et al. use three classes for their dataset: walking (labelled 0) is considered the normal class, while jogging (labelled 1) and jumping (labelled 2) are considered abnormal classes.
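As a rough illustration, the two CSV files can be loaded with pandas. This is only a sketch: it assumes data.csv and labels.csv sit in the working directory and are row-aligned; check the BeatGAN repo for the exact column layout.
import pandas as pd

# Assumption: data.csv holds one sample per row and labels.csv one label per row.
data = pd.read_csv("data.csv")
labels = pd.read_csv("labels.csv")

# Class convention from the description above:
class_names = {0: "walking (normal)", 1: "jogging (abnormal)", 2: "jumping (abnormal)"}
print(data.shape, labels.shape)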
Original Authors: This data is free for use in research projects. You may include this data in commercially-sold products, but you may not resell this data directly, even in converted form. If you publish results obtained using this data, we would appreciate it if you would send the citation to your published paper to jkh+mocap@cs.cmu.edu, and also would add this text to your acknowledgments section: The data used in this project was obtained from mocap.cs.cmu.edu. The database was created with funding from NSF EIA-0196217.
This is a CSV raw version of the CMU MoCap dataset subset used in [Zhou et al., 2019]. No windowing, striding, or normalisation has been applied to the data.
For more information concerning the structure of the data, please see the BeatGAN repo.
The dataset is also extended with a new class, dancing. For the unextended dataset, please see here. The additional data can be found here. For a list of the data actually processed, please see processed_files.txt.
All of the data was concatenated into a single CSV file, data.csv. The accompanying labels.csv contains the label for each data sample.
Zhou et al. use three classes for their dataset; this extended version adds a fourth. Walking (labelled 0) is considered the normal class, while jogging (labelled 1), jumping (labelled 2), and dancing (labelled 3) are considered abnormal classes.
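For anomaly-detection use in the spirit of Zhou et al., the labels can be collapsed into normal vs. abnormal. The snippet below is illustrative only and assumes the label is stored in the first column of labels.csv.
import pandas as pd

labels = pd.read_csv("labels.csv")
# Assumption: the label sits in the first column; 0 = walking (normal),
# 1 = jogging, 2 = jumping, 3 = dancing (all abnormal).
is_abnormal = labels.iloc[:, 0] != 0
print(is_abnormal.value_counts())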
Original Authors: This data is free for use in research projects. You may include this data in commercially-sold products, but you may not resell this data directly, even in converted form. If you publish results obtained using this data, we would appreciate it if you would send the citation to your published paper to jkh+mocap@cs.cmu.edu, and also would add this text to your acknowledgments section: The data used in this project was obtained from mocap.cs.cmu.edu. The database was created with funding from NSF EIA-0196217.
Attribution-NoDerivs 4.0 (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/
License information was derived automatically
This data is free for use in research projects. You may include this data in commercially-sold products, but you may not resell this data directly, even in converted form. If you publish results obtained using this data, we would appreciate it if you would send the citation to your published paper to jkh+mocap@cs.cmu.edu, and also would add this text to your acknowledgments section: The data used in this project was obtained from mocap.cs.cmu.edu. The database was created with funding from NSF EIA-0196217.
Database Contents License (DbCL) v1.0: http://opendatacommons.org/licenses/dbcl/1.0/
This database is part of the CMU Graphics Lab Motion Capture Database. This subset includes only "walking" motions, intended for motion analysis. The data were collected with a Vicon marker-based system and are provided in C3D and ASF/AMC formats. For more information please visit the link.
I uploaded this data just to show how to parse C3D/AMC/ASF files and run some experiments on it.
Here is the link to the database.
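Since the point of the upload is to show how the files can be parsed, here is a minimal Python sketch for reading marker trajectories from a C3D file with the third-party ezc3d library; the filename is hypothetical, and the ASF/AMC files would need a separate parser.
import ezc3d  # pip install ezc3d

c3d_data = ezc3d.c3d("walk_subject_01.c3d")  # hypothetical filename
points = c3d_data["data"]["points"]  # array of shape (4, n_markers, n_frames): x, y, z, 1
marker_names = c3d_data["parameters"]["POINT"]["LABELS"]["value"]
print(len(marker_names), points.shape)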
This dataset was created by Cristhian Kaori Valencia Marin.
RL Unplugged is a suite of benchmarks for offline reinforcement learning. RL Unplugged is designed around the following considerations: to facilitate ease of use, we provide the datasets with a unified API which makes it easy for the practitioner to work with all data in the suite once a general pipeline has been established.
The datasets follow the RLDS format to represent steps and episodes.
These tasks are made up of the corridor locomotion tasks involving the CMU Humanoid, for which prior efforts have either used motion capture data (Merel et al., 2019a; Merel et al., 2019b) or training from scratch (Song et al., 2020). In addition, the DM Locomotion repository contains a set of tasks adapted to be suited to a virtual rodent (Merel et al., 2020). We emphasize that the DM Locomotion tasks feature the combination of challenging high-DoF continuous control along with perception from rich egocentric observations. For details on how the dataset was generated, please refer to the paper.
We recommend trying offline RL methods on the DeepMind Locomotion dataset if you are interested in a very challenging offline RL dataset with a continuous action space.
To use this dataset:
import tensorflow_datasets as tfds
ds = tfds.load('rlu_locomotion', split='train')
for ex in ds.take(4):
  print(ex)
See the guide for more information on tensorflow_datasets.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We studied 28 late-sighted Ethiopian children who were born with bilateral cataracts and remained nearly blind for years, recovering pattern vision only in late childhood. This "natural experiment" offers a rare opportunity to assess the causal effect of early visual experience on later function acquisition. Here, we focus on vision-based understanding of human social interactions. The late-sighted were poorer than typically developing peers (albeit better than chance) at categorizing observed social scenes (as friendly or aggressive), irrespective of the display format (i.e. full-body videos, still images, or point-light displays). They were also impaired at recognizing single-person attributes that are useful for understanding human interactions (such as judging heading direction based on biological-motion cues, or emotional states from body-posture gestures). Thus, comprehension of visually observed socially relevant actions and body gestures is impaired in the late-sighted. We conclude that early visual experience is necessary for developing the skills required to utilize visual cues for social scene understanding.
The dataset consists of the stimuli used for the experiments in the paper “Expert-Level Understanding of Social Scenes Requires Early Visual Experience”.
The stimuli for experiments 1, 2 & 3 were animated based on motion capture data obtained from three databases: mocap.cs.cmu.edu, created with funding from NSF EIA-0196217; motekentertainment.com; and the PLAViMoP database (https://plavimop.prd.fr/en/motions). Recorded trajectories were processed and retargeted onto avatar models using the commercial software Autodesk Motion Builder (http://usa.autodesk.com), with different combinations of six avatars downloaded from the Mixamo dataset (https://www.mixamo.com).
The stimuli for experiment 4 are based on images from the BESST body posture picture set (www.rub.de/neuropsy/BESST.html). The face in each image was obscured by a grey ellipse.
License: https://choosealicense.com/licenses/other/
Retargeted Robot Motion Dataset
This dataset provides retargeted motion capture sequences for a variety of robotic platforms. The motion data is derived from the CMU Motion Capture Database and includes a wide range of motion types beyond locomotion — such as gestures, interactions, and full-body activities. The data has been adapted to match the kinematic structure of specific robots, enabling its use in tasks such as:
Imitation learning, reinforcement learning, motion analysis… See the full description on the dataset page: https://huggingface.co/datasets/ami-iit/amp-dataset.
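The repository files can be fetched directly from the Hugging Face Hub; the sketch below uses huggingface_hub to download the raw dataset repository regardless of its on-disk format (check the dataset page for the actual file layout).
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Download the raw dataset repository to a local cache directory.
local_dir = snapshot_download(repo_id="ami-iit/amp-dataset", repo_type="dataset")
print(local_dir)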
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
We collected a large-scale speech-to-tongue mocap dataset that focuses on capturing tongue, jaw, and lip motion during speech. This dataset enables research on data-driven techniques for realistic inner-mouth animation. We present a method that leverages recent deep-learning-based audio feature representations to build a robust and generalizable speech-to-animation pipeline.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains motion capture (3D marker trajectories, ground reaction forces and moments), inertial measurement unit (wearable Movella Xsens MTw Awinda sensors on the pelvis, both thighs, both shanks, and both feet), and sagittal-plane video (anatomical keypoints identified with the OpenPose human pose estimation algorithm) data. The data were collected from 51 volunteer participants in the HUMEA laboratory at the University of Eastern Finland, Kuopio, Finland, between 2022 and 2023. All trials were conducted barefoot.
The file structure contains an Excel file with information about the participants, data folders for each subject (numbered 01 to 51), and a MATLAB script.
The Excel file has the following data for the participants:
ID: ID of the participants from 1 to 51
Age: age of the participant in years
Gender: biological sex as M for male, F for female
Leg: the participant's dominant leg, identified by asking which foot the participant would use to kick a football; R for right, L for left
Height: height of the participant in centimeters
Invalid_trials: list of invalid trials in the motion capture (MOCAP) data, usually classified as such because the participant did not properly step on the middle force plate
IAD: inter-asis distance in millimeters, the distance between palpated left and right anterior superior iliac spine, measured with a caliper
Left_knee_width: width of the left knee from medial epicondyle to lateral epicondyle in millimeters, palpated and measured with a caliper
Right_knee_width: same as above for the right knee
Left_ankle_width: width of the left ankle from medial malleolus to lateral malleolus in millimeters, palpated and measured with a caliper
Right_ankle_width: same as above for the right ankle
Left_thigh_length: the distance between the greater trochanter of the left femur and the lateral epicondyle of the left femur in millimeters, palpated and measured with a measuring tape
Right_thigh_length: same as above for the right thigh
Left_shank_length: the distance between the medial epicondyle of the femur and the medial malleolus of the tibia in millimeters, palpated and measured with a measuring tape
Right_shank_length: same as above for the right shank
Mass: mass in kilograms, measured on a force plate just before the walking measurements
ICD: inter-condylar distance of the knee of the dominant leg, measured from low-field MRI
Left_knee_width_mocap: distance between reflective MOCAP markers on the medial and lateral epicondyles of the knee in millimeters, measured from a static standing trial; -1 for missing (subject did not have those markers)
Right_knee_width_mocap: same as above for the right knee
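The participant table reads directly into Python or MATLAB; below is a minimal pandas sketch (the Excel filename used here is hypothetical, so substitute the actual name in the archive).
import pandas as pd  # reading .xlsx also requires openpyxl

# Hypothetical filename; use the actual Excel file from the archive.
participants = pd.read_excel("participant_info.xlsx")
print(participants[["ID", "Age", "Gender", "Height", "Mass"]].head())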
The folders under each subject (folders numbered 01 to 51) are as follows:
imu: "Raw" inertial measurement unit (IMU) data files that can be read with the Xsens Device API (included in Xsens MT Manager 4.6, which may no longer be available). You won't need this if you use the data in the imu_extracted folder.
imu_extracted: IMU data extracted from those data files using the Xsens Device API, so you don't have to.
The data is saved as MATLAB structs where the fields are named as a sensor ID (e.g., "B42D48"). The sensor IDs and their corresponding IMU locations are as follows:
pelvis IMU: B42DA3
right femur IMU: B42DA2
left femur IMU: B42D4D
right tibia IMU: B42DAE
left tibia IMU: B42D53
right foot IMU: B42D48
left foot IMU: B42D51 (except for subjects 01 and 02, where left foot IMU has the ID B42D4E)
Some of the data are just zeros as they couldn't be read from these sensors, but under each sensor, the fields "calibratedAcceleration", "freeAcceleration", "time", "rotationMatrix", and "quaternion" contain usable data.
time: Contains the time stamp of each frame, recorded at 100 Hz; if you subtract the first value from all values in the time vector and divide the result by 100, you get the time in seconds from the beginning of the walking trial.
calibratedAcceleration and freeAcceleration: Contain triaxial acceleration data from the accelerometers of the IMU. freeAcceleration is just calibratedAcceleration without the effect of Earth's gravitational acceleration.
rotationMatrix: Orientations of the IMU as rotation matrices.
quaternion: Orientations of the IMU as quaternions.
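To make the field layout above concrete, here is a minimal Python sketch that loads one extracted file with scipy and converts the time stamps to seconds. The file name and top-level variable name are hypothetical, since the exact naming inside imu_extracted is not described here; the same fields are directly accessible in MATLAB.
from scipy.io import loadmat  # pip install scipy

# Hypothetical file and variable names inside the imu_extracted folder.
mat = loadmat("imu_extracted/walking_trial_01.mat", simplify_cells=True)
pelvis = mat["imu"]["B42DA3"]  # pelvis IMU, sensor IDs as listed above

t = pelvis["time"]
seconds = (t - t[0]) / 100.0   # 100 Hz counter converted to seconds from trial start

acc = pelvis["calibratedAcceleration"]  # triaxial accelerometer data
print(seconds[-1], acc.shape)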
openpose: Trajectories of the keypoints identified from sagittal plane video frames, saved as json files.
The keypoints are from the BODY_25 model of OpenPose (https://cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/md_doc_02_output.html).
Each frame in the video has its own json file.
You can use the function in the script "OpenPose_to_keypoint_table.m" in the root folder to read the keypoint trajectories and confidences of all frames in a walking trial into MATLAB tables. The function takes as argument the path to the folder containing the json files of the walking trial.
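Besides the provided MATLAB function, the per-frame JSON files can also be read in Python. The sketch below assumes the standard OpenPose BODY_25 output layout (25 keypoints stored as flattened x, y, confidence triplets under pose_keypoints_2d) and a hypothetical trial folder path.
import json
from pathlib import Path
import numpy as np

frames = []
for f in sorted(Path("01/openpose/trial_01").glob("*.json")):  # hypothetical path
    people = json.loads(f.read_text()).get("people", [])
    if people:
        kp = np.asarray(people[0]["pose_keypoints_2d"]).reshape(25, 3)  # x, y, confidence
    else:
        kp = np.full((25, 3), np.nan)  # no person detected in this frame
    frames.append(kp)
keypoints = np.stack(frames)  # shape (n_frames, 25, 3)
print(keypoints.shape)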
Note that some subjects (11, 14, 37, 49) do not have keypoint and IMU data.
The folders under each subject are divided into three ZIP archives with 17 subjects each.
The script "OpenPose_to_keypoint_table.m" is a MATLAB script for extracting keypoint trajectories and confidences from JSON files into tables in MATLAB.
Publication in Data in Brief: https://doi.org/10.1016/j.dib.2024.110841
Contact: Jere Lavikainen, jere.lavikainen@uef.fi
Neural Policy Style Transfer with Twin-Delayed DDPG (NPST3) dataset. The research leading to these results has received funding from: RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub, S2018/NMT-4331, funded by “Programas de Actividades I+D en la Comunidad de Madrid” and co-funded by Structural Funds of the EU; ROBOASSET, “Sistemas robóticos inteligentes de diagnóstico y rehabilitación de terapias de miembro superior”, PID2020-113508RB-I00, funded by AGENCIA ESTATAL DE INVESTIGACION (AEI); and “Programa propio de investigación convocatoria de movilidad 2020” from Universidad Carlos III de Madrid. The original data used in this project was obtained from mocap.cs.cmu.edu. The original data was created with funding from NSF EIA-0196217.