Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Accompanying data for manuscript “Columnar clusters in the human motion complex reflect consciously perceived motion axis” written by Marian Schneider, Valentin Kemper, Thomas Emmerling, Federico De Martino, Rainer Goebel, submitted, November 2018.
Imaging files
-------------
* T1w and PDw images, only acquired in session 1
* 2 runs task-MotLoc, only acquired in session 2
* 5-6 runs task-ambiguous (called "Experiment 1" in accompanying manuscript, divided across 2 scanning sessions)
* 5-6 runs task-unambiguous (called "Experiment 2" in accompanying manuscript, divided across 2 scanning sessions)
Acquisition details
-------------------
For visualization of the functional results, we acquired scans with structural information in the first scanning session. At high magnetic fields, MR images exhibit high signal intensity variations that result from heterogeneous RF coil profiles. We therefore acquired both T1w images and PDw images using a magnetization-prepared 3D rapid gradient-echo (3D MPRAGE) sequence (TR: 3100 ms (T1w) or 1440 ms (PDw), voxel size = 0.6 mm isotropic, FOV = 230 x 230 mm2, matrix = 384 x 384, slices = 256, TE = 2.52 ms, FA = 5°). Acquisition time was reduced by using 3× GRAPPA parallel imaging and 6/8 Partial Fourier in phase encoding direction (acquisition time (TA): 8 min 49 s (T1w) and 4 min 6 s (PDw)).
To determine our region of interest, we acquired two hMT+ localiser runs. We used a 2D gradient echo (GE) echo planar imaging (EPI) sequence (1.6 mm isotropic nominal resolution; TE/TR = 18/2000 ms; in-plane field of view (FoV) 150×150 mm; matrix size 94 x 94; 28 slices; nominal flip angle (FA) = 69°; echo spacing = 0.71 ms; GRAPPA factor = 2, partial Fourier = 7/8; phase encoding direction head - foot; 240 volumes). We ensured that the area of acquisition had bilateral coverage of the posterior inferior temporal sulci, where we expected the hMT+ areas. Before acquisition of the first functional run, we collected 10 volumes for distortion correction - 5 volumes with the settings specified here and 5 more volumes with identical settings but opposite phase encoding (foot - head), here called "phase1" and "phase2".
For the sub-millimetre measurements (Experiments 1: here called "task-ambiguous" and Experiments 2: here called "task-unambiguous"), we used a 2D GE EPI sequence (TE/TR = 25.6/2000 ms; in-plane FoV 148×148 mm; matrix size 186 x 186; slices = 28; nominal FA = 69°; echo spacing = 1.05 ms; GRAPPA factor = 3, partial Fourier = 6/8; phase encoding direction head - foot; 300 volumes), yielding a nominal resolution of 0.8 mm isotropic. Placement of the small functional slab was guided by online analysis of the hMT+ localizer data recorded immediately at the beginning of the first session. This allowed us to ensure bilateral coverage of area hMT+ for every subject. In the second scanning session, the slab was placed using Siemens auto-align functionality and manual corrections. Before acquisition of the first functional run, we collected 10 volumes for distortion correction (5 volumes with opposite phase encoding: foot - head). During acquisition, runs for the ambiguous and unambiguous motion experiments were interleaved.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
EmokineDataset
Companion resources
Paper
Christensen, Julia F. and Fernandez, Andres and Smith, Rebecca and Michalareas, Georgios and Yazdi, Sina H. N. and Farahi, Fahima and Schmidt, Eva-Madeleine and Bahmanian, Nasimeh and Roig, Gemma (2024): "EMOKINE: A Software Package and Computational Framework for Scaling Up the Creation of Highly Controlled Emotional Full-Body Movement Datasets".
Code https://github.com/andres-fr/emokine
EmokineDataset is a pilot dataset showcasing the usefulness of the emokine software library. It features a single dancer performing 63 short sequences, which have been recorded and analyzed in different ways. This pilot dataset is organized in 3 folders:
Stimuli: The sequences are presented in 4 visual presentations that can be used as stimuli in observer experiments:
Silhouette: Videos with a white silhouette of the dancer on black background.
FLD (Full-Light Display): video recordings with the performer's face blurred out.
PLD (Point-Light Display): videos featuring a black background with white circles corresponding to the selected body landmarks.
Avatar: Videos produced by the proprietary XSENS motion capture software, featuring a robot-like avatar performing the captured movements on a light blue background.
Data: In order to facilitate computation and analysis of the stimuli, this pilot dataset also includes several data formats:
MVNX: Raw motion capture data directly recorded from the XSENS motion capture system.
CSV: Translation of a subset of the MVNX sequences into CSV, included for easier integration with mainstream analysis software tools. The subset includes the following features: acceleration, angularAcceleration, angularVelocity, centerOfMass, footContacts, orientation, position and velocity.
CamPos: While the MVNX provides 3D positions with respect to a global frame of reference, the CamPos JSON files give positions from the perspective of the camera used to render the PLD videos. Specifically, each 3D position is given with respect to the camera as (x, y, z), where (x, y) go from (0, 0) (left, bottom) to (1, 1) (right, top), and z is the distance between the camera and the point in meters. This makes it easy to obtain a 2-dimensional projection of the dancer's position by simply ignoring z (see the sketch after this list).
Kinematic: Analysis of a selection of relevant kinematic features, using information from MVNX, Silhouette and CamPos, provided in tabular form.
Validation: Data and experiments reported in our paper as part of the data validation, to support its meaningfulness and usefulness for downstream tasks.
TechVal: A collection of plots presenting relevant statistics of the pilot dataset.
ObserverExperiment: Results, in tabular form, of an online study conducted with human participants, who were asked to recognize the emotions of the stimuli and rate their beauty.
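As a small illustration of the CamPos format described above, the following Python sketch drops z to obtain a 2D projection. The JSON key layout here is an assumption (a hypothetical mapping from joint name to a list of [x, y, z] frames); the actual schema may differ.

```python
import json

# Minimal sketch: project CamPos data to 2D by dropping the camera distance z.
# The exact JSON schema is not specified in this description; we assume
# (hypothetically) a mapping {joint_name: [[x, y, z], ...]} per file.
def campos_to_2d(path):
    with open(path) as f:
        campos = json.load(f)
    # Keep only (x, y); both are already normalized to [0, 1],
    # with (0, 0) at the bottom-left and (1, 1) at the top-right.
    return {joint: [(x, y) for x, y, z in frames]
            for joint, frames in campos.items()}

# Example (hypothetical file name):
# points_2d = campos_to_2d("EmokineDataset/Data/CamPos/CamPos_seq1_angry.json")
```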
More specifically, the 63 unique sequences are divided into 9 unique choreographies, each one being performed once as an explanation, and then 6 times with different intended emotions (angry, content, fearful, joy, neutral and sad). Once downloaded, the pilot dataset should have the following structure:
EmokineDataset
├── Stimuli
│   ├── Avatar
│   ├── FLD
│   ├── PLD
│   └── Silhouette
├── Data
│   ├── CamPos
│   ├── CSV
│   ├── Kinematic
│   ├── MVNX
│   └── TechVal
└── Validation
    ├── TechVal
    └── ObserverExperiment
Each of the stimulus folders, as well as MVNX, CamPos and Kinematic, has this structure:
├── explanation
│   ├── _seq1_explanation.
│   ├── ...
│   └── _seq9_explanation.
├── _seq1_angry.
├── _seq1_content.
├── _seq1_fearful.
├── _seq1_joy.
├── _seq1_neutral.
├── _seq1_sad.
├── ...
└── _seq9_sad.
The CSV directory is slightly different: instead of a single file for each sequence and emotion, it contains a folder with one .csv file for each of the 8 extracted features (acceleration, velocity, ...).
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains motion capture data collected from 11 healthy subjects performing various walking tasks using IMU-based sensors. Each subject performed 8 different tasks under the following conditions:
1. Normal walk
2. Fast walk
3. Normal walk while holding a 1 kg weight with the dominant hand
4. Fast walk while holding a 1 kg weight with the dominant hand
5. Normal walk with a knee brace on one leg
6. Fast walk with a knee brace on one leg
7. Normal walk with a knee brace while holding a 1 kg weight (a combination of Task 3 and Task 5)
8. Fast walk with a knee brace while holding a 1 kg weight (a combination of Task 4 and Task 6)
Data Collection
The data were collected using a commercial IMU-based motion capture system, with 10 modules worn on the following body parts:
- Left foot
- Right foot
- Left shank
- Right shank
- Left thigh
- Right thigh
- Left arm
- Right arm
- Trunk
- Pelvis
Each module recorded the following data along the X, Y, and Z axes:
- Accelerometer data
- Gyroscope data
- Magnetometer data
Sampling Rate:
- The sampling interval is 4 ms for all subjects except sub_01 and sub_03, where it is 6 ms.
- In certain rows of the files, there are irregularities in the recorded time. These occur when the time value reaches 65,535 or multiples of it (e.g., 131,070 and 196,605). This problem is associated with the way time is displayed and does not affect the sample rate.
Data Structure
The dataset is organized into the following folders for each subject.
Subject folders:
- sub_01
- sub_02
- sub_03
- sub_04
- sub_05
- sub_06
- sub_07
- sub_08
- sub_09
- sub_10 (Note: Task 2 is missing from the sub_10 folder)
- sub_11
Task folders within each subject's folder:
- 1_walking normal
- 2_walking fast
- 3_weight normal
- 4_weight fast
- 5_brace normal
- 6_brace fast
- 7_brace weight normal
- 8_brace weight fast
Each task folder contains four CSV files, named according to the trial condition. For example, for the first task, the files are:
- walking normal_Raw.csv
- walking normal_Processed.csv
- walking normal_Euler.csv
- walking normal_JointsKinematics.csv
CSV File Descriptions
1. Raw: Contains raw sensor data.
- Time (ms)
- Accelerometer data (X, Y, Z)
- Gyroscope data (X, Y, Z)
- Magnetometer data (X, Y, Z)
2. Processed: Contains preprocessed data.
- Time (ms)
- Quaternion components (Q0, Q1, Q2, Q3)
- Acceleration in the IMU coordinate system (X, Y, Z)
- Linear acceleration without gravity (X, Y, Z)
- Acceleration in the global coordinate system (X, Y, Z)
3. Euler: Contains Euler angles.
- Time (ms)
- Roll, Pitch, and Yaw angles
4. Joints Kinematics: Contains joint angle data.
- Time (ms)
- Abduction-Adduction angle
- Internal-External Rotation angle
- Flexion-Extension angle
Column Labels
Raw data file:
- Time_LeftFoot, AccX_LeftFoot, AccY_LeftFoot, AccZ_LeftFoot, GyroX_LeftFoot, GyroY_LeftFoot, GyroZ_LeftFoot, MagX_LeftFoot, MagY_LeftFoot, MagZ_LeftFoot, ... (similar pattern for RightFoot, LeftShank, RightShank, LeftThigh, RightThigh, LeftHumerus, RightHumerus, Pelvic, Trunk)
Processed data file:
- Time_LeftFoot, Q0_LeftFoot, Q1_LeftFoot, Q2_LeftFoot, Q3_LeftFoot, Acc_X_LeftFoot, Acc_Y_LeftFoot, Acc_Z_LeftFoot, Acc_linX_LeftFoot, Acc_linY_LeftFoot, Acc_linZ_LeftFoot, Acc_GlinX_LeftFoot, Acc_GlinY_LeftFoot, Acc_GlinZ_LeftFoot, ... (similar pattern for the other body segments)
Euler data file:
- Time_LeftFoot, Roll_LeftFoot, Pitch_LeftFoot, Yaw_LeftFoot, ... (similar pattern for the other body segments)
Joints Kinematics data file:
- Time_LeftAnkle, Abduction-Adduction_LeftAnkle, Internal-External Rotat_LeftAnkle, Flexion-Extension_LeftAnkle, ... (similar pattern for RightAnkle, LeftKnee, RightKnee, LeftHip, RightHip, LeftShoulder, RightShoulder, Pelvic, Trunk2Ground)
Additional Notes
- This dataset can be used for research in biomechanics, rehabilitation, and human motion analysis.
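As an illustration of the column layout above, a minimal Python sketch for loading one raw-sensor file follows (not an official loader; the file path is only an example, and units follow the description):

```python
import numpy as np
import pandas as pd

# Minimal sketch: read one raw-sensor file and compute the acceleration
# magnitude of the left-foot module, using the column labels listed above.
# The path is an example; adjust it to your local copy of the dataset.
raw = pd.read_csv("sub_02/1_walking normal/walking normal_Raw.csv")

t = raw["Time_LeftFoot"]                      # time in ms
acc = raw[["AccX_LeftFoot", "AccY_LeftFoot", "AccZ_LeftFoot"]].to_numpy()
acc_mag = np.linalg.norm(acc, axis=1)         # per-sample acceleration magnitude

# Nominal sampling interval: 4 ms for most subjects (6 ms for sub_01 and sub_03).
print(acc_mag[:10], t.diff().median())
```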
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Retargeted AMASS for Robotics
Project Overview
This project aims to retarget motion data from the AMASS dataset to various robot models and open-source the retargeted data to facilitate research and applications in robotics and human-robot interaction. AMASS (Archive of Motion Capture as Surface Shapes) is a high-quality human motion capture dataset, and the SMPL-X model is a powerful tool for generating realistic human motion data. By adapting the motion data from AMASS… See the full description on the dataset page: https://huggingface.co/datasets/fleaven/Retargeted_AMASS_for_bxi_elf2.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
850-200 hPa environmental shear and motion vectors of 72 landfalling tropical cyclones presented in Trier et al. 2023.
Apache License, v2.0https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
ALMI-X
🌍website 📊code 📖paper
Overview
We release a large-scale whole-body motion control dataset - ALMI-X, featuring high-quality episodic trajectories from MuJoCo simulations deployable on real robots, based on our humanoid control policy - ALMI.
Dataset Instruction
We collect ALMI-X dataset in MuJoCo simulation by running the trained ALMI policy. In this simulation, we combine a diverse range of upper-body motions with omnidirectional lower-body… See the full description on the dataset page: https://huggingface.co/datasets/TeleEmbodied/ALMI-X.
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
For details on using the data and on available helper codes, refer to https://gait-tech.github.io/gaittoolbox/. The experiment had nine healthy subjects (7 men and 2 women, weight 63.0 ± 6.8 kg, height 1.70 ± 0.06 m, age 24.6 ± 3.9 years old). Our system was compared to two benchmark systems, namely the Vicon and Xsens systems. The Vicon Vantage system consisted of eight cameras covering an approximately 4 x 4 m^2 capture area with millimetre accuracy. Vicon data were captured at 100 Hz and processed using Nexus 2.7 software. The Xsens Awinda system consisted of seven MTx units (IMUs). Xsens data were captured at 100 Hz using MT Manager 4.8 and processed using MVN Studio 4.4 software. The Vicon and Xsens recordings were synchronized by having the Xsens Awinda station send a trigger pulse to the Vicon system at the start and stop events of each recording. Each subject had reflective Vicon markers placed according to the Helen-Hayes 16-marker set, seven MTx units attached to the pelvis, thighs, shanks, and feet according to standard Xsens sensor placement, and two MTx units attached near the ankles. Each subject performed the movements listed in the table below twice (i.e., two trials). The subjects stood still for ten seconds before and after each trial. The experiment was approved by the Human Research Ethics Board of the University of New South Wales (UNSW) with approval number HC180413.
TABLE I: TYPES OF MOVEMENTS DONE IN THE VALIDATION EXPERIMENT (with approximate durations)
- Static: Stand still (~10 s)
- Walk: Walk straight and back (~30 s)
- Figure of eight: Walk in figures of eight (~60 s)
- Zig-zag: Walk zigzag (~60 s)
- 5-minute walk: Undirected walk, side step, and stand (~300 s)
- Speedskater: Speedskater on the spot (~30 s)
- Jog: Jog straight and return (~30 s)
- Jumping jacks: Jumping jacks on the spot (~30 s)
- High knee: High knee jog straight and return (~30 s)
- TUG: Timed up and go (~30 s)
neura-sparse.zip contains the main data (cut and aligned). neura-sparse-raw.zip contains the full trials.
Known Issues: Subject 10's left and right foot sensors were accidentally swapped. All files have been fixed except for the Xsens BVH output from the MVN software. Note that Xsens MVN 2019.0, in theory, has the capability to swap sensors in reconstruction; however, the software crashes whenever I attempt to do so, and I could not use a newer Xsens MVN version due to license limitations.
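As an aside, the ten-second stand-still periods at the start and end of each trial can be located programmatically, e.g. with a simple low-motion detector like the sketch below. This is illustrative only; the helper codes at https://gait-tech.github.io/gaittoolbox/ are the intended tooling, and the 1 s window and 5 mm threshold are arbitrary choices.

```python
import numpy as np

# Flag low-motion samples in a 100 Hz position trajectory (N x 3 array),
# e.g. to locate the ~10 s stand-still periods at the start and end of a trial.
def still_mask(pos, fs=100, win_s=1.0, thresh_m=0.005):
    win = int(win_s * fs)
    mask = np.zeros(len(pos), dtype=bool)
    for i in range(0, len(pos) - win):
        seg = pos[i:i + win]
        # peak-to-peak motion within the window, largest over the 3 axes
        if np.ptp(seg, axis=0).max() < thresh_m:
            mask[i:i + win] = True
    return mask
```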
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This upload consists of a dataset of recorded Hand Palm Motion (HPM) gestures. The motions within this HPM dataset involve rotational movements, translational movements, or a combination of both, performed by the palm of the hand. These motions were designed for the frame-invariant gesture recognition Proof of Concept (PoC) shown in the accompanying video. In this PoC, the goal is to control the movement of the end-effector and gripper fingers of a manipulator arm through hand palm motion gestures. Specifically, the aim is to maintain high recognition performance when challenged by significant variations in both the tracker reference frame and the sensor reference frame.
Seven hand palm motion gestures were designed. These gestures are explained below. A figure (gestures.svg) of these gestures is also included.
These hand palm motion gestures are easy to perform, which ensures accessibility for users. Additionally, the gestures were carefully designed such that distinguishing the gestures does not rely on specific coordinate reference frames or directions. That is, the shape of the motion (rectilinear or circular translation, pure rotation, pure screw motion, etc.) contains sufficient information for this distinction.
To reduce transition effects between gestures that are performed successively, all gestures, except for Go Right, were designed to include the motions as explained above, followed by their reverse motions. Hence, after each gesture, the hand returned to the same pose it started from.
The hand palm motions were recorded using an HTC Vive motion capture system, where the user's hand motion was captured by holding an HTC Vive tracker. The HTC Vive system recorded the orientation and position of the tracker with an accuracy of a few degrees and a few millimeters, respectively. The orientation and position trajectories of the tracker were retained as quaternion coordinates and 3D position coordinates sampled at a frequency of 50 Hz. For each of the seven gestures, five trials were recorded, resulting in a total of 7x5=35 recordings.
To introduce the challenge of dealing with contextual variations when recognizing motions, the HPM dataset was augmented to 420 trials by artificially transforming and perturbing the recorded trajectory data. Specifically, twelve different contexts were designed:
To prevent the data samples from the contexts Original 2 and First Half from exactly matching those from the context Original 1, small perturbations were introduced by adding white noise with standard deviations of 1 mm and 1° to the position and orientation trajectories, respectively. For consistency, this noise perturbation was applied to every trial of each context.
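A minimal sketch of this type of perturbation is given below. It is an illustration, not the authors' implementation, and it assumes SciPy's (x, y, z, w) quaternion convention, which may differ from the convention used in the trial files.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Add white noise with std 1 mm to positions and compose each orientation with
# a small random rotation whose angle has std 1 degree (random axis).
rng = np.random.default_rng(0)

def perturb(positions_m, quats_xyzw, pos_std_m=0.001, ang_std_deg=1.0):
    noisy_pos = positions_m + rng.normal(0.0, pos_std_m, positions_m.shape)

    n = len(quats_xyzw)
    axes = rng.normal(size=(n, 3))
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)          # random rotation axes
    angles = np.deg2rad(rng.normal(0.0, ang_std_deg, n))         # ~1 deg std angles
    noise_rot = R.from_rotvec(axes * angles[:, None])
    noisy_quat = (noise_rot * R.from_quat(quats_xyzw)).as_quat() # SciPy uses (x, y, z, w)
    return noisy_pos, noisy_quat
```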
Within this dataset, every trial_x.csv file is a Comma-Separated Values (CSV) file. The trailing number x refers to the order in which the trials were performed. The file trial_x.csv has the following columns:
The design of these hand palm motion gestures and the development of this dataset are among the contributions of the work in [link], which was submitted to the 2025 IEEE Conference on Automation Science and Engineering (CASE). If you use this HPM dataset, please cite it as follows:
@misc{verduyn2025,
title={Enhancing Hand Palm Motion Gesture Recognition by Eliminating Reference Frame Bias via Frame-Invariant Similarity Measures},
author={Arno Verduyn and Maxim Vochten and Joris De Schutter},
year={2025},
eprint={2503.11352},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2503.11352},
}
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Presentation
UMONS-TAICHI is a large 3D motion capture dataset of Taijiquan martial art gestures (n = 2200 samples) that includes 13 classes (relative to Taijiquan techniques) executed by 12 participants of various skill levels. Participants' skill levels were rated by three experts on a [0-10] scale. The dataset was captured using two motion capture systems simultaneously: 1) Qualisys, a sophisticated motion capture system of 11 Oqus cameras that tracked 68 retroreflective markers at 179 Hz, and 2) Microsoft Kinect V2, a low-cost markerless sensor that tracked 25 locations of a person's skeleton at 30 Hz. Data from both systems were synchronized manually. Qualisys data were manually corrected and then processed to fill in any missing data. Data were also manually annotated for segmentation. Both segmented and unsegmented data are provided in this database. The data were initially recorded for gesture recognition and skill evaluation, but they are also suited for research on synthesis, segmentation, multi-sensor data comparison and fusion, sports science, or more general research on human science or motion capture. A preliminary analysis of part of the dataset was conducted by Tits et al. (2017) to extract morphology-independent motion features for gesture skill evaluation, presented in: “Morphology Independent Feature Engineering in Motion Capture Database for Gesture Evaluation” (https://doi.org/10.1145/3077981.3078037).
Processing
Qualisys
Qualisys data were processed manually with Qualisys Track Manager.
Missing data (occluded markers) were then recovered with an automatic recovery method: MocapRecovery.
Data were annotated for gesture segmentation, using the MotionMachine framework (C++ openFrameworks addon). The code for annotation can be found here. Annotations were saved as ".lab" files (see Download section).
Kinect
The Kinect data were recorded with Kinect Studio. Skeleton data were then extracted with the Kinect SDK and saved into “.txt” files, with one line per captured frame. Each line contains one integer number (ms), relative to the moment when the frame was captured, followed by 3 x 25 float numbers corresponding to the 3-dimensional locations of the 25 body joints.
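For illustration, a simple parser for this line format might look as follows (the parser provided in the GitHub repository referenced below should be preferred):

```python
import numpy as np

# Illustrative parser for the ".txt" skeleton files described above.
# Each line holds one integer timestamp in ms followed by 3 x 25 floats,
# i.e. the (x, y, z) locations of the 25 body joints.
def load_kinect_txt(path):
    timestamps, frames = [], []
    with open(path) as f:
        for line in f:
            values = line.split()
            if len(values) != 1 + 3 * 25:
                continue                      # skip malformed lines
            timestamps.append(int(float(values[0])))
            frames.append(np.array(values[1:], dtype=float).reshape(25, 3))
    return np.array(timestamps), np.stack(frames)
```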
For more information please visit https://github.com/numediart/UMONS-TAICHI
PS: All files can be used with the MotionMachine framework. Please use the parser provided in this GitHub repository for the Kinect (.txt) data.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Otolith volume and estimated otolith mass in the two specimens subjected to tomography at ID17.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview of sound pressure levels (SPLs) measured at the centre of the tank, and the differences in SPLs and particle acceleration levels (PALs) between the in-phase (0°) and out-of-phase (180°) conditions.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Motion duration T, translation x, maximum velocity vmax, and maximum acceleration amax, for experiments I and II.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains two experimentally measured X-ray scans, one reference scan and one scan in which the object translates while rotating. The reference scan consists of 3600 projection images, with one flat field and dark field image. The roto-translational scan consists of 360 projections, also with a flat field and dark field image. The acquisition files containing all relevant specifications of the scanner and the acquisition settings are also supplied.
The code for reconstruction is available at GitHub.
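For reference, projections from such scans are typically flat- and dark-field corrected before reconstruction; a generic sketch is shown below (the preprocessing in the linked reconstruction code may differ):

```python
import numpy as np

# Standard flat-/dark-field normalization of a projection image.
# proj, flat and dark are 2D arrays of equal shape.
def normalize_projection(proj, flat, dark, eps=1e-6):
    transmission = (proj - dark) / np.maximum(flat - dark, eps)
    return -np.log(np.clip(transmission, eps, None))   # line integrals (Beer-Lambert)
```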
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Localization Dataset README
This dataset contains data for BLE and UWB calibration, collected with the aid of UWB and an OptiTrack motion capture system, respectively. There are two sets of data, each covering two scenarios: walking and trolley.
The two groups of data are laid out identically, as indicated below.
Bluetooth Low Energy / Ultra-Wideband
| Session ID | 8 x AoA | 8 x RSSI | BLE x | BLE y | UWB x | UWB y |
Where:
Number of Samples (BLE/UWB)
The dataset contains the following numbers of samples:
Ultra-Wideband / OptiTrack Motion Capture
| Session ID | 4 x CIR | 4 x PSA | Distance | UWB x | UWB y | OPT x | OPT y |
Where:
Number of Samples (UWB/OPT)
Total Number of Samples: 15751
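For orientation, the two row layouts above imply the following column structure. The names below are hypothetical placeholders, since the actual headers and on-disk file format are not specified here.

```python
# Hypothetical column names spelling out the row layouts described above.
ble_uwb_columns = (
    ["session_id"]
    + [f"aoa_{i}" for i in range(1, 9)]      # 8 x AoA
    + [f"rssi_{i}" for i in range(1, 9)]     # 8 x RSSI
    + ["ble_x", "ble_y", "uwb_x", "uwb_y"]
)

uwb_opt_columns = (
    ["session_id"]
    + [f"cir_{i}" for i in range(1, 5)]      # 4 x CIR
    + [f"psa_{i}" for i in range(1, 5)]      # 4 x PSA
    + ["distance", "uwb_x", "uwb_y", "opt_x", "opt_y"]
)
```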
[1]: A Khan, T Farnham, R Kou, U Raza, T Premalal, A Stanoev, W Thompson, "Standing on the Shoulders of Giants: AI-driven Calibration of Localisation Technologies", IEEE Global Communications Conference (GLOBECOM) 2019
These data were generated to evaluate the accuracy of DeepLabCut (DLC), a deep learning marker-less motion capture approach, by comparing it to a 3D x-ray video radiography system that tracks markers placed under the skin (XROMM). We recorded behavioral data simultaneously with XROMM and RGB video as marmosets foraged and reconstructed three-dimensional kinematics in a common coordinate system. We used XMALab to track 11 XROMM markers, and we used the toolkit Anipose to filter and triangulate DLC trajectories of 11 corresponding markers on the forelimb and torso. We performed a parameter sweep of relevant Anipose and post-processing parameters to characterize their effect on tracking quality. We compared the median error of DLC+Anipose to human labeling performance and placed this error in the context of the animal's range of motion.
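A minimal sketch of the kind of error metric described here (median Euclidean distance between time-aligned DLC+Anipose and XROMM trajectories in the common coordinate system); this is an illustration, not the authors' analysis code:

```python
import numpy as np

# dlc_xyz, xromm_xyz: arrays of shape (frames, markers, 3), already expressed
# in the same coordinate system and time-aligned; NaNs mark missing data.
def median_error(dlc_xyz, xromm_xyz):
    err = np.linalg.norm(dlc_xyz - xromm_xyz, axis=-1)   # per-frame, per-marker distance
    return np.nanmedian(err)
```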
https://dataintelo.com/privacy-and-policyhttps://dataintelo.com/privacy-and-policy
The global motion capture jacket market is poised for significant growth, with the market size projected to increase from $XX billion in 2023 to $XX billion by 2032, at a compound annual growth rate (CAGR) of X%. The rising demand for immersive user experiences in various entertainment and medical applications is a major growth driver for this market. The integration of advanced sensor technologies and the increasing application of motion capture jackets in diverse fields such as sports, military, and biomedical research further catalyze this market expansion.
The growth of the motion capture jacket market is largely driven by advancements in technology, particularly in sensor accuracy and wireless communication. Over the past few years, motion capture technology has evolved significantly, allowing for more precise and reliable data capture. This improvement in technology enables wider applications and drives adoption across various industries, such as film, gaming, and medical research. The continuous enhancement of motion sensors and the reduction in their costs are expected to fuel the market growth further.
Another crucial growth factor is the increasing demand for realistic animations and special effects in the entertainment sector. Motion capture jackets are extensively used in the production of films, TV shows, and video games to create lifelike animations and effects. As consumers demand more immersive and high-quality content, the adoption of motion capture technology in the entertainment industry is expected to soar. Additionally, the rising popularity of virtual reality (VR) and augmented reality (AR) experiences is likely to boost the demand for motion capture jackets, as they are essential for creating interactive and engaging VR/AR content.
The healthcare sector is also contributing to the growth of the motion capture jacket market. Motion capture technology is increasingly being used in medical applications such as biomechanical research, physical rehabilitation, and sports medicine. By analyzing detailed movement data, healthcare professionals can develop more effective treatment plans and improve patient outcomes. The growing awareness of the benefits of motion analysis and the increasing focus on personalized medicine are expected to drive the adoption of motion capture jackets in the healthcare sector.
Optical Motion Capture Gloves are emerging as a significant innovation in the realm of motion capture technology. These gloves utilize a network of sensors and cameras to accurately capture hand and finger movements, providing an unparalleled level of detail and precision. This technology is particularly beneficial in fields such as animation and virtual reality, where the subtleties of hand gestures can greatly enhance the realism and interactivity of digital environments. The integration of optical motion capture gloves into existing systems allows for more comprehensive motion tracking, offering new possibilities for creative expression and user engagement. As the demand for high-fidelity motion data continues to grow, optical motion capture gloves are expected to play a crucial role in advancing the capabilities of motion capture solutions across various industries.
Regionally, North America and Europe are the largest markets for motion capture jackets, driven by the presence of major entertainment companies and advanced healthcare infrastructure. However, the Asia Pacific region is expected to witness the highest growth during the forecast period, owing to the rapid adoption of new technologies and the growing entertainment industry. The increasing investments in research and development, along with the rising disposable incomes in emerging economies like India and China, are likely to drive the market expansion in the Asia Pacific region.
Inertial motion capture jackets are witnessing growing popularity due to their ease of use and portability. These jackets use accelerometers and gyroscopes to track movements, providing accurate data without the need for external cameras or sensors. The increasing demand for mobile and flexible motion capture solutions in various applications, such as gaming and sports training, is driving the growth of this segment. Inertial motion capture jackets are also preferred for outdoor and large-area applications, where setting up optical systems might be impractical.
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Please check the README file for more information about the dataset.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
SingingHead Dataset
Overview
The SingingHead dataset is a large-scale 4D dataset for singing head animation. It contains more than 27 hours of synchronized singing video, 3D facial motion, singing audio, and background music collected from 76 subjects. The video is captured at 30 fps and cropped to a resolution of 1024×1024. The 3D facial motion is represented by 59-dimensional FLAME parameters (50 expression + 3 global pose + 3 neck pose + 3 jaw pose). All the data… See the full description on the dataset page: https://huggingface.co/datasets/Human-X/SingingHead.
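As a small illustration of the 59-dimensional motion representation, the split below assumes the components are stored in the order listed above; this ordering is an assumption, so consult the dataset page for the authoritative layout.

```python
import numpy as np

# Split a 59-dimensional FLAME motion frame into its components,
# assuming the order: 50 expression, 3 global pose, 3 neck pose, 3 jaw pose.
def split_flame(frame59):
    frame59 = np.asarray(frame59)
    expression = frame59[:50]
    global_pose, neck_pose, jaw_pose = frame59[50:53], frame59[53:56], frame59[56:59]
    return expression, global_pose, neck_pose, jaw_pose
```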
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains time series data of human joint angles collected during various activities such as walking, jumping, squatting, and leg exercises, and processed with MoJoXlab. It covers basic activities (walking, sitting and relaxing, standing, lying down in a supine position, jumping, and squatting) as well as seated active and assisted knee extension/flexion and heel slide exercises. Data were collected from 15 participants using two sensor systems, Xsens (https://www.xsens.com/) and NGIMU (https://x-io.co.uk/ngimu/), at a sampling frequency of 50 Hz and exported as CSV files. The joint angles were calculated using the MoJoXlab software, which uses quaternion values to estimate the orientation of the sensors. The dataset includes quaternion values for orientation, specifically for the left thigh (LT), right thigh (RT), left shank (LS), right shank (RS), left foot (LF), right foot (RF), and pelvis; the sensor positions follow the Xsens lower-limb protocol recommendations. The column headers indicate the orientation components (i.e., W, X, Y, Z and q0, q1, q2, q3), and P01_LT denotes participant 1 data for the left thigh sensor position. The dataset is useful for researchers and practitioners interested in studying human movement, comparing and validating different sensor systems, and developing and testing algorithms for joint angle estimation. The data can be downloaded and used for non-commercial research purposes with proper attribution to the authors and the data source.
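As an illustration only (this is not the MoJoXlab algorithm), a knee angle can be approximated as the magnitude of the relative rotation between thigh and shank orientations. The column names and file name below are hypothetical spellings based on the header description above, and the dataset's (W, X, Y, Z) components are reordered to SciPy's (x, y, z, w) convention.

```python
import numpy as np
import pandas as pd
from scipy.spatial.transform import Rotation as R

# Hypothetical file and column names; adapt them to the actual CSV headers.
df = pd.read_csv("P01_walking.csv")

# Reorder from (W, X, Y, Z) to SciPy's (x, y, z, w) convention.
q_thigh = df[["P01_LT_X", "P01_LT_Y", "P01_LT_Z", "P01_LT_W"]].to_numpy()
q_shank = df[["P01_LS_X", "P01_LS_Y", "P01_LS_Z", "P01_LS_W"]].to_numpy()

# Relative rotation between left thigh and left shank, then its angle in degrees.
relative = R.from_quat(q_thigh).inv() * R.from_quat(q_shank)
knee_angle_deg = np.degrees(np.linalg.norm(relative.as_rotvec(), axis=1))
```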
https://whoisdatacenter.com/terms-of-use/https://whoisdatacenter.com/terms-of-use/
Explore the historical Whois records related to motionx-gps.net (Domain). Get insights into ownership history and changes over time.