90 datasets found
  1. Motion-X Dataset

    • paperswithcode.com
    Updated May 28, 2025
    Cite
    Jing Lin; Ailing Zeng; Shunlin Lu; Yuanhao Cai; Ruimao Zhang; Haoqian Wang; Lei Zhang (2023). Motion-X Dataset [Dataset]. https://paperswithcode.com/dataset/motion-x
    Explore at:
    Dataset updated
    May 28, 2025
    Authors
    Jing Lin; Ailing Zeng; Shunlin Lu; Yuanhao Cai; Ruimao Zhang; Haoqian Wang; Lei Zhang
    Description

    Motion-X is a large-scale 3D expressive whole-body motion dataset comprising 15.6M precise 3D whole-body pose annotations (i.e., SMPL-X) covering 81.1K motion sequences from a wide range of scenes, along with corresponding semantic labels and pose descriptions.

  2. Motion-X++ Dataset

    • paperswithcode.com
    Updated Jan 8, 2025
    Cite
    Yuhong Zhang; Jing Lin; Ailing Zeng; Guanlin Wu; Shunlin Lu; Yurong Fu; Yuanhao Cai; Ruimao Zhang; Haoqian Wang; Lei Zhang (2025). Motion-X++ Dataset [Dataset]. https://paperswithcode.com/dataset/motion-x-1
    Explore at:
    Dataset updated
    Jan 8, 2025
    Authors
    Yuhong Zhang; Jing Lin; Ailing Zeng; Guanlin Wu; Shunlin Lu; Yurong Fu; Yuanhao Cai; Ruimao Zhang; Haoqian Wang; Lei Zhang
    Description

    In this paper, we introduce Motion-X++, a large-scale multimodal 3D expressive whole-body human motion dataset. Existing motion datasets predominantly capture body-only poses, lack facial expressions, hand gestures, and fine-grained pose descriptions, and are typically limited to lab settings with manually labeled text descriptions, which restricts their scalability. To address this issue, we develop a scalable annotation pipeline that automatically captures 3D whole-body human motion and comprehensive textual labels from RGB videos, and we use it to build the Motion-X dataset comprising 81.1K text-motion pairs. Furthermore, we extend Motion-X into Motion-X++ by improving the annotation pipeline, introducing more data modalities, and scaling up the data quantities. Motion-X++ provides 19.5M 3D whole-body pose annotations covering 120.5K motion sequences from a wide range of scenes, 80.8K RGB videos, 45.3K audios, 19.5M frame-level whole-body pose descriptions, and 120.5K sequence-level semantic labels. Comprehensive experiments validate the accuracy of our annotation pipeline and highlight Motion-X++'s significant benefits for generating expressive, precise, and natural motion with paired multimodal labels, supporting several downstream tasks including text-driven whole-body motion generation, audio-driven motion generation, 3D whole-body human mesh recovery, and 2D whole-body keypoint estimation.

  3. IMU-Based Motion Capture Data for Various Walking Tasks

    • figshare.com
    application/x-rar
    Updated Jun 24, 2024
    Cite
    Akram Shojaei; Arash Abbasi Larki; Mehdi Delrobaei; Hanieh Moradi; Yas Vaseghi (2024). IMU-Based Motion Capture Data for Various Walking Tasks [Dataset]. http://doi.org/10.6084/m9.figshare.26090200.v1
    Explore at:
    Available download formats: application/x-rar
    Dataset updated
    Jun 24, 2024
    Dataset provided by
    figshare
    Authors
    Akram Shojaei; Arash Abbasi Larki; Mehdi Delrobaei; Hanieh Moradi; Yas Vaseghi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains motion capture data collected from 11 healthy subjects performing various walking tasks using IMU-based sensors. Each subject performed 8 different tasks under the following conditions:

    1. Normal walk
    2. Fast walk
    3. Normal walk while holding a 1 kg weight in the dominant hand
    4. Fast walk while holding a 1 kg weight in the dominant hand
    5. Normal walk with a knee brace on one leg
    6. Fast walk with a knee brace on one leg
    7. Normal walk with a knee brace while holding a 1 kg weight (a combination of Tasks 3 and 5)
    8. Fast walk with a knee brace while holding a 1 kg weight (a combination of Tasks 4 and 6)

    Data Collection

    The data was collected using a commercial IMU-based motion capture system, with 10 modules worn on the following body parts: left foot, right foot, left shank, right shank, left thigh, right thigh, left arm, right arm, trunk, and pelvis. Each module recorded accelerometer, gyroscope, and magnetometer data along the X, Y, and Z axes.

    Sampling Rate

    - The sampling interval is 4 ms for all subjects except sub_01 and sub_03, where it is 6 ms.
    - Certain rows of the files show irregularities in the recorded time. This occurs when the time value reaches 65,535 or a multiple of it (e.g., 131,070 and 196,605). The problem is associated with the way time is displayed and does not affect the sample rate.

    Data Structure

    The dataset is organized into one folder per subject: sub_01 through sub_11. (Note: Task 2 is missing from the sub_10 folder.)

    Task folders within each subject's folder:

    - 1_walking normal
    - 2_walking fast
    - 3_weight normal
    - 4_weight fast
    - 5_brace normal
    - 6_brace fast
    - 7_brace weight normal
    - 8_brace weight fast

    Each trial folder contains four CSV files, named according to the trial condition. For example, for the first trial, the files are:

    - walking normal_Raw.csv
    - walking normal_Processed.csv
    - walking normal_Euler.csv
    - walking normal_JointsKinematics.csv

    CSV File Descriptions

    1. Raw: raw sensor data. Columns: Time (ms); accelerometer data (X, Y, Z); gyroscope data (X, Y, Z); magnetometer data (X, Y, Z).
    2. Processed: preprocessed data. Columns: Time (ms); quaternion components (Q0, Q1, Q2, Q3); acceleration in the IMU coordinate system (X, Y, Z); linear acceleration without gravity (X, Y, Z); acceleration in the global coordinate system (X, Y, Z).
    3. Euler: Euler angles. Columns: Time (ms); roll, pitch, and yaw angles.
    4. Joints Kinematics: joint angle data. Columns: Time (ms); abduction-adduction angle; internal-external rotation angle; flexion-extension angle.

    Column Labels

    - Raw data file: Time_LeftFoot, AccX_LeftFoot, AccY_LeftFoot, AccZ_LeftFoot, GyroX_LeftFoot, GyroY_LeftFoot, GyroZ_LeftFoot, MagX_LeftFoot, MagY_LeftFoot, MagZ_LeftFoot, ... (similar pattern for RightFoot, LeftShank, RightShank, LeftThigh, RightThigh, LeftHumerus, RightHumerus, Pelvic, Trunk)
    - Processed data file: Time_LeftFoot, Q0_LeftFoot, Q1_LeftFoot, Q2_LeftFoot, Q3_LeftFoot, Acc_X_LeftFoot, Acc_Y_LeftFoot, Acc_Z_LeftFoot, Acc_linX_LeftFoot, Acc_linY_LeftFoot, Acc_linZ_LeftFoot, Acc_GlinX_LeftFoot, Acc_GlinY_LeftFoot, Acc_GlinZ_LeftFoot, ... (same body-segment pattern)
    - Euler data file: Time_LeftFoot, Roll_LeftFoot, Pitch_LeftFoot, Yaw_LeftFoot, ... (same body-segment pattern)
    - Joints kinematics data file: Time_LeftAnkle, Abduction-Adduction_LeftAnkle, Internal-External Rotat_LeftAnkle, Flexion-Extension_LeftAnkle, ... (similar pattern for RightAnkle, LeftKnee, RightKnee, LeftHip, RightHip, LeftShoulder, RightShoulder, Pelvic, Trunk2Ground)

    Additional Notes

    - This dataset can be used for research in biomechanics, rehabilitation, and human motion analysis.
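    As an illustration, a minimal Python/pandas sketch for loading one trial's raw file (the path is hypothetical; file and column names follow the layout above):

        import pandas as pd

        # Hypothetical path; folder and file names follow the layout described above.
        path = "sub_02/1_walking normal/walking normal_Raw.csv"
        raw = pd.read_csv(path)

        # Columns follow the pattern <Signal><Axis>_<Segment>, e.g. AccX_LeftFoot.
        segments = ["LeftFoot", "RightFoot", "LeftShank", "RightShank",
                    "LeftThigh", "RightThigh", "LeftHumerus", "RightHumerus",
                    "Pelvic", "Trunk"]
        acc_cols = [f"Acc{axis}_{seg}" for seg in segments for axis in "XYZ"]
        accelerometer = raw[acc_cols]

        # 4 ms sampling interval for most subjects (6 ms for sub_01 and sub_03).
        duration_s = len(raw) * 0.004
        print(f"{len(raw)} samples (~{duration_s:.1f} s)", accelerometer.shape)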

  4. UMONS-TAICHI: A multimodal motion capture dataset of expertise in Taijiquan gestures

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated May 20, 2020
    Cite
    Mickaël Tits; Sohaib Laraba; Eric Caulier; Joëlle Tilmanne; Thierry Dutoit (2020). UMONS-TAICHI: A multimodal motion capture dataset of expertise in Taijiquan gestures [Dataset]. http://doi.org/10.1016/j.dib.2018.05.088
    Explore at:
    Available download formats: zip
    Dataset updated
    May 20, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Mickaël Tits; Sohaib Laraba; Eric Caulier; Joëlle Tilmanne; Thierry Dutoit
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Presentation

    UMONS-TAICHI is a large 3D motion capture dataset of Taijiquan martial art gestures (n = 2200 samples) that includes 13 classes (relative to Taijiquan techniques) executed by 12 participants of various skill levels. Participants' skill levels were rated by three experts on a [0-10] scale. The dataset was captured using two motion capture systems simultaneously: 1) Qualisys, a sophisticated motion capture system of 11 Oqus cameras that tracked 68 retroreflective markers at 179 Hz, and 2) Microsoft Kinect V2, a low-cost markerless sensor that tracked 25 locations of a person's skeleton at 30 Hz. Data from both systems were synchronized manually. Qualisys data were manually corrected and then processed to fill in missing data. Data were also manually annotated for segmentation. Both segmented and unsegmented data are provided in this database. The data were initially recorded for gesture recognition and skill evaluation, but they are also suited to research on synthesis, segmentation, multi-sensor data comparison and fusion, and sports science, as well as more general research on human motion or motion capture. A preliminary analysis was conducted by Tits et al. (2017) on part of the dataset to extract morphology-independent motion features for gesture skill evaluation, presented in: "Morphology Independent Feature Engineering in Motion Capture Database for Gesture Evaluation" (https://doi.org/10.1145/3077981.3078037).

    Processing

    Qualisys

    Qualisys data were processed manually with Qualisys Track Manager.

    Missing data (occluded markers) were then recovered with an automatic recovery method: MocapRecovery.

    Data were annotated for gesture segmentation, using the MotionMachine framework (C++ openFrameworks addon). The annotation code can be found in the project's GitHub repository (linked below). Annotations were saved as ".lab" files (see the Download section).

    Kinect

    The Kinect data were recorded with Kinect Studio. Skeleton data were then extracted with the Kinect SDK and saved into ".txt" files containing one line per captured frame. Each line contains one integer (the frame's capture time in ms) followed by 3 x 25 floats corresponding to the 3-dimensional locations of the 25 body joints.

    For more information please visit https://github.com/numediart/UMONS-TAICHI

    PS: All files can be used with the MotionMachine framework. Please use the parser provided in this GitHub repository for Kinect (.txt) data.
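    As a quick-look alternative to the MotionMachine parser, a minimal NumPy sketch for the ".txt" layout described above (the file name is hypothetical):

        import numpy as np

        def load_kinect_txt(path):
            """Parse a UMONS-TAICHI Kinect file: each line holds a timestamp (ms)
            followed by 3 x 25 floats (x, y, z for each of the 25 body joints)."""
            times, frames = [], []
            with open(path) as f:
                for line in f:
                    values = line.split()
                    if len(values) != 1 + 3 * 25:
                        continue  # skip malformed lines, if any
                    times.append(int(values[0]))
                    frames.append(np.array(values[1:], dtype=float).reshape(25, 3))
            return np.array(times), np.stack(frames)

        # Hypothetical file name; actual names follow the dataset's conventions.
        t_ms, joints = load_kinect_txt("kinect_sample.txt")
        print(joints.shape)  # (n_frames, 25, 3)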

  5. ALMI-X

    • huggingface.co
    Updated Apr 22, 2025
    Cite
    Embodied Team at TeleAI (2025). ALMI-X [Dataset]. https://huggingface.co/datasets/TeleEmbodied/ALMI-X
    Explore at:
    Dataset updated
    Apr 22, 2025
    Dataset authored and provided by
    Embodied Team at TeleAI
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    ALMI-X

      Overview

    We release ALMI-X, a large-scale whole-body motion control dataset featuring high-quality episodic trajectories from MuJoCo simulations that are deployable on real robots, based on our humanoid control policy, ALMI.

      Dataset Instruction

    We collected the ALMI-X dataset in MuJoCo simulation by running the trained ALMI policy. In this simulation, we combine a diverse range of upper-body motions with omnidirectional lower-body… See the full description on the dataset page: https://huggingface.co/datasets/TeleEmbodied/ALMI-X.

  6. BME-X: Brain MRI enhancement foundation model for motion correction, super resolution, denoising, harmonization, and downstream tasks

    • springernature.figshare.com
    xlsx
    Updated Dec 6, 2024
    Cite
    Yue Sun; Limei Wang; Gang Li; Weili Lin; Li Wang (2024). BME-X: Brain MRI enhancement foundation model for motion correction, super resolution, denoising, harmonization, and downstream tasks [Dataset]. http://doi.org/10.6084/m9.figshare.27221046.v1
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Dec 6, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Yue Sun; Limei Wang; Gang Li; Weili Lin; Li Wang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Image source data for the manuscript entitled 'BME-X: Brain MRI enhancement foundation model for motion correction, super resolution, denoising, harmonization, and downstream tasks'.

  7. Data set for "Columnar clusters in the human motion complex reflect consciously perceived motion axis"

    • zenodo.org
    zip
    Updated Jan 24, 2020
    Cite
    Marian Schneider; Valentin Kemper; Thomas Emmerling; Federico De Martino; Rainer Goebel (2020). Data set for "Columnar clusters in the human motion complex reflect consciously perceived motion axis" [Dataset]. http://doi.org/10.5281/zenodo.1489228
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Marian Schneider; Valentin Kemper; Thomas Emmerling; Federico De Martino; Rainer Goebel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Accompanying data for manuscript “Columnar clusters in the human motion complex reflect consciously perceived motion axis” written by Marian Schneider, Valentin Kemper, Thomas Emmerling, Federico De Martino, Rainer Goebel, submitted, November 2018.

    Imaging files
    -------------
    * T1w and PDw images, only acquired in session 1
    * 2 runs task-MotLoc, only acquired in session 2
    * 5-6 runs task-ambiguous (called "Experiment 1" in accompanying manuscript, divided across 2 scanning sessions)
    * 5-6 runs task-unambiguous (called "Experiment 2" in accompanying manuscript, divided across 2 scanning sessions)


    Acquisition details
    -------------------
    For visualization of the functional results, we acquired scans with structural information in the first scanning session. At high magnetic fields, MR images exhibit high signal intensity variations that result from heterogeneous RF coil profiles. We therefore acquired both T1w images and PDw images using a magnetization-prepared 3D rapid gradient-echo (3D MPRAGE) sequence (TR: 3100 ms (T1w) or 1440 ms (PDw), voxel size = 0.6 mm isotropic, FOV = 230 x 230 mm2, matrix = 384 x 384, slices = 256, TE = 2.52 ms, FA = 5°). Acquisition time was reduced by using 3× GRAPPA parallel imaging and 6/8 Partial Fourier in phase encoding direction (acquisition time (TA): 8 min 49 s (T1w) and 4 min 6 s (PDw)).

    To determine our region of interest, we acquired two hMT+ localizer runs. We used a 2D gradient echo (GE) echo planar imaging (EPI) sequence (1.6 mm isotropic nominal resolution; TE/TR = 18/2000 ms; in-plane field of view (FoV) 150×150 mm; matrix size 94 x 94; 28 slices; nominal flip angle (FA) = 69°; echo spacing = 0.71 ms; GRAPPA factor = 2, partial Fourier = 7/8; phase encoding direction head - foot; 240 volumes). We ensured that the area of acquisition had bilateral coverage of the posterior inferior temporal sulci, where we expected the hMT+ areas. Before acquisition of the first functional run, we collected 10 volumes for distortion correction - 5 volumes with the settings specified here and 5 more volumes with identical settings but opposite phase encoding (foot - head), here called "phase1" and "phase2".

    For the sub-millimetre measurements (Experiments 1: here called "task-ambiguous" and Experiments 2: here called "task-unambiguous"), we used a 2D GE EPI sequence (TE/TR = 25.6/2000 ms; in-plane FoV 148×148 mm; matrix size 186 x 186; slices = 28; nominal FA = 69°; echo spacing = 1.05 ms; GRAPPA factor = 3, partial Fourier = 6/8; phase encoding direction head - foot; 300 volumes), yielding a nominal resolution of 0.8 mm isotropic. Placement of the small functional slab was guided by online analysis of the hMT+ localizer data recorded immediately at the beginning of the first session. This allowed us to ensure bilateral coverage of area hMT+ for every subject. In the second scanning session, the slab was placed using Siemens auto-align functionality and manual corrections. Before acquisition of the first functional run, we collected 10 volumes for distortion correction (5 volumes with opposite phase encoding: foot - head). During acquisition, runs for the ambiguous and unambiguous motion experiments were interleaved.

  8. Retargeted_AMASS_for_FourierN1

    • huggingface.co
    Updated Apr 16, 2025
    Cite
    Kun Zhao (2025). Retargeted_AMASS_for_FourierN1 [Dataset]. https://huggingface.co/datasets/fleaven/Retargeted_AMASS_for_FourierN1
    Explore at:
    Dataset updated
    Apr 16, 2025
    Authors
    Kun Zhao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Retargeted AMASS for Robotics

      Project Overview
    

    This project aims to retarget motion data from the AMASS dataset to various robot models and open-source the retargeted data to facilitate research and applications in robotics and human-robot interaction. AMASS (Archive of Motion Capture as Surface Shapes) is a high-quality human motion capture dataset, and the SMPL-X model is a powerful tool for generating realistic human motion data. By adapting the motion data from AMASS… See the full description on the dataset page: https://huggingface.co/datasets/fleaven/Retargeted_AMASS_for_FourierN1.

  9. Landfalling Tropical Cyclones (all_LTC) shear x motion

    • zenodo.org
    • gdex.ucar.edu
    • +1more
    csv, xml
    Updated May 27, 2025
    Cite
    David Ahijevych; Stanley B. Trier (2025). Landfalling Tropical Cyclones (all_LTC) shear x motion [Dataset]. http://doi.org/10.5065/88nz-k207
    Explore at:
    Available download formats: xml, csv
    Dataset updated
    May 27, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    David Ahijevych; Stanley B. Trier
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jun 5, 1995 - Aug 31, 2021
    Description

    850-200 hPa environmental shear and motion vectors of 72 landfalling tropical cyclones presented in Trier et al. 2023.

  10. Replication Data for Estimating Lower Limb Kinematics using a Reduced Wearable Sensor Count

    • dataverse.harvard.edu
    Updated May 25, 2020
    Cite
    Luke Sy (2020). Replication Data for Estimating Lower Limb Kinematics using a Reduced Wearable Sensor Count. [Dataset]. http://doi.org/10.7910/DVN/9QDD5J
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    May 25, 2020
    Dataset provided by
    Harvard Dataverse
    Authors
    Luke Sy
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    For details on using the data and on the available helper codes, refer to https://gait-tech.github.io/gaittoolbox/.

    The experiment had nine healthy subjects (7 men and 2 women; weight 63.0 ± 6.8 kg, height 1.70 ± 0.06 m, age 24.6 ± 3.9 years). Our system was compared against two benchmark systems, the Vicon and Xsens systems. The Vicon Vantage system consisted of eight cameras covering an approximately 4 x 4 m^2 capture area with millimetre accuracy; Vicon data were captured at 100 Hz and processed using Nexus 2.7 software. The Xsens Awinda system consisted of seven MTx units (IMUs); Xsens data were captured at 100 Hz using MT Manager 4.8 and processed using MVN Studio 4.4 software. The Vicon and Xsens recordings were synchronized by having the Xsens Awinda station send a trigger pulse to the Vicon system at the start and stop events of each recording. Each subject had reflective Vicon markers placed according to the Helen-Hayes 16-marker set, seven MTx units attached to the pelvis, thighs, shanks, and feet according to standard Xsens sensor placement, and two MTx units attached near the ankles. Each subject performed the movements listed in the table below twice (i.e., two trials), standing still for ten seconds before and after each trial. The experiment was approved by the Human Research Ethics Board of the University of New South Wales (UNSW), approval number HC180413.

    TABLE I: TYPES OF MOVEMENTS DONE IN THE VALIDATION EXPERIMENT (approximate durations)
    - Static: stand still (~10 s)
    - Walk: walk straight and back (~30 s)
    - Figure of eight: walk in figures of eight (~60 s)
    - Zig-zag: walk zigzag (~60 s)
    - 5-minute walk: undirected walk, side step, and stand (~300 s)
    - Speedskater: speedskater on the spot (~30 s)
    - Jog: jog straight and return (~30 s)
    - Jumping jacks: jumping jacks on the spot (~30 s)
    - High knee: high-knee jog straight and return (~30 s)
    - TUG: timed up and go (~30 s)

    neura-sparse.zip contains the main data (cut and aligned). neura-sparse-raw.zip contains the full trials.

    Known issues: Subject 10's left and right foot sensors were accidentally swapped. All files have been fixed except for the Xsens BVH output from the MVN software. Xsens MVN 2019.0, in theory, can swap sensors in reconstruction; however, the software crashed whenever this was attempted, and a newer MVN version could not be used due to license limitations.

  11. Motion Capture Jacket Market Report | Global Forecast From 2025 To 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Jan 7, 2025
    Cite
    Dataintelo (2025). Motion Capture Jacket Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/motion-capture-jacket-market
    Explore at:
    Available download formats: pptx, pdf, csv
    Dataset updated
    Jan 7, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Motion Capture Jacket Market Outlook



    The global motion capture jacket market is poised for significant growth, with the market size projected to increase from $XX billion in 2023 to $XX billion by 2032, at a compound annual growth rate (CAGR) of X%. The rising demand for immersive user experiences in various entertainment and medical applications is a major growth driver for this market. The integration of advanced sensor technologies and the increasing application of motion capture jackets in diverse fields such as sports, military, and biomedical research further catalyze this market expansion.



    The growth of the motion capture jacket market is largely driven by advancements in technology, particularly in sensor accuracy and wireless communication. Over the past few years, motion capture technology has evolved significantly, allowing for more precise and reliable data capture. This improvement in technology enables wider applications and drives adoption across various industries, such as film, gaming, and medical research. The continuous enhancement of motion sensors and the reduction in their costs are expected to fuel the market growth further.



    Another crucial growth factor is the increasing demand for realistic animations and special effects in the entertainment sector. Motion capture jackets are extensively used in the production of films, TV shows, and video games to create lifelike animations and effects. As consumers demand more immersive and high-quality content, the adoption of motion capture technology in the entertainment industry is expected to soar. Additionally, the rising popularity of virtual reality (VR) and augmented reality (AR) experiences is likely to boost the demand for motion capture jackets, as they are essential for creating interactive and engaging VR/AR content.



    The healthcare sector is also contributing to the growth of the motion capture jacket market. Motion capture technology is increasingly being used in medical applications such as biomechanical research, physical rehabilitation, and sports medicine. By analyzing detailed movement data, healthcare professionals can develop more effective treatment plans and improve patient outcomes. The growing awareness of the benefits of motion analysis and the increasing focus on personalized medicine are expected to drive the adoption of motion capture jackets in the healthcare sector.



    Optical Motion Capture Gloves are emerging as a significant innovation in the realm of motion capture technology. These gloves utilize a network of sensors and cameras to accurately capture hand and finger movements, providing an unparalleled level of detail and precision. This technology is particularly beneficial in fields such as animation and virtual reality, where the subtleties of hand gestures can greatly enhance the realism and interactivity of digital environments. The integration of optical motion capture gloves into existing systems allows for more comprehensive motion tracking, offering new possibilities for creative expression and user engagement. As the demand for high-fidelity motion data continues to grow, optical motion capture gloves are expected to play a crucial role in advancing the capabilities of motion capture solutions across various industries.



    Regionally, North America and Europe are the largest markets for motion capture jackets, driven by the presence of major entertainment companies and advanced healthcare infrastructure. However, the Asia Pacific region is expected to witness the highest growth during the forecast period, owing to the rapid adoption of new technologies and the growing entertainment industry. The increasing investments in research and development, along with the rising disposable incomes in emerging economies like India and China, are likely to drive the market expansion in the Asia Pacific region.



    Product Type Analysis



    Inertial motion capture jackets are witnessing growing popularity due to their ease of use and portability. These jackets use accelerometers and gyroscopes to track movements, providing accurate data without the need for external cameras or sensors. The increasing demand for mobile and flexible motion capture solutions in various applications, such as gaming and sports training, is driving the growth of this segment. Inertial motion capture jackets are also preferred for outdoor and large-area applications, where setting up optical systems might be impractical.



    Optical motion captu

  12. MMHead

    • huggingface.co
    Updated Apr 4, 2025
    Cite
    human+ (2025). MMHead [Dataset]. https://huggingface.co/datasets/Human-X/MMHead
    Explore at:
    Dataset updated
    Apr 4, 2025
    Dataset authored and provided by
    human+
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    MMHead Dataset

      Overview
    

    The MMHead dataset is a multi-modal 3D facial animation dataset with hierarchical text annotations: (1) abstract action descriptions, (2) abstract emotion descriptions, (3) fine-grained expression descriptions, (4) fine-grained head pose descriptions, and (5) emotion scenarios. The 3D facial motion is represented by 56-dimensional FLAME parameters (50 expression + 3 neck pose + 3 jaw pose). The MMHead dataset contains a total of 35903 facial motions… See the full description on the dataset page: https://huggingface.co/datasets/Human-X/MMHead.
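    As a small illustration of the 56-dimensional FLAME representation described above, a sketch of splitting one motion frame into its components (the parameter ordering is an assumption; variable names are hypothetical):

        import numpy as np

        # One frame of 3D facial motion: 56 FLAME parameters per the description
        # (50 expression + 3 neck pose + 3 jaw pose). Ordering assumed, not confirmed.
        frame = np.zeros(56)

        expression = frame[:50]    # 50 expression coefficients
        neck_pose  = frame[50:53]  # 3 neck pose parameters
        jaw_pose   = frame[53:56]  # 3 jaw pose parameters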

  13. Data from: Dataset: Indoor Localization with Narrow-band, Ultra-Wideband, and Motion Capture Systems

    • data.niaid.nih.gov
    • explore.openaire.eu
    • +1more
    Updated Jan 24, 2020
    Cite
    Khan, Aftab (2020). Dataset: Indoor Localization with Narrow-band, Ultra-Wideband, and Motion Capture Systems [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3452006
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Stanoev, Aleksandar
    Raza, Usman
    Khan, Aftab
    Thompson, William
    Premalal, Thajanee
    Kou, Roget
    Farnham, Tim
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Localization Dataset README

    This dataset provides data for calibrating BLE and UWB localization, using UWB and an OptiTrack motion capture system as the respective reference systems. There are two sets of data, each covering two scenarios: walking and trolley.

    The two groups containing the data are laid out identically as indicated below.

    Bluetooth Low Energy / Ultra-Wideband

    | Session ID | 8 x AoA | 8 x RSSI | BLE x | BLE y | UWB x | UWB y |

    Where :

    • AoA is Angle of Arrival, with two values given for each anchor node.
    • RSSI is the Received Signal Strength Indicator, also with two values given for each anchor node.
    • BLE x and y location estimates [1].
    • UWB x and y location estimates.

    Number of Samples (BLE/UWB)

    The dataset contains the following numbers of samples:

    • Walk - 4896 samples.
    • Trolley - 4856 samples.

    Ultra-Wideband / OptiTrack Motion Capture

    | Session ID | 4 x CIR | 4 x PSA | Distance | UWB x | UWB y | OPT x | OPT y |

    Where:

    • CIR is Channel Impulse Response for each received signal from the Anchors to the target tag.
    • PSA is the Preamble Symbol Accumulation, from each of the 4 Anchors to the target tag.
    • Distances of the tag to each of the 4 anchors.
    • UWB x and y location estimates.
    • OptiTrack location estimates.

    Number of Samples (UWB/OPT)

    • Walk - 2797 samples.
    • Trolley - 3202 samples.

    Total Number of Samples : 15751

    [1]: A Khan, T Farnham, R Kou, U Raza, T Premalal, A Stanoev, W Thompson, "Standing on the Shoulders of Giants: AI-driven Calibration of Localisation Technologies", IEEE Global Communications Conference (GLOBECOM) 2019
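    As an illustration, a minimal pandas sketch for reading the BLE/UWB records into named columns (the CSV layout and file name are assumptions based on the field list above):

        import pandas as pd

        # Hypothetical file name; columns follow the BLE/UWB record layout:
        # | Session ID | 8 x AoA | 8 x RSSI | BLE x | BLE y | UWB x | UWB y |
        cols = (["session_id"]
                + [f"aoa_{i}" for i in range(8)]
                + [f"rssi_{i}" for i in range(8)]
                + ["ble_x", "ble_y", "uwb_x", "uwb_y"])
        df = pd.read_csv("ble_uwb_walk.csv", header=None, names=cols)

        # Mean planar offset between the BLE estimate and the UWB reference.
        err = ((df.ble_x - df.uwb_x) ** 2 + (df.ble_y - df.uwb_y) ** 2) ** 0.5
        print(err.mean())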

  14. Dual Gantry Motion System Report

    • promarketreports.com
    doc, pdf, ppt
    Updated Apr 25, 2025
    Cite
    Pro Market Reports (2025). Dual Gantry Motion System Report [Dataset]. https://www.promarketreports.com/reports/dual-gantry-motion-system-129835
    Explore at:
    Available download formats: ppt, pdf, doc
    Dataset updated
    Apr 25, 2025
    Dataset authored and provided by
    Pro Market Reports
    License

    https://www.promarketreports.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global dual gantry motion system market is experiencing robust growth, driven by increasing automation across diverse industries. Precise market size figures for 2025 are not provided, but given typical CAGRs of 7-10% for similar advanced automation technologies and an assumed 2025 market size of around $2 billion, a conservative estimate of $2.2 billion is justifiable. This projection reflects strong demand across key application areas such as semiconductor and PCB manufacturing, flat panel display production, and the burgeoning photovoltaic industry. Technological advancements, particularly in precision engineering and control systems, are further fueling market expansion. The increasing adoption of high-speed, high-accuracy motion control systems in automated manufacturing processes is a primary growth driver, and rising demand for compact and efficient systems is pushing innovation in system design and miniaturization. The market is segmented by system type (X/Y, X/Y/Z) and application. The X/Y/Z segment is projected to grow faster owing to the increasing complexity of automated processes requiring three-dimensional motion control. Significant regional variations exist: North America and Asia-Pacific are expected to dominate the market due to their established manufacturing bases and robust technological infrastructure, while Europe and other regions are anticipated to show significant growth potential driven by increasing adoption of automation technologies. While high initial investment costs and the need for specialized technical expertise pose challenges, the long-term cost benefits and productivity enhancements of dual gantry systems are expected to outweigh these barriers, supporting continued strong market growth throughout the forecast period. A moderate CAGR of 8% is projected between 2025 and 2033, resulting in substantial market expansion.

  15. Data from: X-ray image reconstruction for continuous acquisitions with a universal motion model

    • zenodo.org
    • repository.uantwerpen.be
    zip
    Updated Nov 18, 2024
    Cite
    Ben Huyge; Jens Renders; Joaquim Sanctorum; Jan De Beenhouwer; Jan Sijbers (2024). X-ray image reconstruction for continuous acquisitions with a universal motion model: Data [Dataset]. http://doi.org/10.5281/zenodo.12918504
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 18, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Ben Huyge; Jens Renders; Joaquim Sanctorum; Jan De Beenhouwer; Jan Sijbers
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains two experimentally measured X-ray scans: one reference scan and one scan in which the object translates while rotating. The reference scan consists of 3600 projection images, with one flat-field and one dark-field image. The roto-translational scan consists of 360 projections, also with a flat-field and a dark-field image. The acquisition files containing all relevant specifications of the scanner and the acquisition settings are also supplied.

    The reconstruction code is available on GitHub.

  16. Motion Sensor Market Report

    • marketresearchforecast.com
    doc, pdf, ppt
    Updated Mar 21, 2025
    Cite
    Market Research Forecast (2025). Motion Sensor Market Report [Dataset]. https://www.marketresearchforecast.com/reports/motion-sensor-market-6292
    Explore at:
    Available download formats: pdf, doc, ppt
    Dataset updated
    Mar 21, 2025
    Dataset authored and provided by
    Market Research Forecast
    License

    https://www.marketresearchforecast.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The motion sensor market was valued at USD 6.04 billion in 2023 and is projected to reach USD 10.28 billion by 2032, exhibiting a CAGR of 7.9% during the forecast period. A motion sensor detects physical movement in a specific area, commonly using technologies such as infrared, ultrasonic, microwave, and tomographic sensors. Infrared sensors detect body heat, ultrasonic sensors use sound waves, microwave sensors emit radio waves, and tomographic sensors create a mesh network for motion detection. Key features include sensitivity adjustment, detection range, and energy efficiency. Motion sensors are widely used in security systems, automatic lighting controls, smart home devices, and industrial automation. They enhance safety, energy conservation, and operational efficiency by triggering actions based on detected motion, such as turning on lights, alerting on security breaches, or automating processes.

    Recent developments include:

    - November 2022: TDK Corporation expanded its SmartAutomotive range of motion sensors, offering both ASIL and non-ASIL versions. The company introduced the InvenSense IAM-20380HT, an automotive monolithic 3-axis motion tracking sensor platform designed for non-safety automotive applications. The 20380HT comprises a 3-axis MEMS gyroscope in a compact 3 x 3 x 0.75 mm (16-pin LGA) package that operates effectively across a wide temperature range, making it suitable for non-safety automotive applications such as navigation and dead reckoning, vehicle tracking, telematics, door control, and vision systems. TDK also provides the DK-20380HT developer kit for this sensor platform.
    - November 2022: ST unveiled the LSM6DSV16X, a leading 6-axis inertial measurement unit (IMU) that incorporates ST's Sensor Fusion Low Power (SFLP) technology, artificial intelligence (AI), and adaptive self-configuration (ASC) to achieve exceptional power optimization.
    - February 2021: Allterco Robotics introduced a new addition to its Shelly product line, the Shelly Motion, an Internet of Things (IoT) motion sensor. The device is expected to be a valuable addition to smart home designs, offering utility to many users.
    - January 2020: Murata, a globally renowned electronic components manufacturer, inaugurated a new facility in Vantaa, Finland. The expansion gave the existing production and product development unit additional space equivalent to one-third of its previous size, for a total area of approximately 16,000 square meters; the investment amounted to EUR 42 million. The MEMS sensors produced by Murata in Vantaa are crucial components in applications such as automotive safety systems, industrial machinery, and healthcare technology such as pacemakers. The company is also at the forefront of developing essential positioning and safety technology for advanced driver-assistance systems (ADAS) and autonomous vehicles.
    - December 2019: Kionix, a subsidiary of the ROHM Group, unveiled its latest accelerometer offerings, the KX132-1211 and KX134-1211. These accelerometers are designed for precise motion sensing with low power consumption, making them well suited to industrial equipment and consumer wearables.

    Key drivers for this market are: the increasing need for robust security solutions. Potential restraints include: the increasing price of MEMS-based sensors due to the lack of alternatives. Notable trends are: increasing demand for consumer electronic devices across the gaming industry.

  17. Global Motion Capture System Market Segment Outlook, Market Assessment, Competition Scenario, Trends and Forecast 2019-2028

    • the-market.us
    csv, pdf
    Updated Jun 7, 2019
    Cite
    (2019). Global Motion Capture System Market Segment Outlook, Market Assessment, Competition Scenario, Trends and Forecast 2019-2028 [Dataset]. https://the-market.us/report/motion-capture-system-market/
    Explore at:
    Available download formats: csv, pdf
    Dataset updated
    Jun 7, 2019
    License

    https://the-market.us/privacy-policy/

    Time period covered
    2016 - 2022
    Area covered
    Global
    Description


    The report on Motion Capture System Market offers in-depth analysis of market trends, drivers, restraints, opportunities etc. Along with qualitative information, this report includes the quantitative analysis of various segments in terms of market share, growth, opportunity analysis, market value, etc. for the forecast years. The global motion capture system market is segmented on the basis of type, application, and geography.

    The Global Motion Capture System market is estimated to be US$ XX.X Mn in 2019 and is projected to increase significantly at a CAGR of x.x% from 2020 to 2028.

  18. ROYTEK&3 X. MOTION TECHNOLOGIES CO|Full export Customs Data Records|tradeindata

    • tradeindata.com
    Updated May 28, 2025
    Cite
    tradeindata (2025). ROYTEK&3 X. MOTION TECHNOLOGIES CO|Full export Customs Data Records|tradeindata [Dataset]. https://www.tradeindata.com/supplier_detail/?id=0d84fe72862c7d77e2ae4ca8ec149e42
    Explore at:
    Dataset updated
    May 28, 2025
    Dataset authored and provided by
    tradeindata
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Customs records are available for ROYTEK&3 X. MOTION TECHNOLOGIES CO. Learn about its importers, supply capabilities, and the countries to which it supplies goods.

  19. Auditory chain reaction: Effects of sound pressure and particle motion on auditory structures in fishes

    • plos.figshare.com
    xlsx
    Updated May 31, 2023
    Cite
    Tanja Schulz-Mirbach; Friedrich Ladich; Alberto Mittone; Margie Olbinado; Alberto Bravin; Isabelle P. Maiditsch; Roland R. Melzer; Petr Krysl; Martin Heß (2023). Auditory chain reaction: Effects of sound pressure and particle motion on auditory structures in fishes [Dataset]. http://doi.org/10.1371/journal.pone.0230578
    Explore at:
    Available download formats: xlsx
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Tanja Schulz-Mirbach; Friedrich Ladich; Alberto Mittone; Margie Olbinado; Alberto Bravin; Isabelle P. Maiditsch; Roland R. Melzer; Petr Krysl; Martin Heß
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Despite the diversity in fish auditory structures, it remains elusive how otolith morphology and swim bladder-inner ear (= otophysic) connections affect otolith motion and inner ear stimulation. A recent study visualized sound-induced otolith motion, but tank acoustics revealed a complex mixture of sound pressure and particle motion. To separate sound pressure from sound-induced particle motion, we constructed a transparent standing-wave-tube-like tank equipped with an inertial shaker at each end, imaged with X-ray phase contrast. Driving the shakers in phase maximised sound pressure at the tank centre, whereas particle motion was maximised when the shakers were driven out of phase (180°). We studied the effects of two types of otophysic connections, i.e. the Weberian apparatus (Carassius auratus) and anterior swim bladder extensions contacting the inner ears (Etroplus canarensis), on otolith motion when fish were subjected to a 200 Hz stimulus. Saccular otolith motion was more pronounced when the swim bladder walls oscillated under the maximised sound pressure condition. The otolith motion patterns mainly matched the orientation patterns of the ciliary bundles on the sensory epithelia. Our setup enabled characterization of the interplay between the auditory structures and provided the first experimental evidence of how different types of otophysic connections affect otolith motion.

  20. Data from: Diffuse X-ray Scattering from Correlated Motions in a Protein Crystal

    • portal.nersc.gov
    • cxidb.org
    Updated Feb 24, 2020
    Cite
    Steve P. Meisburger (2020). Diffuse X-ray Scattering from Correlated Motions in a Protein Crystal [Dataset]. http://doi.org/10.11577/1601281
    Explore at:
    Dataset updated
    Feb 24, 2020
    Authors
    Steve P. Meisburger
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Please check the README file for more information about the dataset.
