100+ datasets found
  1. Data and trained models for: Human-robot facial co-expression

    • search.dataone.org
    • resodate.org
    • +1more
    Updated Jul 28, 2025
    Cite
    Yuhang Hu; Boyuan Chen; Jiong Lin; Yunzhe Wang; Yingke Wang; Cameron Mehlman; Hod Lipson (2025). Data and trained models for: Human-robot facial co-expression [Dataset]. http://doi.org/10.5061/dryad.gxd2547t7
    Dataset updated
    Jul 28, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Yuhang Hu; Boyuan Chen; Jiong Lin; Yunzhe Wang; Yingke Wang; Cameron Mehlman; Hod Lipson
    Description

    Large language models are enabling rapid progress in robotic verbal communication, but nonverbal communication is not keeping pace. Physical humanoid robots struggle to express and communicate using facial movement, relying primarily on voice. The challenge is twofold: first, actuating an expressively versatile robotic face is mechanically difficult; second, the robot must know what expression to generate so that it appears natural, timely, and genuine. Here we propose that both barriers can be alleviated by training a robot to anticipate future facial expressions and execute them simultaneously with a human. Whereas delayed facial mimicry looks disingenuous, facial co-expression feels more genuine, since it requires correctly inferring the human's emotional state for timely execution. We find that a robot can learn to predict a forthcoming smile about 839 milliseconds before the human smiles and, using a learned inverse kinematic facial self-model, co-express the smile simul...

    During the data collection phase, the robot generated symmetrical facial expressions, which we assumed would cover most situations and reduce the size of the model. We used an Intel RealSense D435i to capture RGB images and cropped them to 480 × 320 pixels. We logged each motor command value together with the corresponding robot image to form a data pair, without any human labeling.

    # Dataset for Paper "Human-Robot Facial Co-expression"

    Overview

    This dataset accompanies the research on human-robot facial co-expression, aiming to enhance nonverbal interaction by training robots to anticipate and simultaneously execute human facial expressions. Our study proposes a method where robots can learn to predict forthcoming human facial expressions and execute them in real time, thereby making the interaction feel more genuine and natural.

    https://doi.org/10.5061/dryad.gxd2547t7

    Description of the data and file structure

    The dataset is organized into several zip files, each containing different components essential for replicating our study's results or for use in related research projects:

    • pred_training_data.zip: Contains the data used for training the predictive model. This dataset is crucial for developing models that predict human facial expressions based on input frames.
    • pred_model.zip: Contains the...
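    Since the description pairs each cropped robot frame with its logged motor commands, a per-sample loader might look like the following sketch. The file names (a "frames/" folder and a "motor_commands.csv" log) are hypothetical; consult the README inside pred_training_data.zip for the actual layout.

```python
# Minimal sketch: pairing a cropped robot frame with its logged motor commands.
# Folder and file names are assumptions for illustration only.
import csv
from pathlib import Path

import numpy as np
from PIL import Image

DATA_DIR = Path("pred_training_data")           # assumed extraction directory
COMMANDS_CSV = DATA_DIR / "motor_commands.csv"  # hypothetical log file name

def load_pairs(data_dir: Path, commands_csv: Path):
    """Yield (image_array, motor_command_vector) training pairs."""
    with commands_csv.open(newline="") as f:
        for row in csv.reader(f):
            frame_name, *commands = row
            image = Image.open(data_dir / "frames" / frame_name)  # 480x320 RGB crop
            yield np.asarray(image), np.array(commands, dtype=float)

if __name__ == "__main__":
    for image, commands in load_pairs(DATA_DIR, COMMANDS_CSV):
        print(image.shape, commands.shape)
        break
```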
  2. Multi-Modal Humanoid Robot Perception Dataset

    • kaggle.com
    zip
    Updated Sep 2, 2025
    Cite
    Ziya (2025). Multi-Modal Humanoid Robot Perception Dataset [Dataset]. https://www.kaggle.com/datasets/ziya07/multi-modal-humanoid-robot-perception-dataset
    Available download formats: zip (7,854,346 bytes)
    Dataset updated
    Sep 2, 2025
    Authors
    Ziya
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This dataset provides multi-modal sensory data collected from humanoid robots operating in dynamic logistics environments, such as warehouses and industrial storage areas. It is designed to support research in adaptive control, obstacle avoidance, path planning, and real-time robot navigation using neural networks and advanced algorithms.

    The dataset includes:

    RGB-D Images: Color and depth frames capturing the robot’s surroundings for spatial perception and object recognition.

    LiDAR Scans: Distance measurements and point clouds representing static and dynamic obstacles.

    IMU and Joint Sensor Data: Accelerometer, gyroscope, force/torque, and joint encoder readings for motion analysis and control feedback.

    Planned and Executed Paths: Optimal A* paths and actual robot trajectories for navigation evaluation.

    Obstacle and Object Annotations: Dynamic and static obstacle positions, object labels, and types for supervised learning and testing.

    It enables researchers to simulate real-world logistics scenarios, improve robot autonomy, and enhance the safety and efficiency of humanoid robot operations.
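    The modality list above implies a per-sample record along these lines; the field names in this sketch are illustrative only, since the zip's internal layout is not shown in this listing.

```python
# Illustrative container for one perception sample; the actual column/file
# names inside the Kaggle zip may differ. Treat every field as a placeholder.
from dataclasses import dataclass
import numpy as np

@dataclass
class PerceptionSample:
    rgb: np.ndarray           # HxWx3 color frame
    depth: np.ndarray         # HxW depth map
    lidar_points: np.ndarray  # Nx3 point cloud of the surroundings
    imu: np.ndarray           # accelerometer + gyroscope readings
    joint_states: np.ndarray  # joint encoder and force/torque readings
    planned_path: np.ndarray  # A* waypoints (x, y)
    executed_path: np.ndarray # actual trajectory (x, y)
    obstacles: list           # annotated obstacle/object labels and positions
```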

  3. Hand Gestures For Human-Robot Interaction

    • kaggle.com
    zip
    Updated Mar 3, 2023
    Cite
    Joel Baptista (2023). Hand Gestures For Human-Robot Interaction [Dataset]. https://www.kaggle.com/datasets/joelbaptista/hand-gestures-for-human-robot-interaction
    Available download formats: zip (837,173,771 bytes)
    Dataset updated
    Mar 3, 2023
    Authors
    Joel Baptista
    Description

    This dataset was specifically developed to aid in the study of human-robot interaction. Recognizing the importance of quality training data for effective machine learning models, we designed a dataset featuring 4 static hand gestures against complex backgrounds. By doing so, we aimed to produce models with a high degree of accuracy that would be effective in real-world applications.

    To ensure the dataset's robustness, we split it into two distinct sections. The first part, known as the "train" dataset, consists of approximately 6,000 images captured by a single user. Meanwhile, the "multi_user_test" dataset was recorded by three additional users and features roughly 4,000 images.

    The four gestures included in the dataset are inspired by the static "A," "L," "F," and "Y" signs of American Sign Language. In total, the dataset contains close to 30,000 images of 100 × 100 pixels, captured at 11 FPS using a Kinect v1.
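    A minimal loading sketch for the two splits described above, assuming one subdirectory per gesture class (A, L, F, Y) and PNG files; the actual Kaggle folder layout may differ.

```python
# Load the "train" and "multi_user_test" splits of 100x100 gesture images.
# Per-class subfolders and the .png extension are assumptions.
from pathlib import Path

import numpy as np
from PIL import Image

def load_split(root: Path):
    """Return (images, labels) for one split."""
    images, labels = [], []
    for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for img_path in class_dir.glob("*.png"):
            images.append(np.asarray(Image.open(img_path)))
            labels.append(class_dir.name)
    return np.stack(images), np.array(labels)

train_X, train_y = load_split(Path("train"))
test_X, test_y = load_split(Path("multi_user_test"))
print(train_X.shape, test_X.shape)
```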

  4. Robot Motion Dataset

    • zenodo.org
    • data.niaid.nih.gov
    Updated Nov 8, 2025
    Cite
    Antonio Di Tecco; Alessandro Genua (2025). Robot Motion Dataset [Dataset]. http://doi.org/10.5281/zenodo.13893059
    Dataset updated
    Nov 8, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Antonio Di Tecco; Alessandro Genua
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Time period covered
    Dec 30, 2024
    Description

    The Robot Motion Dataset contains experimental data from a study of human-robot interaction in which a leader operated a teleoperated follower robot (TFR). Thirty participants controlled the TFR while wearing a virtual reality (VR) headset and using a rudder platform. The dataset includes accelerometer, gyroscope, trajectory, and object-distance sensor data, as well as questionnaire responses.

    The dataset is used in the work presented in the following articles:

    • Di Tecco, A., Genua, A., Serra, F., Camardella, C., Loconsole, C., Ragusa, E., ... & Frisoli, A. (2025, February). Evaluation of a Haptic-Actuated Glove for Remote Human-Robot Interaction (HRI): A Proof of Concept. In European Robotics Forum (pp. 213-219). Cham: Springer Nature Switzerland. DOI: 10.1007/978-3-031-89471-8_33.
    • Di Tecco, A., Frisoli, A., & Loconsole, C. (2025). Machine Learning Prediction on User Satisfaction in Human-Robot Interaction (HRI) Tasks. IEEE Access. DOI: 10.1109/ACCESS.2025.3597994.
  5. MimicDroidDataset

    • huggingface.co
    Cite
    Shah, MimicDroidDataset [Dataset]. https://huggingface.co/datasets/Rutav/MimicDroidDataset
    Authors
    Shah
    Description

    MimicDroid: In-Context Learning for Humanoid Robot Manipulation from Human Play Videos

    Paper | Project Page | Code
    This repository hosts the dataset used in the MimicDroid project. MimicDroid aims to enable humanoid robots to efficiently solve new manipulation tasks from a few video examples. It leverages human play videos—continuous, unlabeled videos of people interacting freely with their environment—as a scalable and diverse training data source for in-context learning (ICL)… See the full description on the dataset page: https://huggingface.co/datasets/Rutav/MimicDroidDataset.

  6. Humanoid Robot Market Analysis, Size, and Forecast 2025-2029: North America...

    • technavio.com
    pdf
    Updated Dec 24, 2024
    Cite
    Technavio (2024). Humanoid Robot Market Analysis, Size, and Forecast 2025-2029: North America (US and Canada), Europe (France, Germany, Italy, and UK), Middle East and Africa (Egypt, KSA, Oman, and UAE), APAC (China, India, and Japan), South America (Argentina and Brazil), and Rest of World (ROW) [Dataset]. https://www.technavio.com/report/humanoid-robot-market-industry-analysis
    Available download formats: pdf
    Dataset updated
    Dec 24, 2024
    Dataset provided by
    TechNavio
    Authors
    Technavio
    License

    https://www.technavio.com/content/privacy-notice

    Time period covered
    2025 - 2029
    Description


    Humanoid Robot Market Size 2025-2029

    The humanoid robot market is projected to increase by USD 59.18 billion, at a CAGR of 70.4% from 2024 to 2029. Demand for enhanced visibility and flexibility in industrial operations will drive the humanoid robot market.

    Major Market Trends & Insights

    North America dominated the market and is expected to account for 40% of growth during the forecast period.
    By Application - Personal assistance and caregiving segment was valued at USD 238.60 billion in 2023
    By Component - Hardware segment accounted for the largest market revenue share in 2023
    

    Market Size & Forecast

    Market Opportunities: USD 7.00 million
    Market Future Opportunities: USD 59,176.50 million
    CAGR: 70.4%
    North America: Largest market in 2023
    

    Market Summary

    The market represents a dynamic and innovative industry, driven by advancements in core technologies and applications. With the increasing demand for enhanced visibility and flexibility in industrial operations, humanoid robots are gaining significant traction. According to recent studies, the market is projected to reach a double-digit adoption rate by 2025, owing to the emergence of smart manufacturing and the growing demand for automation in various industries. However, ethical issues surrounding humanoid robots pose a significant challenge to market growth.
    Regulations and standards are being established to address these concerns, ensuring the safe and responsible integration of humanoid robots into our society. Core technologies, such as artificial intelligence, machine learning, and advanced sensors, continue to evolve, enabling humanoid robots to perform increasingly complex tasks. The market is further segmented into service types, including manufacturing, healthcare, entertainment, and security, among others.
    

    What will be the Size of the Humanoid Robot Market during the forecast period?


    How is the Humanoid Robot Market Segmented and what are the key trends of market segmentation?

    The humanoid robot industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.

    Application
      • Personal assistance and caregiving
      • Research and space exploration
      • Education and entertainment
      • Search and rescue
      • Public relations

    Component
      • Hardware
      • Software

    Motion Type
      • Biped
      • Wheel drive

    Geography
      • North America (US, Canada)
      • Europe (France, Germany, Italy, UK)
      • Middle East and Africa (Egypt, KSA, Oman, UAE)
      • APAC (China, India, Japan)
      • South America (Argentina, Brazil)
      • Rest of World (ROW)

    By Application Insights

    The personal assistance and caregiving segment is estimated to witness significant growth during the forecast period.

    Humanoid robots are revolutionizing personal assistance and caregiving services, with adoption in this sector experiencing a notable increase of 15%. This growth is fueled by the aging population, the rising demand for home care services, and technological advancements in robotics and artificial intelligence. By 2025, it is anticipated that the market for humanoid robots in personal assistance and caregiving will expand by 20%, offering solutions for elderly care, disability support, companionship, and mental health enhancement. These advanced robots integrate sophisticated technologies such as speech recognition, computer vision, deep learning, and impedance control, enabling them to understand and respond to human speech, recognize objects and environments, learn from experience, and mimic human movements.


    The Personal assistance and caregiving segment was valued at USD 238.60 billion in 2019 and showed a gradual increase during the forecast period.

    Moreover, humanoid robots incorporate software architecture, motion planning algorithms, autonomous navigation, haptic feedback, thermal management, sensor fusion, control architectures, anthropomorphic design, actuator systems, emergency stop systems, humanoid gait analysis, dexterous manipulation, reinforcement learning, human-robot interaction, humanoid locomotion, bio-inspired robotics, natural language processing, kinematic modeling, artificial intelligence, reduction gears, collision avoidance, servo motors, robotic limbs, torque control, robotic operating systems, sensors integration, power systems, dynamic balance control, machine learning, safety mechanisms, object recognition, path planning, and force sensors. These technological advancements offer numerous benefits, including improved efficiency, enhanced safety, and increased convenience for individuals in need of assistance.


    Regional Analysis

    North America is

  7. Replication Data for: Exploring Human-Robot Cooperation with Gamified User...

    • dataverse.no
    • search.dataone.org
    pdf +2
    Updated May 21, 2025
    Cite
    Gizem Ateş Venås (2025). Replication Data for: Exploring Human-Robot Cooperation with Gamified User Training: A User Study on Cooperative Lifting [Dataset]. http://doi.org/10.18710/CZGZVZ
    Available download formats: CSV (text/comma-separated-values), PDF, and TXT files
    Dataset updated
    May 21, 2025
    Dataset provided by
    DataverseNO
    Authors
    Gizem Ateş Venås
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Time period covered
    Oct 1, 2022 - Nov 1, 2022
    Dataset funded by
    The Research Council of Norway
    Description

    This dataset contains data within the field of human-robot cooperation. It is used in the article "Exploring Human-Robot Collaboration with Gamified User Training: A Study on Cooperative Lifting". There are two folders holding two distinct types of data. 1) Experiment Data: this folder holds each user's real-time motion and score data as well as robot motion data for each trial, stored in CSV format within subfolders named by anonymous user ID. 2) Survey Data: this folder contains the questionnaires that users completed before and after the experiment in PDF format, along with the answers in CSV format. Note that some answers that might reveal a user's identity were removed or coded under a category name; for instance, if a user answered "XXX engineer at XXX company", this answer is shown as "IT/Engineering" only.
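    A minimal sketch for iterating the per-user CSV folders described above; only the "Experiment Data" folder name, the per-user subfolders, and the CSV format are stated in the description, so everything else here is an assumption.

```python
# Walk "Experiment Data" and load every trial CSV per anonymous user ID.
from pathlib import Path

import pandas as pd

EXPERIMENT_DIR = Path("Experiment Data")

def load_user_trials(experiment_dir: Path) -> dict:
    """Map each anonymous user ID to the list of trial DataFrames in its subfolder."""
    trials = {}
    for user_dir in sorted(p for p in experiment_dir.iterdir() if p.is_dir()):
        trials[user_dir.name] = [pd.read_csv(csv_path) for csv_path in sorted(user_dir.glob("*.csv"))]
    return trials

data = load_user_trials(EXPERIMENT_DIR)
print({user: len(dfs) for user, dfs in data.items()})
```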

  8. Dataset for: "Incremental Semiparametric Inverse Dynamics Learning"

    • data.europa.eu
    • data.niaid.nih.gov
    unknown
    Updated Feb 28, 2017
    + more versions
    Cite
    Zenodo (2017). Dataset for: "Incremental Semiparametric Inverse Dynamics Learning" [Dataset]. https://data.europa.eu/data/datasets/oai-zenodo-org-344894?locale=nl
    Available download formats: unknown (2,874 bytes)
    Dataset updated
    Feb 28, 2017
    Dataset authored and provided by
    Zenodo (http://zenodo.org/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset used in the experimental section of the paper: R. Camoriano, S. Traversaro, L. Rosasco, G. Metta and F. Nori, "Incremental semiparametric inverse dynamics learning," 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, 2016, pp. 544-550. doi: 10.1109/ICRA.2016.7487177

    Abstract: This paper presents a novel approach for incremental semiparametric inverse dynamics learning. In particular, we consider the mixture of two approaches: parametric modeling based on rigid body dynamics equations and nonparametric modeling based on incremental kernel methods, with no prior information on the mechanical properties of the system. The result is an incremental semiparametric approach, leveraging the advantages of both the parametric and nonparametric models. We validate the proposed technique by learning the dynamics of one arm of the iCub humanoid robot. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7487177&isnumber=7487087

    Description: The file "iCubDyn_2.0.mat" contains data collected from the right arm of the iCub humanoid robot, considering as input the positions, velocities, and accelerations of the 3 shoulder joints and the elbow joint, and as outputs the 3 force and 3 torque components measured by the six-axis F/T sensor built into the upper arm. The dataset is collected at 10 Hz as the end-effector tracks circumferences with a 10 cm radius on the transverse (XY) and sagittal (XZ) planes (for more information on the iCub reference frames, see [4]) at approximately 0.6 m/s. The total number of points for each dataset is 10000, corresponding to approximately 17 minutes of continuous operation. Trajectories are generated by means of the Cartesian Controller presented in [5].

    Input (X)
    • Columns 1-4: joint (3 shoulder joints + 1 elbow joint) positions
    • Columns 5-8: joint (3 shoulder joints + 1 elbow joint) velocities
    • Columns 9-12: joint (3 shoulder joints + 1 elbow joint) accelerations

    Output (Y)
    • Columns 1-3: measured forces (N) along the X, Y, Z axes from the force-torque (F/T) sensor placed in the upper arm
    • Columns 4-6: measured torques (N*m) along the X, Y, Z axes from the force-torque (F/T) sensor placed in the upper arm

    Preprocessing
    • Velocities and accelerations are computed by an Adaptive Window Polynomial Fitting Estimator, implemented through a least-squares-based algorithm on an adaptive window (see [2], [3]). Velocity estimation max window size: 16. Acceleration estimation max window size: 25.
    • Positions, velocities, and accelerations are recorded at 9 Hz and oversampled to 20 Hz via cubic spline interpolation.
    • Forces and torques are directly recorded at 20 Hz.

    This dataset was used in [1] for experimental purposes. See section IV therein for further details. For more information, please contact: Raffaello Camoriano - raffaello.camoriano@iit.it; Silvio Traversaro - silvio.traversaro@iit.it

    References
    [1] Camoriano, Raffaello; Traversaro, Silvio; Rosasco, Lorenzo; Metta, Giorgio; Nori, Francesco, "Incremental Semiparametric Inverse Dynamics Learning", eprint arXiv:1601.04549, 01/2016
    [2] F. Janabi-Sharifi, V. Hayward, C.-S. J. Chen, "Discrete-time adaptive windowing for velocity estimation", IEEE Transactions on Control Systems Technology, pp. 1003-1009, Vol. 8, Issue 6, Nov 2000
    [3] https://github.com/robotology/icub-main/blob/master/src/libraries/ctrlLib/include/iCub/ctrl/adaptWinPolyEstimator.h
    [4] http://wiki.icub.org/wiki/ICubForwardKinematics
    [5] U. Pattacini, F. Nori, L. Natale, G. Metta, and G. Sandini, "An experimental evaluation of a novel minimum-jerk cartesian controller for humanoid robots," in Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, Oct 2010, pp. 1668-1674.
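    The column layout above maps directly onto array slices. A minimal loading sketch follows, assuming the .mat file stores the input and output matrices under variables named "X" and "Y"; if the variable names differ, scipy.io.whosmat can be used to inspect the file first.

```python
# Inspect iCubDyn_2.0.mat and split it according to the documented columns.
# The variable names "X" and "Y" are assumptions; verify with scipy.io.whosmat.
from scipy.io import loadmat

mat = loadmat("iCubDyn_2.0.mat")
X, Y = mat["X"], mat["Y"]        # X: N x 12 inputs, Y: N x 6 outputs

positions     = X[:, 0:4]        # 3 shoulder joints + elbow joint positions
velocities    = X[:, 4:8]        # joint velocities
accelerations = X[:, 8:12]       # joint accelerations
forces  = Y[:, 0:3]              # F/T sensor forces (N) along X, Y, Z
torques = Y[:, 3:6]              # F/T sensor torques (N*m) along X, Y, Z

print(X.shape, Y.shape)
```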

  9. Global Generative AI in Robotics Market Research Report: By Application...

    • wiseguyreports.com
    Updated Sep 15, 2025
    + more versions
    Cite
    (2025). Global Generative AI in Robotics Market Research Report: By Application (Manufacturing, Healthcare, Agriculture, Logistics, Consumer Robots), By Technology (Natural Language Processing, Computer Vision, Machine Learning, Deep Learning, Simulation), By End Use (Industrial, Commercial, Residential), By Robot Type (Autonomous Mobile Robots, Articulated Robots, Collaborative Robots, Humanoid Robots) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2035 [Dataset]. https://www.wiseguyreports.com/reports/generative-ai-in-robotic-market
    Dataset updated
    Sep 15, 2025
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Sep 25, 2025
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2023
    REGIONS COVERED: North America, Europe, APAC, South America, MEA
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2024: 1.99 (USD Billion)
    MARKET SIZE 2025: 2.46 (USD Billion)
    MARKET SIZE 2035: 20.0 (USD Billion)
    SEGMENTS COVERED: Application, Technology, End Use, Robot Type, Regional
    COUNTRIES COVERED: US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA
    KEY MARKET DYNAMICS: Rapid technological advancements, Increasing demand for automation, Enhanced decision-making capabilities, Rising investment in AI technologies, Growing applications across industries
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: IBM, General Motors, KUKA, Tesla, NVIDIA, Rockwell Automation, Boston Dynamics, Microsoft, Alphabet, UiPath, Denso, Fanuc, Siemens, ABB, Amazon, Yaskawa
    MARKET FORECAST PERIOD: 2025 - 2035
    KEY MARKET OPPORTUNITIES: Autonomous robotic systems development, Enhanced human-robot collaboration, Customizable AI training solutions, Improved predictive maintenance applications, AI-driven data analysis tools
    COMPOUND ANNUAL GROWTH RATE (CAGR): 23.3% (2025 - 2035)
  10. Data from: Machine learning driven self-discovery of the robot body...

    • datadryad.org
    • data.niaid.nih.gov
    • +3more
    zip
    Updated Dec 5, 2023
    Cite
    Fernando Diaz Ledezma; Sami Haddadin (2023). Machine learning driven self-discovery of the robot body morphology [Dataset]. http://doi.org/10.5061/dryad.h44j0zpsf
    Available download formats: zip
    Dataset updated
    Dec 5, 2023
    Dataset provided by
    Dryad
    Authors
    Fernando Diaz Ledezma; Sami Haddadin
    Time period covered
    Nov 9, 2023
    Description

    Conventionally, the kinematic structure of a robot is assumed to be known and data from external measuring devices are used mainly for calibration. We take an agent-centric perspective to explore whether a robot could learn its body structure by relying on scarce knowledge and depending only on unorganized proprioceptive signals. To achieve this, we analyze a mutual-information-based representation of the relationships between the proprioceptive signals, which we call proprioceptive information graphs (pi-graph), and use it to look for connections that reflect the underlying mechanical topology of the robot. We then use the inferred topology to guide the search for the morphology of the robot; i.e. the location and orientation of its joints. Results from different robots show that the correct topology and morphology can be effectively inferred from their pi-graph, regardless of the number of links and body configuration.
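    As a rough illustration of the pi-graph idea described above (not the authors' implementation), one can estimate pairwise mutual information between proprioceptive channels and keep the strongest links as graph edges; the threshold and estimator choices here are placeholders.

```python
# Generic pi-graph sketch: pairwise mutual information between sensor channels.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def pi_graph_edges(signals: np.ndarray, names: list, threshold: float):
    """signals: T x D array of proprioceptive time series; returns kept edges."""
    edges = []
    d = signals.shape[1]
    for i in range(d):
        mi = mutual_info_regression(signals, signals[:, i])  # MI of channel i vs all channels
        for j in range(i + 1, d):
            if mi[j] >= threshold:
                edges.append((names[i], names[j], float(mi[j])))
    return edges

rng = np.random.default_rng(0)
demo = rng.normal(size=(1000, 4))   # stand-in for real proprioceptive channels
demo[:, 1] += demo[:, 0]            # make two channels statistically related
print(pi_graph_edges(demo, ["q1", "q2", "q3", "q4"], threshold=0.2))
```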

  11. PhysicalAI-Robotics-GR00T-X-Embodiment-Sim

    • huggingface.co
    Updated Mar 18, 2025
    + more versions
    Cite
    NVIDIA (2025). PhysicalAI-Robotics-GR00T-X-Embodiment-Sim [Dataset]. https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim
    Dataset updated
    Mar 18, 2025
    Dataset provided by
    Nvidia (http://nvidia.com/)
    Authors
    NVIDIA
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PhysicalAI-Robotics-GR00T-X-Embodiment-Sim

    GitHub repo: Isaac GR00T N1. We provide a set of datasets used for post-training of GR00T N1. Each dataset is a collection of trajectories from different robot embodiments and tasks.

      Cross-embodied bimanual manipulation: 9k trajectories

    Dataset name (trajectories):
    • bimanual_panda_gripper.Threading: 1000
    • bimanual_panda_hand.LiftTray: 1000
    • bimanual_panda_gripper.ThreePieceAssembly: 1000
    … See the full description on the dataset page: https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim.

  12. PE-HRI-temporal: A Multimodal Temporal Dataset in a robot mediated...

    • zenodo.org
    csv
    Updated Sep 24, 2024
    Cite
    Jauwairia Nasir; Barbara Bruno; Pierre Dillenbourg (2024). PE-HRI-temporal: A Multimodal Temporal Dataset in a robot mediated Collaborative Educational Setting [Dataset]. http://doi.org/10.5281/zenodo.13834073
    Available download formats: csv
    Dataset updated
    Sep 24, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jauwairia Nasir; Barbara Bruno; Pierre Dillenbourg
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Sep 24, 2024
    Description

    Please note that this dataset corresponds to the training data used in "Social robots as skilled ignorant peers for supporting learning" [7]. This (second) version of the dataset additionally includes labels (PE score and cluster labels for each datapoint).

    This data set consists of multi-modal temporal team behaviors as well as learning outcomes collected in the context of a robot-mediated collaborative and constructivist learning activity called JUSThink [1,2]. The data set can be useful for those looking to explore the evolution of students' log actions, speech behavior, affective states, and gaze patterns in order to model constructs such as engagement, motivation, and collaboration in educational settings.

    In this data set, team level data is collected from 34 teams of two (68 children) where the children are aged between 9 and 12. There are two files:

    PE-HRI_learning_and_performance.csv: This file consists of the team level performance and learning metrics which are defined below:

    • last_error: This is the error of the last submitted solution. Note that if a team has found an optimal solution (error = 0) the game stops, therefore making last error = 0. This is a metric for performance in the task.

    • T_LG_absolute: A team-level learning outcome calculated by averaging the two individual absolute learning gains of the team members. The individual absolute gain is the difference between a participant's post-test and pre-test score, divided by the maximum score that can be achieved (10), which captures how much the participant learned of all the knowledge available.

    • T_LG_relative: A team-level learning outcome calculated by averaging the two individual relative learning gains of the team members. The individual relative gain is the difference between a participant's post-test and pre-test score, divided by the difference between the maximum score that can be achieved and the pre-test score. This captures how much the participant learned of the knowledge that they did not possess before the activity.

    • T_LG_joint_abs: A team-level learning outcome defined as the difference between the number of questions that both team members answer correctly in the post-test and in the pre-test, which captures the amount of knowledge acquired together by the team members during the activity.
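    The gain definitions above translate directly into code. A minimal sketch follows; the CSV already ships these metrics precomputed, so this is purely illustrative and the example scores are made up.

```python
# Team-level learning gains as defined above (illustrative only).
MAX_SCORE = 10  # maximum pre/post-test score

def absolute_gain(pre: float, post: float) -> float:
    return (post - pre) / MAX_SCORE

def relative_gain(pre: float, post: float) -> float:
    # Undefined when pre == MAX_SCORE (nothing left to learn).
    return (post - pre) / (MAX_SCORE - pre)

def team_gain(member_a, member_b, gain_fn) -> float:
    """Average an individual (pre, post) gain over the two team members."""
    return (gain_fn(*member_a) + gain_fn(*member_b)) / 2

print(team_gain((4, 7), (6, 9), absolute_gain))  # T_LG_absolute
print(team_gain((4, 7), (6, 9), relative_gain))  # T_LG_relative
```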

    PE-HRI_behavioral_timeseries_w_labels.csv: In this file, for each team, the interaction of around 20-25 minutes is organized in windows of 10 seconds; hence, we have a total of 5048 windows of 10 seconds each. We report team level log actions, speech behavior, affective states, and gaze patterns for each window. More specifically, within each window, 26 features are generated in two ways:

    1. non-incremental
    2. incremental

    A non-incremental type would mean the value of a feature in that particular time window while an incremental type would mean the value of a feature until that particular time window. The incremental type is indicated by an "_inc" at the end of the feature name. Hence, in the end, within each window, we have 52 values:

    • T_add/(_inc): The number of times a team added an edge on the map in that window/(until that window).

    • T_remove/(_inc): The number of times a team removed an edge from the map in that window/(until that window).

    • T_ratio_add_rem/(_inc): The ratio of addition of edges over deletion of edges by a team in that window/(until that window).

    • T_action/(_inc): The total number of actions taken by a team (add, delete, submit, presses on the screen) in that window/(until that window).

    • T_hist/(_inc): The number of times a team opened the sub-window with history of their previous solutions in that window/(until that window).

    • T_help/(_inc): The number of times a team opened the instructions manual in that window/(until that window). Please note that the robot initially gives all the instructions before the game-play while a video is played for demonstration of the functionality of the game.

    • T1_T1_rem/(_inc): The number of times either of the two members in the team followed the pattern consecutively: I add an edge, I then delete it in that window/(until that window).

    • T1_T1_add/(_inc): The number of times either of the two members in the team followed the pattern consecutively: I delete an edge, I add it back in that window/(until that window).

    • T1_T2_rem/(_inc): The number of times the members of the team followed the pattern consecutively: I add an edge, you then delete it in that window/(until that window).

    • T1_T2_add/(_inc): The number of times the members of the team followed the pattern consecutively: I delete an edge, you add it back in that window/(until that window).

    • redundant_exist/(_inc): The number of times the team had redundant edges in their map in that window/(until that window).

    • positive_valence/(_inc): The average value of positive valence for the team in that window/(until that window).

    • negative_valence/(_inc): The average value of negative valence for the team in that window/(until that window).

    • difference_in_valence/(_inc): The difference of the average value of positive and negative valence for the team in that window/(until that window).

    • arousal/(_inc): The average value of arousal for the team in that window/(until that window).

    • gaze_at_partner/(_inc): The average of the two team members' gaze when looking at their partner in that window/(until that window). Each individual member's gaze is calculated as a percentage of time in that window/(until that window).

    • gaze_at_robot/(_inc): The average of the two team members' gaze when looking at the robot in that window/(until that window). Each individual member's gaze is calculated as a percentage of time in that window/(until that window).

    • gaze_other/(_inc): The average of the two team members' gaze when looking in the direction opposite to the robot in that window/(until that window). Each individual member's gaze is calculated as a percentage of time in that window/(until that window).

    • gaze_at_screen_left/(_inc): The average of the two team members' gaze when looking at the left side of the screen in that window/(until that window). Each individual member's gaze is calculated as a percentage of time in that window/(until that window).

    • gaze_at_screen_right/(_inc): The average of the two team members' gaze when looking at the right side of the screen in that window/(until that window). Each individual member's gaze is calculated as a percentage of time in that window/(until that window).

    • T_speech_activity/(_inc): The average of the two team members' speech activity in that window/(until that window). Each individual member's speech activity is calculated as the percentage of time they are speaking in that window/(until that window).

    • T_silence/(_inc): The average of the two team members' silence in that window/(until that window). Each individual member's silence is calculated as a percentage of time in that window/(until that window).

    • T_short_pauses/(_inc): The average of the two team members' short pauses over their speech activity in that window/(until that window). Each individual member's short pause refers to a brief pause of 0.15 seconds and is calculated as a percentage of time in that window/(until that window).

    • T_long_pauses/(_inc): The average of the two team members' long pauses over their speech activity in that window/(until that window). Each individual member's long pause refers to a pause of 1.5 seconds and is calculated as a percentage of time in that window/(until that window).

    • T_overlap/(_inc): The average percentage of time the speech of the team members overlaps in that window/(until that window).

    • T_overlap_to_speech_ratio/(_inc): The ratio of the speech overlap over the speech activity of the team in that window/(until that window).

    Apart from these 52 values, within each window, we also indicate:

    • team: The team to which the window belongs.
    • time_in_secs: Time in seconds until that window.
    • window: The window number.
    • normalized_time: The time when this window occurred with respect to the total duration of the task for a particular team.
    • cluster_labels: The cluster number associated with each time window, in reference to the productive and non-productive clusters found in [3].
    • PE_score: The Productive Engagement score in each window.
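    The "_inc" columns are cumulative versions of their per-window counterparts, so a count-type incremental feature can be reproduced as a cumulative sum over the ordered windows within each team. A minimal pandas sketch follows; it assumes the CSV column names match the feature names listed above, which should be checked against the file header.

```python
# Recompute an incremental count feature (e.g. T_add_inc) from its
# non-incremental counterpart. Column names are taken from the feature list
# above but should be verified against the actual CSV header.
import pandas as pd

df = pd.read_csv("PE-HRI_behavioral_timeseries_w_labels.csv")
df = df.sort_values(["team", "window"])

# Cumulative sum of the per-window additions within each team.
df["T_add_inc_check"] = df.groupby("team")["T_add"].cumsum()
print(df[["team", "window", "T_add", "T_add_inc_check"]].head())
```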

    Lastly, we briefly elaborate on how the features are operationalised. We extract log behaviors from the recorded rosbags, while the behaviors related to both gaze and affective states are computed through the open-source library OpenFace [6], which returns both facial action units (AUs) and gaze angles. For voice activity detection (VAD), which classifies whether a piece of audio is voiced or unvoiced, we made use of the Python wrapper for the open-source Google WebRTC VAD. The literature that inspired our

  13. DataSheet1_DeepClaw 2.0: A Data Collection Platform for Learning Human...

    • frontiersin.figshare.com
    pdf
    Updated Jun 4, 2023
    Cite
    Haokun Wang; Xiaobo Liu; Nuofan Qiu; Ning Guo; Fang Wan; Chaoyang Song (2023). DataSheet1_DeepClaw 2.0: A Data Collection Platform for Learning Human Manipulation.PDF [Dataset]. http://doi.org/10.3389/frobt.2022.787291.s001
    Available download formats: pdf
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    Frontiers
    Authors
    Haokun Wang; Xiaobo Liu; Nuofan Qiu; Ning Guo; Fang Wan; Chaoyang Song
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Besides direct interaction, human hands are also skilled at using tools to manipulate objects for typical life and work tasks. This paper proposes DeepClaw 2.0 as a low-cost, open-sourced data collection platform for learning human manipulation. We use an RGB-D camera to visually track the motion and deformation of a pair of soft finger networks on a modified kitchen tong operated by human teachers. These fingers can be easily integrated with robotic grippers to bridge the structural mismatch between humans and robots during learning. The deformation of the soft finger networks, which reveals tactile information in contact-rich manipulation, is captured passively. We collected a comprehensive sample dataset involving five human demonstrators in ten manipulation tasks with five trials per task. As a low-cost, open-sourced platform, we also developed an intuitive interface that converts the raw sensor data into state-action data for imitation learning problems. For learning-by-demonstration problems, we further demonstrated our dataset's potential by using real robotic hardware to collect joint actuation data, or by using a simulated environment when access to the hardware is limited.

  14. Data from: Humanoid robot as an educational assistant – insights of speech...

    • tandf.figshare.com
    xlsx
    Updated Mar 11, 2025
    Cite
    Akshara Pande; Deepti Mishra (2025). Humanoid robot as an educational assistant – insights of speech recognition for online and offline mode of teaching [Dataset]. http://doi.org/10.6084/m9.figshare.25712733.v1
    Available download formats: xlsx
    Dataset updated
    Mar 11, 2025
    Dataset provided by
    Taylor & Francis
    Authors
    Akshara Pande; Deepti Mishra
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Technology has the potential to enhance the effectiveness of the teaching and learning process. With the integration of technological resources, educators can create dynamic and interactive learning environments that offer diverse learning methods. With the help of these resources, students may be able to understand any topic deeply. Incorporating humanoid robots provides a valuable approach that combines the benefits of technology with the personal touch of human interaction. The role of speech is important in education; students might face challenges due to accent and auditory problems. The display of the text on the robot's screen can be beneficial for students to understand the speech better. In the present study, our objective is to integrate speech transcription with the humanoid robot Pepper and to explore its performance as an educational assistant in online and offline modes of teaching. The findings of this study suggest that Pepper's speech recognition system is a suitable candidate for both modes of teaching, regardless of the participant's gender. We expect that integrating humanoid robots into education may lead to more adaptive and efficient teaching and learning, resulting in improved learning outcomes and a richer educational experience.

  15. Real-world human-robot interaction data with robotic pets in user homes in...

    • search.dataone.org
    • dataone.org
    • +2more
    Updated Jan 3, 2024
    Cite
    Casey Bennett; Selma Sabanovic; Cedomir Stanojevic; Jennifer Piatt; Zachary Henkel; Kenna Baugus; Cindy Bethel; Seongcheol Kim; Jinjae Lee (2024). Real-world human-robot interaction data with robotic pets in user homes in the United States and South Korea [Dataset]. http://doi.org/10.5061/dryad.tb2rbp078
    Dataset updated
    Jan 3, 2024
    Dataset provided by
    Dryad Digital Repository
    Authors
    Casey Bennett; Selma Sabanovic; Cedomir Stanojevic; Jennifer Piatt; Zachary Henkel; Kenna Baugus; Cindy Bethel; Seongcheol Kim; Jinjae Lee
    Time period covered
    Oct 17, 2023
    Description

    Socially-assistive robots (SARs) hold significant potential to transform the management of chronic healthcare conditions (e.g. diabetes, Alzheimer's, dementia) outside the clinic walls. However, doing so entails embedding such autonomous robots into people's daily lives and home living environments, which are deeply shaped by the cultural and geographic locations within which they are situated. That begs the question of whether we can design autonomous interactive behaviors between SARs and humans based on universal machine learning (ML) and deep learning (DL) models of robotic sensor data that would work across such diverse environments. To investigate this, we conducted a long-term user study with 26 participants across two diverse locations (the United States and South Korea) with SARs deployed in each user's home for several weeks. We collected robotic sensor data every second of every day, combined with sophisticated ecological momentary assessment (EMA) sampling techniques, to gene...

    Data was collected from robot sensors during deployment of the robots in user homes over a period of 3 weeks, using a sampling technique called ecological momentary assessment (EMA) in order to generate realistic real-world interaction data.

    # Real-World Human-Robot Interaction Data with Robotic Pets in User Homes in the United States and South Korea

    The study included 26 participants, 13 from South Korea and 13 from the United States. The participants were drawn from the general population aged 20-35 and living alone; approximately 70% of the sample was female. The robot included sensors that could detect light, sound, movement, indoor air quality, and other environmental health data in the vicinity of the robot (please refer to associated published papers for details). While sensor data was collected via the collars, self-reported interaction behavior modalities were collected simultaneously using the Expiwell EMA mobile app.

    Description of the data and file structure

    Sensor data from the robot ("feature" data) was collected roughly 9 times per second, every minute of every day, across the three-week deployment period. Meanwhile, the interaction modality data ("target" data) was collected via the EMA app rand...

  16. Few-Shot Robot Learning Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 6, 2025
    Cite
    Growth Market Reports (2025). Few-Shot Robot Learning Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/few-shot-robot-learning-market
    Available download formats: pptx, csv, pdf
    Dataset updated
    Oct 6, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Few-Shot Robot Learning Market Outlook



    According to our latest research, the global Few-Shot Robot Learning market size reached USD 1.42 billion in 2024 and is projected to grow at a robust CAGR of 27.5% from 2025 to 2033. By the end of the forecast period, the market is expected to attain a value of USD 13.37 billion. This impressive growth trajectory is driven by the surging adoption of artificial intelligence and machine learning in robotics, the increasing demand for flexible automation across industries, and the rapid evolution of robotic capabilities enabled by few-shot learning methodologies.




    The growth of the Few-Shot Robot Learning market is significantly fueled by the need for robots to adapt quickly to new tasks with minimal data. Traditional machine learning models require vast datasets and extensive training, which is both time-consuming and resource-intensive. However, few-shot learning techniques enable robots to learn new skills or adapt to new environments with only a handful of examples. This is particularly valuable in dynamic industrial settings where tasks and operating conditions frequently change, reducing downtime and enhancing operational efficiency. The increasing integration of AI-driven robotics in sectors such as manufacturing, healthcare, and logistics is pushing organizations to seek more agile and intelligent automation solutions, further propelling market growth.




    Another key driver is the rapid advancement in computational power and sensor technologies, which has enabled more sophisticated few-shot learning algorithms to be deployed on robotic platforms. Enhanced hardware capabilities, such as advanced processors and high-resolution cameras, allow robots to process and interpret complex data in real-time. When combined with robust software frameworks, these advancements facilitate the seamless implementation of few-shot learning in diverse applications, from industrial automation to collaborative robotics. Additionally, the proliferation of cloud computing and edge AI has made it easier for enterprises to deploy and scale few-shot learning solutions, lowering barriers to entry and accelerating adoption across various industries.




    The increasing focus on human-robot collaboration and the need for robots to safely and efficiently interact with humans are also major growth factors for the Few-Shot Robot Learning market. As robots are deployed in more human-centric environments, such as healthcare facilities and collaborative manufacturing spaces, the ability to quickly learn and adapt to new tasks becomes critical. Few-shot learning empowers robots to generalize from limited data, enabling them to better understand and respond to human behavior and intent. This not only improves safety and productivity but also opens up new possibilities for robots to assist in complex, unstructured environments where traditional programming falls short.




    Regionally, North America leads the global market, accounting for the largest share in 2024, followed by Europe and Asia Pacific. The presence of leading technology companies, strong research and development ecosystems, and early adoption of advanced robotics solutions contribute to North America's dominance. Meanwhile, Asia Pacific is witnessing the fastest growth rate, driven by rapid industrialization, increasing investments in automation, and the emergence of smart manufacturing hubs in countries like China, Japan, and South Korea. Europe remains a key market, supported by robust automotive and aerospace sectors, as well as strategic government initiatives to promote AI and robotics innovation.





    Component Analysis



    The Few-Shot Robot Learning market by component is segmented into software, hardware, and services, each playing a pivotal role in the overall ecosystem. Software forms the backbone of few-shot learning, encompassing machine learning frameworks, data processing tools, and algorithm libraries that enable robots to interpret and learn from limited data sets. The gro

  17. AI-Powered Humanoid Robots Market Analysis, Size, and Forecast 2025-2029:...

    • technavio.com
    pdf
    Updated Aug 15, 2025
    Cite
    Technavio (2025). AI-Powered Humanoid Robots Market Analysis, Size, and Forecast 2025-2029: North America (US and Canada), Europe (France, Germany, and UK), APAC (Australia, China, India, Japan, and South Korea), and Rest of World (ROW) [Dataset]. https://www.technavio.com/report/ai-powered-humanoid-robots-market-industry-analysis
    Explore at:
    pdf (available download formats)
    Dataset updated
    Aug 15, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    License

    https://www.technavio.com/content/privacy-notice

    Time period covered
    2025 - 2029
    Description


    AI-Powered Humanoid Robots Market Size 2025-2029

    The AI-powered humanoid robots market size is expected to increase by USD 3.72 billion, at a CAGR of 35.4% from 2024 to 2029. Unprecedented advances in AI and ML will drive the AI-powered humanoid robots market.

    Market Insights

    North America dominated the market and is expected to account for 41% of market growth during 2025-2029.
    By Component - Hardware segment was valued at USD 219.80 billion in 2023
    By Type - Wheel-driven robots segment accounted for the largest market revenue share in 2023
    

    Market Size & Forecast

    Market Opportunities: USD 1.00 million 
    Market Future Opportunities 2024: USD 3715.80 million
    CAGR from 2024 to 2029 : 35.4%
    

    Market Summary

    The market represents a pivotal intersection of artificial intelligence (AI) and robotics, showcasing unprecedented advancements in technology. This convergence is driven by the growing need for automation, increased efficiency, and enhanced productivity across industries. Humanoid robots, equipped with advanced AI and machine learning capabilities, are increasingly being adopted for various applications, including manufacturing, healthcare, education, and entertainment. One real-world business scenario that exemplifies the potential of AI-powered humanoid robots is supply chain optimization. In a global manufacturing company, humanoid robots are employed to streamline warehouse operations, ensuring efficient order fulfillment and inventory management. These robots can learn and adapt to their environment, optimizing their movements and tasks to minimize errors and increase productivity.
    Furthermore, they can work alongside human workers, enhancing safety and reducing labor costs. However, the market faces challenges, such as the high cost of development and implementation, as well as unproven return on investment. Despite these hurdles, the potential benefits of AI-powered humanoid robots are significant, making them an intriguing and promising area of investment and innovation for businesses worldwide.
    

    What will be the size of the AI-Powered Humanoid Robots Market during the forecast period?


    The market continues to evolve, driven by advancements in artificial intelligence, robotics, and electrical engineering. One notable trend is the increasing adoption of teleoperation systems, enabling remote control interfaces and cloud robotics for enhanced task automation and system integration. In the realm of industrial robotics, companies are integrating advanced robotic technologies, such as motor control systems, robot programming languages, and data processing pipelines, to streamline manufacturing processes and improve productivity.
    Simulation environments and augmented reality systems are also gaining traction, enabling more efficient design and testing phases. Moreover, social robotics and collaborative robots are increasingly being adopted for assistive purposes, improving human-robot interaction and enhancing safety in various industries. Robot cognitive abilities, such as perception and reasoning, are advancing rapidly, enabling robots to perform complex tasks and make decisions autonomously. These developments have significant implications for business strategy, particularly in areas such as compliance and budgeting. Companies must stay informed about the latest advancements and trends to remain competitive and optimize their operations. By investing in AI-powered humanoid robots, businesses can achieve substantial improvements in efficiency, productivity, and overall performance.
    

    Unpacking the AI-Powered Humanoid Robots Market Landscape

    In the realm of advanced automation, AI-powered humanoid robots are revolutionizing industries with their autonomous navigation, haptic feedback systems, and dynamic control systems. These robots, equipped with computer vision systems, enable businesses to achieve up to 30% improvement in production efficiency and 25% reduction in errors. Ethical considerations are addressed through reinforcement learning and bio-inspired robotics, ensuring compliance with industry standards. Robotic manipulation skills, facilitated by actuator control systems, emotional AI models, and human-robot interaction, enhance operational lifespan and productivity. Lidar sensor integration, anthropomorphic design, safety protocols, and robotic arm design contribute to seamless humanoid robot design and maintenance protocols. Deep learning frameworks, natural language processing, facial expression recognition, and SLAM algorithms enable advanced AI-powered dexterity and force feedback sensors, while object recognition models and sensor fusion techniques optimize humanoid robot locomotion. Machine learning algorithms and power efficiency metrics ensure optimal performance and cost savings.

    Key Market Drivers Fueling Growth

    The unprecedented advancements in Artificial Inte

  18. D

    Mobile Robot Synthetic Data Generation Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). Mobile Robot Synthetic Data Generation Market Research Report 2033 [Dataset]. https://dataintelo.com/report/mobile-robot-synthetic-data-generation-market
    Explore at:
    pptx, csv, pdf (available download formats)
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Mobile Robot Synthetic Data Generation Market Outlook




    According to our latest research, the global Mobile Robot Synthetic Data Generation market size was valued at USD 1.21 billion in 2024, and is expected to reach USD 8.36 billion by 2033, exhibiting a compound annual growth rate (CAGR) of 23.9% during the forecast period. The primary growth driver for this market is the increasing adoption of mobile robots across various industries, which has created an urgent need for large-scale, high-quality synthetic datasets to train and validate artificial intelligence (AI) and machine learning (ML) models. As per our latest research, the surge in demand for robust and accurate perception systems in autonomous robots is fueling the expansion of synthetic data generation solutions globally.




    A significant growth factor for the Mobile Robot Synthetic Data Generation market is the rapid advancement in AI and ML algorithms, which require voluminous and diverse datasets for effective training. Real-world data collection for mobile robots is often expensive, time-consuming, and limited by privacy concerns, especially in sectors like healthcare and defense. Synthetic data generation addresses these challenges by enabling the creation of photo-realistic, scalable, and customizable datasets that mimic real-world environments and scenarios. This allows developers to simulate rare or hazardous events, thus enhancing the robustness and safety of mobile robot navigation, object detection, and decision-making capabilities. The proliferation of simulation platforms and 3D modeling tools further accelerates the adoption of synthetic data solutions, as companies seek to reduce development cycles and improve the reliability of their robotic systems.
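
    As a purely illustrative sketch of the domain-randomization pattern this paragraph alludes to, the loop below samples randomized scene parameters and emits an observation together with the ground-truth label that the generator knows for free. Every name, parameter range, and the placeholder render step are invented; a real pipeline would call a simulator or game-engine renderer.

```python
# Hypothetical domain-randomization loop for synthetic training data.
import json
import random

def sample_scene():
    """Randomize the factors a real pipeline would vary inside a simulator."""
    return {
        "object_class": random.choice(["pallet", "bin", "person"]),
        "object_pose": [random.uniform(-2, 2), random.uniform(-2, 2), random.uniform(0, 3.14)],
        "lighting_lux": random.uniform(50, 1000),
        "camera_noise_sigma": random.uniform(0.0, 0.05),
    }

def render(scene, index):
    """Placeholder for a renderer call; returns a stand-in path for the rendered frame."""
    return {"image_path": f"synthetic/frame_{index:05d}.png"}

samples = []
for i in range(5):
    scene = sample_scene()
    observation = render(scene, i)
    # Ground truth is known by construction, so no manual labelling is needed.
    samples.append({**observation,
                    "label": {"class": scene["object_class"], "pose": scene["object_pose"]}})

print(json.dumps(samples[0], indent=2))
```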




    Another major driver is the growing deployment of mobile robots in logistics, manufacturing, and agriculture, where robots must operate in dynamic, unstructured environments. The complexity and variability of these operational contexts necessitate advanced perception and localization capabilities, which can be effectively developed using synthetic data. In logistics and warehousing, for instance, synthetic data enables the modeling of diverse warehouse layouts, object types, and human-robot interactions—scenarios that are difficult to capture comprehensively in real-world datasets. Similarly, in agriculture, synthetic data generation can simulate varying crop conditions, weather scenarios, and terrain types, supporting the development of autonomous robots capable of precision farming. The scalability and flexibility of synthetic data generation are thus instrumental in meeting the evolving requirements of mobile robot applications across industries.




    The increasing integration of synthetic data generation with cloud-based platforms and digital twin technologies is also propelling market growth. Cloud deployment offers scalability, accessibility, and cost-effectiveness, making it easier for organizations to generate and manage large volumes of synthetic data. Digital twins, which are virtual replicas of physical environments, enable the creation of highly realistic training datasets for mobile robots, facilitating iterative testing and rapid prototyping. These technological advancements are driving the adoption of synthetic data generation solutions, particularly among small and medium-sized enterprises (SMEs) that may lack the resources for extensive real-world data collection. As a result, the market is witnessing a democratization of AI-driven robotics development, further accelerating innovation and market expansion.




    From a regional perspective, North America currently holds the largest share of the Mobile Robot Synthetic Data Generation market, driven by significant investments in robotics R&D, the presence of leading technology companies, and strong demand from sectors such as logistics, defense, and healthcare. Europe follows closely, with robust government support for AI research and widespread adoption of automation in manufacturing. The Asia Pacific region is expected to witness the fastest growth during the forecast period, fueled by rapid industrialization, increasing adoption of robotics in agriculture and manufacturing, and the expansion of technology hubs in countries like China, Japan, and South Korea. These regional trends underscore the global nature of the market and highlight the diverse opportunities for growth and innovation across different geographies.



    C

  19. S

    Data from: Kaiwu Dataset

    • scidb.cn
    Updated Oct 16, 2024
    + more versions
    Cite
    Jiang Shuo; Li Haonan; Ren Ruochen; Zhou Yanmin; Wang Zhipeng; He Bin (2024). Kaiwu Dataset [Dataset]. http://doi.org/10.57760/sciencedb.14937
    Explore at:
    Croissant (Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 16, 2024
    Dataset provided by
    Science Data Bank
    Authors
    Jiang Shuo; Li Haonan; Ren Ruochen; Zhou Yanmin; Wang Zhipeng; He Bin
    Description

    The dataset first provides an integration of human, environment and robot data collection framework with 20 subjects and 30 interaction objects resulting in totally 11,664 instances of integrated actions. For each of the demonstration, hand motions, operation pressures, sounds of the assembling process, multi-view videos, high-precision motion capture information, eye gaze with first-person videos, electromyography signals are all recorded. Fine-grained multi-level annotation based on absolute timestamp, and semantic segmentation labelling are performed. Kaiwu dataset aims to facilitate robot learning, dexterous manipulation, human intention investigation and human-robot collaboration research.
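
    Because the modalities are annotated against absolute timestamps, one plausible preprocessing step is to synchronize the streams onto a single reference clock. The pandas sketch below is hedged: the file names, column names, choice of the video index as the reference stream, and the 50 ms matching tolerance are all assumptions rather than details taken from the Kaiwu release.

```python
# Hypothetical synchronization of multi-modal streams onto one reference timeline.
import pandas as pd

def load_stream(path):
    """Load a per-modality CSV (invented layout) and sort by absolute timestamp."""
    return pd.read_csv(path, parse_dates=["timestamp"]).sort_values("timestamp")

video = load_stream("multiview_video_index.csv")   # reference clock: one row per frame
emg = load_stream("emg.csv")
pressure = load_stream("operation_pressure.csv")

merged = video
for name, stream in [("emg", emg), ("pressure", pressure)]:
    merged = pd.merge_asof(
        merged, stream, on="timestamp",
        direction="nearest",
        tolerance=pd.Timedelta("50ms"),            # drop matches farther than 50 ms apart
        suffixes=("", f"_{name}"),
    )

print(merged.head())
```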

  20. G

    Robot Learning from Demonstration Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 22, 2025
    Cite
    Growth Market Reports (2025). Robot Learning from Demonstration Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/robot-learning-from-demonstration-market
    Explore at:
    pdf, csv, pptx (available download formats)
    Dataset updated
    Aug 22, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Robot Learning from Demonstration Market Outlook



    According to our latest research, the global Robot Learning from Demonstration market size reached USD 1.42 billion in 2024, registering a robust growth trajectory. The market is projected to expand at a CAGR of 22.7% from 2025 to 2033, culminating in a forecasted market size of USD 6.25 billion by 2033. This remarkable growth is primarily fueled by the increasing adoption of automation, advancements in artificial intelligence, and the rising need for flexible robotic solutions across diverse industries.



    The proliferation of Industry 4.0 and the shift towards smart manufacturing are key drivers propelling the Robot Learning from Demonstration market. Organizations are increasingly prioritizing operational efficiency, productivity, and safety, leading to a surge in demand for robots capable of learning complex tasks by observing human actions. The ability of robots to quickly adapt to new environments and processes through demonstration-based learning eliminates the need for extensive manual programming, reducing downtime and cost. This trend is particularly evident in sectors such as manufacturing, logistics, and healthcare, where collaborative robots are being deployed to handle intricate tasks, streamline workflows, and improve overall output quality.



    Another significant growth factor for the Robot Learning from Demonstration market is the rapid advancement in machine learning algorithms and sensor technologies. Enhanced perception capabilities, combined with sophisticated learning frameworks such as imitation learning and inverse reinforcement learning, enable robots to interpret and replicate human behaviors with high precision. These technological innovations are paving the way for the integration of robot learning from demonstration in applications ranging from assembly lines and surgical assistance to autonomous vehicles and warehouse automation. The convergence of cloud computing, edge AI, and real-time data analytics further accelerates the deployment and scalability of these solutions across various industry verticals.
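
    As a minimal, hedged illustration of the learning-from-demonstration setup described above, behavior cloning reduces to supervised regression from observed states to demonstrated actions. The linear least-squares policy and the synthetic demonstration log below are stand-ins for the richer imitation-learning models and real demonstration data the report refers to.

```python
# Behavior cloning as plain supervised regression on a (synthetic) demonstration log.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical demonstrations: states (e.g. joint angles + object position)
# paired with the actions a human demonstrator commanded in those states.
states = rng.normal(size=(200, 8))
true_policy = rng.normal(size=(8, 3))
actions = states @ true_policy + 0.01 * rng.normal(size=(200, 3))

# Fit a linear policy that imitates the demonstrator: minimize ||states @ W - actions||.
weights, *_ = np.linalg.lstsq(states, actions, rcond=None)

new_state = rng.normal(size=(1, 8))
print("predicted action:", new_state @ weights)
```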



    The market is also benefitting from increased investments in research and development by both private and public entities. Governments worldwide are recognizing the transformative potential of robotics and AI, resulting in supportive policies, funding, and collaborative initiatives aimed at fostering innovation. Academic institutions and research organizations are actively contributing to the evolution of robot learning from demonstration, bridging the gap between theoretical advancements and practical implementations. This collaborative ecosystem is fostering the emergence of novel use cases, standardization efforts, and cross-industry partnerships, thereby amplifying the market's growth prospects.



    Regionally, Asia Pacific dominates the Robot Learning from Demonstration market owing to its robust manufacturing base, high adoption of industrial automation, and significant investments in technological infrastructure. North America follows closely, driven by a strong focus on innovation, a mature robotics ecosystem, and the presence of leading technology companies. Europe is witnessing steady growth, supported by government initiatives and a thriving automotive sector. Meanwhile, emerging markets in Latin America and the Middle East & Africa are gradually embracing robot learning from demonstration, leveraging it to enhance productivity and competitiveness in their respective industries.





    Component Analysis



    The Component segment of the Robot Learning from Demonstration market is categorized into software, hardware, and services, each playing a pivotal role in the overall ecosystem. Software solutions form the backbone of robot learning from demonstration, encompassing advanced machine learning algorithms, simulation environments, and data analytics platforms. These software components enable robots to interpret demonstrations, extract relevant features, and generalize learned behaviors to new scenarios. The increasing sophisticatio
