100+ datasets found
  1. Data from: Machine learning driven self-discovery of the robot body...

    • search.dataone.org
    • data.niaid.nih.gov
    • +2more
    Updated Aug 15, 2024
    Cite
    Fernando Diaz Ledezma; Sami Haddadin (2024). Machine learning driven self-discovery of the robot body morphology [Dataset]. http://doi.org/10.5061/dryad.h44j0zpsf
    Explore at:
    Dataset updated
    Aug 15, 2024
    Dataset provided by
    Dryad Digital Repository
    Authors
    Fernando Diaz Ledezma; Sami Haddadin
    Time period covered
    Jan 1, 2023
    Description

    Conventionally, the kinematic structure of a robot is assumed to be known and data from external measuring devices are used mainly for calibration. We take an agent-centric perspective to explore whether a robot could learn its body structure by relying on scarce knowledge and depending only on unorganized proprioceptive signals. To achieve this, we analyze a mutual-information-based representation of the relationships between the proprioceptive signals, which we call proprioceptive information graphs (pi-graph), and use it to look for connections that reflect the underlying mechanical topology of the robot. We then use the inferred topology to guide the search for the morphology of the robot; i.e. the location and orientation of its joints. Results from different robots show that the correct topology and morphology can be effectively inferred from their pi-graph, regardless of the number of links and body configuration.

    The datasets contain the proprioceptive signals for a robot arm, a hexapod, and a humanoid, including joint position, velocity, torque, body angular and linear velocities, and body angular and linear accelerations. The robot manipulator experiment used simulated robot joint trajectories to generate the proprioceptive signals. These signals were computed using the robot's Denavit-Hartenberg parameters and the Newton-Euler method with artificially added noise. In the physical experiment, joint trajectories were optimized for joint velocity signal entropy, and measurements were obtained directly from encoders, torque sensors, and inertial measurement units (IMU). In the hexapod and humanoid robot experiments, sensor data was collected from a physics simulator (Gazebo 11) using virtual IMU sensors. Filters were applied to handle measurement noise, including low-pass filters for offline estimation and moving average filters for online estimation, emphasizing noise reduction for angular veloc...

    Machine Learning Driven Self-Discovery of the Robot Body Morphology

    The repository contains:

    • Data sets
    • Links to MATLAB source code

    Requirements

    • MATLAB's Robotics System Toolbox
    • MATLAB's Optimization Toolbox
    • Toolboxes for optimization on manifolds and matrices (MANOPT)
    • Java Information Dynamics Toolkit (JIDT)

    NOTE: MATLAB 2021b was used.
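
    The pi-graph described above is built from pairwise mutual information between proprioceptive signals (the authors' pipeline uses MATLAB with JIDT). As a rough illustration only, the sketch below estimates pairwise mutual information with a simple histogram estimator in Python and thresholds it into an adjacency matrix; the function names, threshold, and estimator are assumptions, not the published method.

```python
# Illustrative sketch (not the authors' MATLAB/JIDT pipeline): estimate pairwise
# mutual information between proprioceptive channels and keep the strongest
# links as a rough stand-in for a pi-graph adjacency matrix.
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram-based MI estimate (in nats) between two 1-D signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

def pi_graph(signals, threshold=0.1):
    """signals: (T, N) array of N proprioceptive channels over T samples.
    Returns an N x N matrix of thresholded pairwise MI values."""
    n = signals.shape[1]
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi = mutual_information(signals[:, i], signals[:, j])
            if mi > threshold:
                adj[i, j] = adj[j, i] = mi
    return adj

# Example with synthetic data: 1000 samples of 6 channels.
signals = np.random.randn(1000, 6)
print(pi_graph(signals).round(2))
```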

    Sharing/Access information

    All datasets are also publicly available at Kaggle; these are the corresponding links:

    • Simulated robot manipulator with fixed and moving base here
    • Physical manipulator experiment (fixed base) here
    • Simulated hexapod robot here
    • Simulated humanoid robot [h...
  2. Data and trained models for: Human-robot facial co-expression

    • datadryad.org
    • search.dataone.org
    zip
    Updated Mar 5, 2024
    Cite
    Yuhang Hu; Boyuan Chen; Jiong Lin; Yunzhe Wang; Yingke Wang; Cameron Mehlman; Hod Lipson (2024). Data and trained models for: Human-robot facial co-expression [Dataset]. http://doi.org/10.5061/dryad.gxd2547t7
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 5, 2024
    Dataset provided by
    Dryad
    Authors
    Yuhang Hu; Boyuan Chen; Jiong Lin; Yunzhe Wang; Yingke Wang; Cameron Mehlman; Hod Lipson
    Time period covered
    Feb 16, 2024
    Description

    Dataset for Paper "Human-Robot Facial Co-expression"

    Overview

    This dataset accompanies the research on human-robot facial co-expression, aiming to enhance nonverbal interaction by training robots to anticipate and simultaneously execute human facial expressions. Our study proposes a method where robots can learn to predict forthcoming human facial expressions and execute them in real time, thereby making the interaction feel more genuine and natural.

    https://doi.org/10.5061/dryad.gxd2547t7

    Description of the data and file structure

    The dataset is organized into several zip files, each containing different components essential for replicating our study's results or for use in related research projects:

    • pred_training_data.zip: Contains the data used for training the predictive model. This dataset is crucial for developing models that predict human facial expressions based on input frames.
    • pred_model.zip: Contains the...
  3. Data from: HRI30: An Action Recognition Dataset for Industrial Human-Robot...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jan 25, 2022
    Cite
    Francesco Iodice; Elena De Momi; Arash Ajoudani (2022). HRI30: An Action Recognition Dataset for Industrial Human-Robot Interaction [Dataset]. http://doi.org/10.5281/zenodo.5833411
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 25, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Francesco Iodice; Elena De Momi; Arash Ajoudani
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A thorough analysis of the existing human action recognition datasets demonstrates that only a few HRI datasets are available that target real-world applications, all of which are adapted to home settings. Therefore, given the shortage of datasets in industrial tasks, we aim to provide the community with a dataset created in a laboratory setting that includes actions commonly performed within manufacturing and service industries. In addition, the proposed dataset meets the requirements of deep learning algorithms for the development of intelligent learning models for action recognition and imitation in HRI applications.

  4. Robot Motion Dataset

    • zenodo.org
    • data.niaid.nih.gov
    Updated Apr 2, 2025
    Cite
    Antonio Di Tecco; Alessandro Genua (2025). Robot Motion Dataset [Dataset]. http://doi.org/10.5281/zenodo.13893059
    Explore at:
    Dataset updated
    Apr 2, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Antonio Di Tecco; Alessandro Genua
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Time period covered
    Dec 30, 2024
    Description

    The Robot Motion Dataset contains experimental data from a study investigating human-robot interaction in which a human leader controlled a teleoperated follower robot (TFR). Thirty participants controlled the TFR while wearing a virtual reality (VR) headset and using a rudder platform. The dataset includes sensor data such as accelerometer and gyroscope readings, trajectories, distances to objects, questionnaire responses, and more.

    The dataset was used in the work presented in the following article: ASAP.

  5. Replication Data for: Exploring Human-Robot Cooperation with Gamified User...

    • search.dataone.org
    • dataverse.no
    Updated May 22, 2025
    + more versions
    Cite
    Venås, Gizem Ateş (2025). Replication Data for: Exploring Human-Robot Cooperation with Gamified User Training: A User Study on Cooperative Lifting [Dataset]. http://doi.org/10.18710/CZGZVZ
    Explore at:
    Dataset updated
    May 22, 2025
    Dataset provided by
    DataverseNO
    Authors
    Venås, Gizem Ateş
    Time period covered
    Oct 1, 2022 - Nov 1, 2022
    Description

    This dataset contains data within the field of human-robot cooperation. It is used in the article "Exploring Human-Robot Collaboration with Gamified User Training: A Study on Cooperative Lifting". There are two folders holding two distinct types of data. 1) Experiment Data: this folder holds each user's real-time motion and score data, as well as robot motion data, for each trial. There are subfolders, each named by the anonymous user ID. CSV format is used for the data. 2) Survey Data: this folder contains the questionnaires given to users before and after the experiment in PDF format, together with the answers in CSV format. Note that some answers that might reveal a user's identity were removed or coded under a category name. For instance, if a user answered "XXX engineer at XXX company", this answer is shown as "IT/Engineering" only.

  6. THU-HRIA dataset (Human-Robot Interactive Action Dataset From The Perspective...

    • ieee-dataport.org
    Updated Mar 15, 2023
    Cite
    Jazon Wang (2023). THU-HRIA dataset(Human-Robot Interactive Action Dataset From The Perspective Of Service Robot) [Dataset]. https://ieee-dataport.org/documents/thu-hria-datasethuman-robot-interactive-action-dataset-perspective-service-robot
    Explore at:
    Dataset updated
    Mar 15, 2023
    Authors
    Jazon Wang
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Kinetics series)

  7. Replication Data for: Exploring Human-Robot Cooperation with Gamified User...

    • b2find.eudat.eu
    Updated Jul 23, 2025
    + more versions
    Cite
    (2025). Replication Data for: Exploring Human-Robot Cooperation with Gamified User Training: A User Study on Cooperative Lifting - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/a32d09c8-302f-5a46-9814-437abadc0620
    Explore at:
    Dataset updated
    Jul 23, 2025
    Description

    This dataset contains data within the field of human-robot cooperation. It is used in the article "Exploring Human-Robot Collaboration with Gamified User Training: A Study on Cooperative Lifting". There are two folders holding two distinct types of data. 1) Experiment Data: this folder holds each user's real-time motion and score data, as well as robot motion data, for each trial. There are subfolders, each named by the anonymous user ID. CSV format is used for the data. 2) Survey Data: this folder contains the questionnaires given to users before and after the experiment in PDF format, together with the answers in CSV format. Note that some answers that might reveal a user's identity were removed or coded under a category name. For instance, if a user answered "XXX engineer at XXX company", this answer is shown as "IT/Engineering" only. ROS, Noetic

  8. PE-HRI: A Multimodal Dataset for the study of Productive Engagement in a...

    • zenodo.org
    • explore.openaire.eu
    • +2more
    bin, csv
    Updated Mar 8, 2022
    Cite
    Jauwairia Nasir; Utku Norman; Barbara Bruno; Mohamed Chetouani; Pierre Dillenbourg (2022). PE-HRI: A Multimodal Dataset for the study of Productive Engagement in a robot mediated Collaborative Educational Setting [Dataset]. http://doi.org/10.5281/zenodo.4633092
    Explore at:
    Available download formats: bin, csv
    Dataset updated
    Mar 8, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jauwairia Nasir; Utku Norman; Barbara Bruno; Mohamed Chetouani; Pierre Dillenbourg
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data set consists of engagement-related multi-modal team behaviors and learning outcomes collected in the context of a robot-mediated collaborative and constructivist learning activity called JUSThink [1,2]. The data set can be useful for those looking to explore or validate theoretical models of engagement. The dataset is inspired by our efforts to critically assess engagement modelling in educational HRI contexts, which eventually led us to propose the concept of 'Productive Engagement'. More on this can be found in [3,4,5].

    The JUSThink platform consists of two screens and a QTrobot acting as a guide and a mediator. The platform aims to (1) improve the computational skills of children by imparting intuitive knowledge about minimum-spanning-tree problems and (2) promote collaboration within the team via its design. As an experimental setup for HRI studies, it also serves as a platform for designing and evaluating robot behaviors that are effective for pedagogical goals or for general HRI problems such as trust, robot perception, engagement, and collaboration. The minimum-spanning-tree problem is introduced through a gold mining scenario based on a map of Switzerland, where mountains represent gold mines labelled with Swiss city names.

    The features in the dataset are grounded and motivated by the engagement literature in HRI and Intelligent Tutoring Systems. The dataset consists of team level data collected from 34 teams of two (68 children) where the children are aged between 9 and 12. More specifically, it contains:

    • PE-HRI:behavioral.csv: This file consists of team-level multi-modal behavioral data, namely log data that captures interaction with the setup, speech behavior, affective states, and gaze patterns. The definition of each feature is given below:

      • T_add: The number of times a team added an edge on the map.

      • T_remove: The number of times a team removed an edge from the map.

      • T_ratio_add_del: The ratio of addition of edges over deletion of edges by a team.

      • T_action: The total number of actions taken by a team (add, delete, submit, presses on the screen).

      • T_hist: The number of times a team opened the sub-window with history of their previous solutions.

      • T_help: The number of times a team opened the instructions manual. Please note that the robot initially gives all the instructions before the game-play while a video is played for demonstration of the functionality of the game.

      • T1_T1_rem: The number of times a team, either member, followed the pattern consecutively: I add an edge, I then delete it.

      • T1_T1_add: The number of times a team, either member, followed the pattern consecutively: I delete an edge, I add it back.

      • T1_T2_rem: The number of times a team, either member, followed the pattern consecutively: I add an edge, you then delete it.

      • T1_T2_add: The number of times a team, either member, followed the pattern consecutively: I delete an edge, you add it back.

      • redundant_exist: The number of times the team had redundant edges in their map.

      • positive_valence: The average value of positive valence for the team.

      • negative_valence: The average value of negative valence for the team.

      • mean_pos_minus_neg_valence: The difference of the average value of positive and negative valence for the team.

      • arousal: The average value of arousal for the team.

      • smile_count: The average percentage of time of a team smiling.

      • at_partner: The average percentage of time a team has a team member looking at their partner.

      • at_robot: The average percentage of time a team is looking at the robot.

      • other: The average percentage of time a team is looking in the direction opposite to the robot.

      • screen_left: The average percentage of time a team is looking at the left side of the screen.

      • screen_right: The average percentage of time a team is looking at the right side of the screen.

      • screen_right_left_ratio: The ratio of looking at the right side of the screen over the left side.

      • voice_activity: The average percentage of time a team is speaking over the entire duration of the task.

      • silence: The average percentage of time a team is silent over the entire duration of the task.

      • short_pauses: The average percentage of time a team pauses briefly (0.15 sec) over their speech activity.

      • long_pauses: The average percentage of time a team makes long pauses (1.5 sec) over their speech activity.

      • overlap: The average percentage of time the speech of the team members overlaps over the entire duration of the task.

      • overlap_to_speech_ratio: The ratio of the speech overlap over the speech activity (voice_activity) of the team.

    • PE-HRI:learning_and_performance.csv: This file consists of the team level performance and learning metrics which are defined below:

      • last_error: This is the error of the last submitted solution. Note that if a team has found an optimal solution (error = 0) the game stops, therefore making last error = 0. This is a metric for performance in the task.

      • T_LG_absolute: It is a team-level learning outcome that we calculate by taking the average of the two individual absolute learning gains of the team members. The individual absolute gain is the difference between a participant’s post-test and pre-test score, divided by the maximum score that can be achieved (10), which captures how much the participant learned of all the knowledge available.

      • T_LG_relative: It is a team-level learning outcome that we calculate by taking the average of the two individual relative learning gains of the team members. The individual relative gain is the difference between a participant’s post-test and pre-test score, divided by the difference between the maximum score that can be achieved and the pre-test score. This captures how much the participant learned of the knowledge that he/she didn’t possess before the activity.

      • T_LG_joint_abs: It is a team-level learning outcome defined as the difference between the number of questions that both of the team members answer correctly in the post-test and in the pre-test, which captures the amount of knowledge acquired together by the team members during the activity.

    More details on the JUSThink learning activity can be found in the linked identifiers (present in a tab on the right side of the page). Lastly, a temporal version of this data is also available [6].
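
    To make the learning-gain definitions above concrete, here is a minimal sketch with made-up pre-test/post-test scores (the maximum score of 10 is from the description). The joint absolute gain is omitted because it requires per-question correctness, which is not shown here.

```python
# Minimal sketch of the learning-gain definitions above, using made-up scores.
# Individual test scores are out of a maximum of 10, as stated in the description.
MAX_SCORE = 10

def absolute_gain(pre, post):
    # How much of all available knowledge was gained.
    return (post - pre) / MAX_SCORE

def relative_gain(pre, post):
    # How much of the initially missing knowledge was gained.
    return (post - pre) / (MAX_SCORE - pre)

# Hypothetical team of two members: (pre-test, post-test) scores.
member_scores = [(4, 7), (6, 9)]

t_lg_absolute = sum(absolute_gain(p, q) for p, q in member_scores) / 2  # -> 0.30
t_lg_relative = sum(relative_gain(p, q) for p, q in member_scores) / 2  # -> 0.625
print(t_lg_absolute, t_lg_relative)
```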

  9. Data from: Kaiwu

    • scidb.cn
    Updated Oct 21, 2024
    + more versions
    Cite
    Jiang Shuo; Li Haonan; Ren Ruochen; Zhou Yanmin; He Bin; Wang Zhipeng (2024). Kaiwu [Dataset]. http://doi.org/10.57760/sciencedb.14901
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 21, 2024
    Dataset provided by
    Science Data Bank
    Authors
    Jiang Shuo; Li Haonan; Ren Ruochen; Zhou Yanmin; He Bin; Wang Zhipeng
    Description

    The dataset provides an integrated human-environment-robot data collection framework with 20 subjects and 30 interaction objects, resulting in a total of 11,664 instances of integrated actions. For each demonstration, hand motions, operation pressures, sounds of the assembly process, multi-view videos, high-precision motion capture information, eye gaze with first-person videos, and electromyography signals are all recorded. Fine-grained multi-level annotation based on absolute timestamps and semantic segmentation labelling are performed. The Kaiwu dataset aims to facilitate robot learning, dexterous manipulation, human intention investigation, and human-robot collaboration research.
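
    Since the annotation is keyed to absolute timestamps, the different modalities can be aligned by nearest-timestamp joins. Below is a hypothetical sketch of this kind of alignment with pandas; the column names, sample values, and tolerance are assumptions, not the dataset's actual schema.

```python
# Hypothetical sketch: align two multimodal streams by absolute timestamp.
# Column names and sample values are assumptions, not the Kaiwu schema.
import pandas as pd

glove = pd.DataFrame({"timestamp": [0.00, 0.01, 0.02, 0.03],
                      "finger_flex": [0.10, 0.20, 0.25, 0.30]})
emg = pd.DataFrame({"timestamp": [0.005, 0.015, 0.025],
                    "emg_rms": [0.40, 0.50, 0.45]})

# Nearest-timestamp join; both frames must be sorted by the key column.
aligned = pd.merge_asof(glove.sort_values("timestamp"),
                        emg.sort_values("timestamp"),
                        on="timestamp", direction="nearest",
                        tolerance=0.01)
print(aligned)
```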

  10. Dataset for: "Incremental Semiparametric Inverse Dynamics Learning"

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 24, 2020
    Cite
    Nori, Francesco (2020). Dataset for: "Incremental Semiparametric Inverse Dynamics Learning" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_344894
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Nori, Francesco
    Metta, Giorgio
    Rosasco, Lorenzo
    Traversaro, Silvio
    Camoriano, Raffaello
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset used in the experimental section of the paper:

    R. Camoriano, S. Traversaro, L. Rosasco, G. Metta and F. Nori, "Incremental semiparametric inverse dynamics learning," 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, 2016, pp. 544-550.

    doi: 10.1109/ICRA.2016.7487177

    Abstract: This paper presents a novel approach for incremental semiparametric inverse dynamics learning. In particular, we consider the mixture of two approaches: Parametric modeling based on rigid body dynamics equations and nonparametric modeling based on incremental kernel methods, with no prior information on the mechanical properties of the system. The result is an incremental semiparametric approach, leveraging the advantages of both the parametric and nonparametric models. We validate the proposed technique learning the dynamics of one arm of the iCub humanoid robot.

    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7487177&isnumber=7487087

    Description

    The file "iCubDyn_2.0.mat" contains data collected from the right arm of the iCub humanoid robot, considering as input the positions, velocities and accelerations of the 3 shoulder joints and of the elbow joint, and as outputs the 3 force and 3 torque components measured by the six-axis F/T sensor in-built in the upper arm.

    The dataset is collected at 10 Hz as the end-effector tracks circumferences with a 10 cm radius on the transverse (XY) and sagittal (XZ) planes (for more information on the iCub reference frames, see [4]) at approximately 0.6 m/s. The total number of points for each dataset is 10000, corresponding to approximately 17 minutes of continuous operation. Trajectories are generated by means of the Cartesian Controller presented in [5].

    Input (X)

      • Columns 1-4: joint positions (3 shoulder joints + 1 elbow joint)
      • Columns 5-8: joint velocities (3 shoulder joints + 1 elbow joint)
      • Columns 9-12: joint accelerations (3 shoulder joints + 1 elbow joint)

    Output (Y)

      • Columns 1-3: measured forces (N) along the X, Y, Z axes by the force-torque (F/T) sensor placed in the upper arm
      • Columns 4-6: measured torques (N*m) along the X, Y, Z axes by the force-torque (F/T) sensor placed in the upper arm

    Preprocessing

    • Velocities and accelerations are computed by an Adaptive Window Polynomial Fitting Estimator, implemented through a least-squares based algorithm on an adaptive window (see [2], [3]). Velocity estimation max window size: 16. Acceleration estimation max window size: 25.
    • Positions, velocities and accelerations are recorded at 9 Hz and oversampled to 20 Hz via cubic spline interpolation.
    • Forces and torques are directly recorded at 20 Hz.

    This dataset was used in [1] for experimental purposes. See section IV therein for further details.
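
    A minimal sketch for loading the file and splitting it according to the column layout above follows. The assumption that the .mat file exposes arrays named "X" and "Y" is ours; inspect the file (e.g. with scipy.io.whosmat) to confirm the actual variable names.

```python
# Minimal sketch for loading iCubDyn_2.0.mat and splitting the columns described
# above. The variable names "X" and "Y" inside the .mat file are assumptions;
# check them with scipy.io.whosmat("iCubDyn_2.0.mat") first.
from scipy.io import loadmat

data = loadmat("iCubDyn_2.0.mat")
X, Y = data["X"], data["Y"]   # expected shapes: (10000, 12) and (10000, 6)

q   = X[:, 0:4]     # joint positions (3 shoulder joints + 1 elbow joint)
dq  = X[:, 4:8]     # joint velocities
ddq = X[:, 8:12]    # joint accelerations

forces  = Y[:, 0:3]  # F/T sensor forces (N) along X, Y, Z
torques = Y[:, 3:6]  # F/T sensor torques (N*m) along X, Y, Z
print(q.shape, forces.shape)
```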

    For more information, please contact: Raffaello Camoriano - raffaello.camoriano@iit.it Silvio Traversaro - silvio.traversaro@iit.it

    References

    [1] R. Camoriano, S. Traversaro, L. Rosasco, G. Metta, F. Nori, "Incremental Semiparametric Inverse Dynamics Learning", arXiv:1601.04549, Jan 2016.
    [2] F. Janabi-Sharifi, V. Hayward, C.-S. J. Chen, "Discrete-time adaptive windowing for velocity estimation", IEEE Transactions on Control Systems Technology, vol. 8, no. 6, pp. 1003-1009, Nov 2000.
    [3] https://github.com/robotology/icub-main/blob/master/src/libraries/ctrlLib/include/iCub/ctrl/adaptWinPolyEstimator.h
    [4] http://wiki.icub.org/wiki/ICubForwardKinematics
    [5] U. Pattacini, F. Nori, L. Natale, G. Metta, G. Sandini, "An experimental evaluation of a novel minimum-jerk Cartesian controller for humanoid robots", in Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, Oct 2010, pp. 1668–1674.

  11. DataSheet1_DeepClaw 2.0: A Data Collection Platform for Learning Human...

    • frontiersin.figshare.com
    pdf
    Updated Jun 4, 2023
    Cite
    Haokun Wang; Xiaobo Liu; Nuofan Qiu; Ning Guo; Fang Wan; Chaoyang Song (2023). DataSheet1_DeepClaw 2.0: A Data Collection Platform for Learning Human Manipulation.PDF [Dataset]. http://doi.org/10.3389/frobt.2022.787291.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    Frontiers
    Authors
    Haokun Wang; Xiaobo Liu; Nuofan Qiu; Ning Guo; Fang Wan; Chaoyang Song
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Besides direct interaction, human hands are also skilled at using tools to manipulate objects for typical life and work tasks. This paper proposes DeepClaw 2.0 as a low-cost, open-sourced data collection platform for learning human manipulation. We use an RGB-D camera to visually track the motion and deformation of a pair of soft finger networks on a modified kitchen tong operated by human teachers. These fingers can be easily integrated with robotic grippers to bridge the structural mismatch between humans and robots during learning. The deformation of soft finger networks, which reveals tactile information in contact-rich manipulation, is captured passively. We collected a comprehensive sample dataset involving five human demonstrators in ten manipulation tasks with five trials per task. As a low-cost, open-sourced platform, we also developed an intuitive interface that converts the raw sensor data into state-action data for imitation learning problems. For learning-by-demonstration problems, we further demonstrated our dataset’s potential by using real robotic hardware to collect joint actuation data, or a simulated environment when access to the hardware is limited.

  12. Real-world human-robot interaction data with robotic pets in user homes in...

    • dataone.org
    Updated Jul 8, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Casey Bennett; Selma Sabanovic; Cedomir Stanojevic; Jennifer Piatt; Zachary Henkel; Kenna Baugus; Cindy Bethel; Seongcheol Kim; Jinjae Lee (2023). Real-world human-robot interaction data with robotic pets in user homes in the United States and South Korea [Dataset]. http://doi.org/10.5061/dryad.tb2rbp078
    Explore at:
    Dataset updated
    Jul 8, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Casey Bennett; Selma Sabanovic; Cedomir Stanojevic; Jennifer Piatt; Zachary Henkel; Kenna Baugus; Cindy Bethel; Seongcheol Kim; Jinjae Lee
    Time period covered
    Jan 1, 2023
    Area covered
    South Korea
    Description

    Socially-assistive robots (SARs) hold significant potential to transform the management of chronic healthcare conditions (e.g. diabetes, Alzheimer’s, dementia) outside the clinic walls. However, doing so entails embedding such autonomous robots into people’s daily lives and home living environments, which are deeply shaped by the cultural and geographic locations within which they are situated. This raises the question of whether we can design autonomous interactive behaviors between SARs and humans based on universal machine learning (ML) and deep learning (DL) models of robotic sensor data that would work across such diverse environments. To investigate this, we conducted a long-term user study with 26 participants across two diverse locations (the United States and South Korea) with SARs deployed in each user’s home for several weeks. We collected robotic sensor data every second of every day, combined with sophisticated ecological momentary assessment (EMA) sampling techniques, to gene...

  13. Dataset for "Deep learning based robot cognitive architecture for...

    • researchdata.bath.ac.uk
    txt, zip
    Updated Apr 4, 2023
    Cite
    James Male; Uriel Martinez Hernandez (2023). Dataset for "Deep learning based robot cognitive architecture for collaborative assembly tasks" [Dataset]. http://doi.org/10.15125/BATH-01161
    Explore at:
    Available download formats: txt, zip
    Dataset updated
    Apr 4, 2023
    Dataset provided by
    University of Bath
    Authors
    James Male; Uriel Martinez Hernandez
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    Engineering and Physical Sciences Research Council
    Description

    Dataset used for training the action recognition method and the task status prediction method for use in a human-robot collaborative assembly scenario. Inertial measurement unit and skeleton tracking data are included while users perform assembly tasks and gestures. The dataset has two parts: 1) human action recognition, where sensor data is recorded along with the ground truth of the action being performed at that time; 2) task status prediction, where sensor data is recorded along with how long until each action in the task is completed.

  14. Educational Robots Industry Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Mar 4, 2025
    Cite
    Data Insights Market (2025). Educational Robots Industry Report [Dataset]. https://www.datainsightsmarket.com/reports/educational-robots-industry-14393
    Explore at:
    Available download formats: doc, pdf, ppt
    Dataset updated
    Mar 4, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The educational robotics market is experiencing robust growth, driven by increasing demand for innovative teaching methods and the rising adoption of technology in classrooms globally. With a Compound Annual Growth Rate (CAGR) of 16% from 2019 to 2033, the market is projected to reach substantial size by 2033. Key drivers include the need to enhance student engagement and improve STEM (Science, Technology, Engineering, and Mathematics) education outcomes. Trends such as the integration of AI and personalized learning experiences within educational robots are further fueling this growth. While factors like the high initial cost of implementation and the need for teacher training can pose restraints, the long-term benefits in terms of improved learning outcomes are outweighing these challenges. The market is segmented into humanoid and non-humanoid robots, each catering to specific educational needs and age groups. Leading companies like UBTECH Robotics, SoftBank Robotics, and others are continuously innovating, introducing new features and functionalities to enhance the capabilities and appeal of educational robots. The market is geographically diversified, with significant contributions from North America, Europe, and the Asia-Pacific region, reflecting the global adoption of technological advancements in education. The market's expansion is expected to continue as technological improvements make educational robots more accessible, affordable, and integrated into standard curriculum. The growth of the educational robotics market is fueled by several factors including government initiatives promoting STEM education, the increasing availability of affordable and user-friendly robotic platforms, and a growing awareness among educators about the benefits of integrating robotics into the curriculum. The market is witnessing a shift towards more sophisticated robots with advanced AI capabilities that can personalize learning experiences and adapt to individual student needs. This trend is expected to drive further growth in the coming years. The segment of humanoid robots is expected to witness faster growth due to its ability to provide a more engaging and interactive learning experience compared to non-humanoid robots. Regions like Asia-Pacific are expected to exhibit higher growth rates due to increasing government investments in education and rapid technological advancements. The market is also witnessing increased collaborations between educational institutions, robotics companies, and technology providers to develop comprehensive solutions that seamlessly integrate robots into the learning process. This comprehensive report provides a detailed analysis of the educational robots market, offering invaluable insights into its growth trajectory, key players, and emerging trends. The study period covers 2019-2033, with 2025 serving as the base and estimated year. We project market values in millions, focusing on the forecast period (2025-2033) and leveraging data from the historical period (2019-2024). This report is essential for investors, educators, robotics companies, and anyone seeking a deep understanding of this rapidly evolving sector. Keywords: educational robots market size, educational robotics industry trends, humanoid robots in education, non-humanoid robots education, educational robot market share, educational robots market analysis, robotics in education. 
    Recent developments include: August 2022: Google is focusing on aiding the market's requirements for humanoid robots; researchers in Google's lab have developed AI software that incorporates large language models to help humans achieve everyday tasks. Such research is also expected to drive changes in homeschooling in the near future. March 2022: ElliQ, Intuition Robotics' digital care companion, was officially commercialized and is presently offered for purchase at elliq.com. Key drivers for this market are: Technological Advancement in the Field of Robotics, Increasing Demand for Humanoid Robots. Potential restraints include: High Initial R&D Expenditure. Notable trends are: Humanoid Robots Expected to Witness Significant Growth.

  15. PH2D

    • huggingface.co
    Updated Mar 14, 2025
    Cite
    Roger Qiu (2025). PH2D [Dataset]. https://huggingface.co/datasets/RogerQi/PH2D
    Explore at:
    Dataset updated
    Mar 14, 2025
    Authors
    Roger Qiu
    License

    MIT License, https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Dataset Description

    This dataset contains egocentric human-humanoid data that can be used to co-train manipulation policies for humanoid robots. For the paper, visualization, and example code using this dataset, please refer to https://human-as-robot.github.io/ (arXiv: https://arxiv.org/abs/2503.13441).

      Task Descriptions
    

    We provide some example dataset config files that we used for our robots here.

    Data organizations

    Each folder represents one task. Most HDF5 files… See the full description on the dataset page: https://huggingface.co/datasets/RogerQi/PH2D.
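
    Since each task folder holds HDF5 files, a generic way to inspect one is sketched below; the file path is a placeholder and the internal group and dataset names depend on the actual files (see the dataset page for the full description).

```python
# Generic inspection sketch for one PH2D task file. The file path is a
# placeholder, and the internal layout depends on the actual dataset files.
import h5py

with h5py.File("some_task/episode_0.hdf5", "r") as f:
    def show(name, obj):
        # Print the path, shape, and dtype of every dataset in the file.
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)
```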

  16. Data from: Humanoid robot as an educational assistant – insights of speech...

    • tandf.figshare.com
    xlsx
    Updated Mar 11, 2025
    Cite
    Akshara Pande; Deepti Mishra (2025). Humanoid robot as an educational assistant – insights of speech recognition for online and offline mode of teaching [Dataset]. http://doi.org/10.6084/m9.figshare.25712733.v1
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Mar 11, 2025
    Dataset provided by
    Taylor & Francis
    Authors
    Akshara Pande; Deepti Mishra
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Technology has the potential to enhance the effectiveness of the teaching and learning process. With the integration of technological resources, educators can create dynamic and interactive learning environments that offer diverse learning methods. With the help of these resources, students may be able to understand any topic deeply. Incorporating humanoid robots provides a valuable approach that combines the benefits of technology with the personal touch of human interaction. The role of speech is important in education; students might face challenges due to accent and auditory problems. The display of the text on the robot's screen can be beneficial for students to understand the speech better. In the present study, our objective is to integrate speech transcription with the humanoid robot Pepper and to explore its performance as an educational assistant in online and offline modes of teaching. The findings of this study suggest that Pepper's speech recognition system is a suitable candidate for both modes of teaching, regardless of the participant's gender. We expect that integrating humanoid robots into education may lead to more adaptive and efficient teaching and learning, resulting in improved learning outcomes and a richer educational experience.

  17. Bipedal Humanoid Robots Market Report | Global Forecast From 2025 To 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 23, 2024
    Cite
    Dataintelo (2024). Bipedal Humanoid Robots Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/global-bipedal-humanoid-robots-market
    Explore at:
    Available download formats: pdf, csv, pptx
    Dataset updated
    Sep 23, 2024
    Authors
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Bipedal Humanoid Robots Market Outlook



    The global market size for bipedal humanoid robots is projected to grow significantly from an estimated $2.5 billion in 2023 to a forecasted $12.3 billion by 2032, boasting a compound annual growth rate (CAGR) of 19.4%. This tremendous growth can be attributed to advancements in robotics technology, increasing demand for automation across various sectors, and rising investments in research and development.
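
    As a quick back-of-the-envelope check of the stated figures (ours, not the report's): growing from $2.5 billion in 2023 to $12.3 billion in 2032 spans nine years, so the implied compound annual growth rate is

    \[ \left(\frac{12.3}{2.5}\right)^{1/9} - 1 \approx 0.194, \]

    consistent with the quoted 19.4% CAGR.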



    One of the primary growth factors driving the bipedal humanoid robots market is the rapid technological advancements in artificial intelligence (AI) and machine learning (ML). These advancements have significantly enhanced the capabilities of humanoid robots, making them more efficient, versatile, and intelligent. For instance, improved AI algorithms allow these robots to perform complex tasks such as recognizing human emotions, making decisions based on data analysis, and learning from their interactions. These capabilities are crucial in applications like healthcare, where robots can assist in patient care, and in education, where they can provide personalized learning experiences.



    Another significant growth driver is the increasing demand for automation in various industries. Industrial sectors are increasingly adopting humanoid robots for tasks that require precision, consistency, and efficiency. These robots are particularly useful in manufacturing settings for assembling products, quality control, and logistics. In the defense sector, bipedal humanoid robots are being employed for reconnaissance missions, bomb disposal, and search and rescue operations. The ability of these robots to navigate complex terrains and environments makes them invaluable assets in scenarios where human safety is a concern.



    Investment in research and development (R&D) is also playing a pivotal role in the market's expansion. Governments, research institutions, and private enterprises are investing heavily in R&D to overcome technical challenges and improve the functionality of humanoid robots. Innovations such as advanced sensor technologies, better battery life, and more robust materials are making these robots more practical and cost-effective. These investments are not only driving technological advancements but are also making humanoid robots more accessible to a broader range of end-users, including commercial enterprises and individuals.



    The regional outlook for the bipedal humanoid robots market indicates robust growth across various geographies. North America is expected to hold a significant share of the market due to substantial investments in technology and a high adoption rate of automation. Asia Pacific is anticipated to witness the fastest growth, driven by countries like Japan and South Korea, which are leaders in robotics technology. Europe also presents lucrative opportunities with its strong focus on industrial automation and innovation in robotics.



    Component Analysis



    The component segment of the bipedal humanoid robots market can be divided into hardware, software, and services. The hardware component includes the physical parts of the robot such as sensors, actuators, and power systems. The software component encompasses the algorithms, machine learning models, and operating systems that control the robot's functions. The services component involves maintenance, repair, and other support services necessary for the robot's optimal performance.



    In the hardware segment, advancements in sensor technology and materials science are playing a crucial role. Sensors are becoming more accurate and sensitive, allowing robots to better perceive their environment. This enhances their ability to navigate, avoid obstacles, and interact with humans safely. Additionally, innovations in materials science are leading to the development of lightweight and durable materials, making robots more efficient in their energy consumption and increasing their operational lifespan.



    The software segment is witnessing rapid growth due to continuous improvements in AI and ML technologies. The development of more sophisticated algorithms enables robots to perform a wider range of tasks with greater accuracy. For example, natural language processing (NLP) allows robots to understand and respond to human speech, making them more interactive and user-friendly. Machine learning models enable robots to learn from experience, improving their performance over time.



    The services segment is also growing, driven by the need for regular maintenance and updates t

  18. emmanuel-senft/data-foodchain: Release to be archived on Zenodo

    • zenodo.org
    • data.europa.eu
    zip
    Updated Jan 24, 2020
    Cite
    Emmanuel Senft; Séverin Lemaignan (2020). emmanuel-senft/data-foodchain: Release to be archived on Zenodo [Dataset]. http://doi.org/10.5281/zenodo.2733196
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Emmanuel Senft; Séverin Lemaignan
    Description

    This folder contains the data, files to analyse data and generated figures for the paper: Teaching robots social autonomy from in-situ human guidance.

    More on the dataset website: https://github.com/emmanuel-senft/data-foodchain

    Please read data/README.md inside the dataset zip file.

  19. Humanoid Service Robots Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated Jul 5, 2025
    + more versions
    Cite
    Archive Market Research (2025). Humanoid Service Robots Report [Dataset]. https://www.archivemarketresearch.com/reports/humanoid-service-robots-498634
    Explore at:
    Available download formats: pdf, ppt, doc
    Dataset updated
    Jul 5, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global humanoid service robot market is experiencing robust growth, driven by increasing demand across diverse sectors like healthcare, hospitality, and education. Technological advancements in artificial intelligence (AI), machine learning (ML), and sensor technologies are fueling this expansion, enabling robots to perform increasingly complex tasks with greater autonomy and dexterity. While precise market sizing data is unavailable, considering a conservative estimate based on current market trends and the presence of established players like SoftBank Robotics and UBTECH, we can project a 2025 market size of approximately $2 billion. Assuming a Compound Annual Growth Rate (CAGR) of 25% – a figure reflective of the rapid innovation and adoption rates in the robotics sector – the market is poised to reach a significant valuation by 2033. This growth is further amplified by the rising adoption of service robots in various industries to address labor shortages, improve operational efficiency, and enhance customer experience. The market is segmented based on application (healthcare, hospitality, etc.), robot type (size, functionality), and geographic region. While specific regional data is currently unavailable, North America and Europe are anticipated to hold significant market shares, given their advanced technological infrastructure and early adoption of robotics solutions. However, the Asia-Pacific region is expected to witness substantial growth over the forecast period, propelled by increasing investments in research and development and a rising demand for automated services. Despite the promising outlook, certain restraints remain, including high initial investment costs, concerns regarding data privacy and security, and the need for robust regulatory frameworks. Overcoming these challenges will be crucial for sustained market expansion and broader acceptance of humanoid service robots.

  20. Humanoid Educational Robot Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jun 1, 2025
    Cite
    Data Insights Market (2025). Humanoid Educational Robot Report [Dataset]. https://www.datainsightsmarket.com/reports/humanoid-educational-robot-632499
    Explore at:
    Available download formats: pdf, doc, ppt
    Dataset updated
    Jun 1, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global humanoid educational robot market is experiencing significant growth, driven by increasing demand for innovative teaching tools and the rising adoption of robotics in education. While precise market size figures for 2025 aren't provided, considering a conservative CAGR (Compound Annual Growth Rate) of 15% from a hypothetical 2019 market size of $500 million (a reasonable estimate given the emerging nature of this technology), the market value in 2025 would be approximately $1.2 billion. This growth is fueled by several key factors. Firstly, the enhanced engagement and personalized learning experiences offered by humanoid robots are proving highly effective in capturing students' attention and improving educational outcomes across various age groups. Secondly, advancements in artificial intelligence (AI) and machine learning are enabling these robots to adapt to individual learning styles, making education more inclusive and effective. Thirdly, governments and educational institutions are increasingly investing in technology upgrades, recognizing the potential of humanoid robots to address teacher shortages and improve the overall quality of education. However, the market faces certain restraints. High initial investment costs associated with purchasing and maintaining these robots remain a significant barrier for many schools and institutions, especially in developing countries. Furthermore, concerns regarding data privacy and ethical implications of using AI-powered robots in education need to be addressed to ensure responsible adoption. The market is segmented by robot type (e.g., interactive, programmable), age group (primary, secondary, tertiary), and application (STEM education, language learning, special needs education). Key players such as SoftBank Robotics, Ubtech Robotics, and others are actively shaping the market landscape through continuous innovation and product diversification. The forecast period (2025-2033) anticipates continued growth, driven by ongoing technological advancements and increased acceptance of humanoid robots as valuable educational aids.
