90 datasets found
  1. HARSense: Statistical Human Activity Recognition Dataset

    • ieee-dataport.org
    Updated Jul 15, 2021
    Cite
    Nurul Choudhury (2021). HARSense: Statistical Human Activity Recognition Dataset [Dataset]. https://ieee-dataport.org/open-access/harsense-statistical-human-activity-recognition-dataset
    Explore at:
    Dataset updated
    Jul 15, 2021
    Authors
    Nurul Choudhury
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Context: This dataset consists of subject-wise daily living activity data

  2. HAR Dataset

    • paperswithcode.com
    Updated May 17, 2023
    Cite
    Davide Anguita; Alessandro Ghio; Luca Oneto; Xavier Parra; Jorge Luis Reyes-Ortiz (2023). HAR Dataset [Dataset]. https://paperswithcode.com/dataset/har
    Explore at:
    Dataset updated
    May 17, 2023
    Authors
    Davide Anguita; Alessandro Ghio; Luca Oneto; Xavier Parra; Jorge Luis Reyes-Ortiz
    Description

    The Human Activity Recognition Dataset has been collected from 30 subjects performing six different activities (Walking, Walking Upstairs, Walking Downstairs, Sitting, Standing, Laying). It consists of inertial sensor data that was collected using a smartphone carried by the subjects.

  3. The FORTH-TRACE dataset for human activity recognition of simple activities...

    • zenodo.org
    • explore.openaire.eu
    • +1 more
    zip
    Updated Jan 24, 2020
    Cite
    Katerina Karagiannaki; Athanasia Panousopoulou; Panagiotis Tsakalides (2020). The FORTH-TRACE dataset for human activity recognition of simple activities and postural transitions using a Body Area Network [Dataset]. http://doi.org/10.5281/zenodo.841301
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Katerina Karagiannaki; Athanasia Panousopoulou; Panagiotis Tsakalides
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    The dataset is collected from 15 participants wearing 5 Shimmer wearable sensor nodes on the locations listed in Table 1. The participants performed a series of 16 activities (7 basic and 9 postural transitions), listed in Table 2.

    The captured signals are the following:

    • 3-axis accelerometer
    • 3-axis gyroscope
    • 3-axis magnetometer

    The sampling rate of the devices is set to 51.2 Hz.

    DATASET FILES

    The dataset contains the following files:

    • partX/partXdev1.csv
    • partX/partXdev2.csv
    • partX/partXdev3.csv
    • partX/partXdev4.csv
    • partX/partXdev5.csv

    Where X corresponds to the participant ID, and numbers 1-5 to the device IDs indicated in Table 1.

    Each .csv file has the following format (a minimal loading sketch follows Table 2 below):

    • Column1: Device ID
    • Column2: accelerometer x
    • Column3: accelerometer y
    • Column4: accelerometer z
    • Column5: gyroscope x
    • Column6: gyroscope y
    • Column7: gyroscope z
    • Column8: magnetometer x
    • Column9: magnetometer y
    • Column10: magnetometer z
    • Column11: Timestamp
    • Column12: Activity Label

    Table 1: LOCATIONS

    1. Left Wrist
    2. Right Wrist
    3. Torso
    4. Right Thigh
    5. Left Ankle

    Table 2: ACTIVITY LABELS

    (Arrows (->) indicate transitions between activities)

    1. stand
    2. sit
    3. sit and talk
    4. walk
    5. walk and talk
    6. climb stairs (up/down)
    7. climb stairs (up/down) and talk
    8. stand -> sit
    9. sit -> stand
    10. stand -> sit and talk
    11. sit and talk -> stand
    12. stand -> walk
    13. walk -> stand
    14. stand -> climb stairs (up/down), stand -> climb stairs (up/down) and talk
    15. climb stairs (up/down) -> walk
    16. climb stairs (up/down) and talk -> walk and talk
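
    The following is a minimal loading sketch based on the file naming and column layout described above. It assumes the files have no header row, and the column names and root path are placeholders of my own, not part of the dataset distribution.

```python
# Minimal sketch: load one FORTH-TRACE device file with pandas, using the
# column layout described above. Column names are my own labels; adjust the
# root path to wherever the dataset was extracted. Assumes no header row.
import pandas as pd

COLUMNS = [
    "device_id",
    "acc_x", "acc_y", "acc_z",
    "gyro_x", "gyro_y", "gyro_z",
    "mag_x", "mag_y", "mag_z",
    "timestamp",
    "activity_label",
]

def load_device_file(participant: int, device: int, root: str = ".") -> pd.DataFrame:
    """Read part<X>/part<X>dev<Y>.csv into a DataFrame with named columns."""
    path = f"{root}/part{participant}/part{participant}dev{device}.csv"
    return pd.read_csv(path, header=None, names=COLUMNS)

# Example: participant 1, left-wrist node (device ID 1 per Table 1),
# sampled at 51.2 Hz according to the description above.
# df = load_device_file(participant=1, device=1)
```
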
  4. CSI-BFI-HAR: Wi-Fi Datasets for Human Activity Recognition

    • ieee-dataport.org
    Updated May 22, 2025
    Cite
    Khandaker Foysal Haque (2025). CSI-BFI-HAR: Wi-Fi Datasets for Human Activity Recognition [Dataset]. https://ieee-dataport.org/documents/csi-bfi-har-wi-fi-datasets-human-activity-recognition
    Explore at:
    Dataset updated
    May 22, 2025
    Authors
    Khandaker Foysal Haque
    Description

    Developing robust and domain-adaptive models remains challenging without diverse datasets.

  5. Capture-24: Activity tracker dataset for human activity recognition

    • healthdatagateway.org
    • ora.ox.ac.uk
    unknown
    Updated Feb 7, 2022
    + more versions
    Cite
    University of Oxford (2022). Capture-24: Activity tracker dataset for human activity recognition [Dataset]. http://doi.org/10.5287/bodleian:NGx0JOMP5
    Explore at:
    Available download formats: unknown
    Dataset updated
    Feb 7, 2022
    Dataset authored and provided by
    University of Oxford
    License

    https://ora.ox.ac.uk/objects/uuid:99d7c092-d865-4a19-b096-cc16440cd001

    Description

    This dataset contains Axivity AX3 wrist-worn activity tracker data that were collected from 151 participants in 2014-2016 around the Oxfordshire area. Participants were asked to wear the device in daily living for a period of roughly 24 hours, amounting to a total of almost 4,000 hours. Vicon Autograph wearable cameras and Whitehall II sleep diaries were used to obtain the ground truth activities performed during the period (e.g. sitting watching TV, walking the dog, washing dishes, sleeping), resulting in more than 2,500 hours of labelled data. Accompanying code to analyse this data is available at https://github.com/activityMonitoring/capture24.

    The following papers describe the data collection protocol in full: i) Gershuny J, Harms T, Doherty A, Thomas E, Milton K, Kelly P, Foster C (2020) Testing self-report time-use diaries against objective instruments in real time. Sociological Methodology. doi: 10.1177/0081175019884591; ii) Willetts M, Hollowell S, Aslett L, Holmes C, Doherty A (2018) Statistical machine learning of sleep and physical activity phenotypes from sensor data in 96,220 UK Biobank participants. Scientific Reports 8(1):7961.

    Regarding Data Protection, the Clinical Data Set will not include any direct subject identifiers. However, it is possible that the Data Set may contain certain information that could be used in combination with other information to identify a specific individual, such as a combination of activities specific to that individual ("Personal Data"). Accordingly, in the conduct of the Analysis, users will comply with all applicable laws and regulations relating to information privacy. Further, the user agrees to preserve the confidentiality of, and not attempt to identify, individuals in the Data Set.

  6. Comparative Analysis of Artificial Hydrocarbon Networks and Data-Driven...

    • figshare.com
    html
    Updated Dec 31, 2016
    Cite
    LUIS MIRALLES (2016). Comparative Analysis of Artificial Hydrocarbon Networks and Data-Driven Approaches for Human Activity Recognition [Dataset]. http://doi.org/10.6084/m9.figshare.4508744.v1
    Explore at:
    Available download formats: html
    Dataset updated
    Dec 31, 2016
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    LUIS MIRALLES
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In recent years, advances in computing and sensing technologies have contributed to the development of effective human activity recognition systems. In context-aware and ambient assisted living applications, the classification of body postures and movements aids the development of health systems that improve the quality of life of the disabled and the elderly. In this paper we describe a comparative analysis of data-driven activity recognition techniques against a novel supervised learning technique called artificial hydrocarbon networks (AHN). We show that artificial hydrocarbon networks are suitable for efficient classification of body postures and movements, providing a comparison between their performance and that of other well-known supervised learning methods.

  7. Global Action Recognition Market Research Report: By Recognition Type...

    • wiseguyreports.com
    Updated Jul 18, 2024
    Cite
    Wiseguy Research Consultants Pvt Ltd (2024). Global Action Recognition Market Research Report: By Recognition Type (Object Recognition, Human Activity Recognition, Gesture Recognition, Facial Expression Recognition), By Application (Surveillance, Healthcare, Entertainment, Automotive, Manufacturing), By Technology (Computer Vision, Machine Learning, Deep Learning, Convolutional Neural Networks), By Deployment Mode (Cloud-based, On-premise, Edge-based), By End-User (Government, Enterprise, Consumers) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2032. [Dataset]. https://www.wiseguyreports.com/reports/action-recognition-market
    Explore at:
    Dataset updated
    Jul 18, 2024
    Dataset authored and provided by
    Wiseguy Research Consultants Pvt Ltd
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Jan 7, 2024
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2024
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2023: 16.88 (USD Billion)
    MARKET SIZE 2024: 20.89 (USD Billion)
    MARKET SIZE 2032: 114.8 (USD Billion)
    SEGMENTS COVERED: Recognition Type, Application, Technology, Deployment Mode, End-User, Regional
    COUNTRIES COVERED: North America, Europe, APAC, South America, MEA
    KEY MARKET DYNAMICS: AI-powered surveillance systems; Adoption in healthcare and sports; Edge computing advancements
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: Sony, Microsoft, Alibaba, Huawei, Apple, Tencent, NVIDIA, Samsung, Qualcomm, LG, Intel, Google, Amazon, Baidu, Panasonic
    MARKET FORECAST PERIOD: 2025 - 2032
    KEY MARKET OPPORTUNITIES: Edge computing and 5G network integration; Contactless gesture recognition applications; Healthcare and rehabilitation technologies; Industrial automation and robotics; Retail and customer experience enhancements
    COMPOUND ANNUAL GROWTH RATE (CAGR): 23.74% (2025 - 2032)
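
    As a quick arithmetic check, the 2024 and 2032 market-size figures listed above are consistent with the stated CAGR:

```python
# Sanity check of the reported CAGR using the market sizes listed above:
# growth from 20.89 (2024) to 114.8 (2032) USD billion over 8 years.
start, end, years = 20.89, 114.8, 2032 - 2024
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # ~23.74%, matching the reported figure
```
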
  8. USI-HEAR Dataset

    • zenodo.org
    • data.niaid.nih.gov
    Updated Nov 22, 2024
    Cite
    Matías Laporte; Davide Casnici; Martin Gjoreski; Shkurta Gashi; Silvia Santini; Marc Langheinrich (2024). USI-HEAR Dataset [Dataset]. http://doi.org/10.5281/zenodo.10843791
    Explore at:
    Dataset updated
    Nov 22, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Matías Laporte; Davide Casnici; Martin Gjoreski; Shkurta Gashi; Silvia Santini; Marc Langheinrich
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Aug 2022
    Description

    This is the open repository for the USI-HEAR dataset.

    USI-HEAR is a dataset containing inertial data collected with Nokia Bell Labs' eSense earbuds. The eSense's left unit contains a 6-axis IMU (i.e., a 3-axis accelerometer and a 3-axis gyroscope). The dataset comprises data from 30 participants performing 7 scripted activities (headshaking, nodding, speaking, eating, staying still, walking, and walking while speaking). Each activity was recorded over ~180 seconds. The sampling rate is variable (with a universal lower bound of ~60 Hz) due to Android API limitations; a resampling sketch is shown below.
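
    Because the sampling rate is variable, a common preprocessing step is to resample onto a uniform time grid. The sketch below is illustrative only and assumes hypothetical column names ("timestamp_ms", "acc_x", ...); the actual schema of raw_data.zip may differ.

```python
# Illustrative sketch only: resample variable-rate IMU samples to a uniform
# 60 Hz grid by linear interpolation. Column names are assumptions, not the
# dataset's documented schema.
import numpy as np
import pandas as pd

def resample_to_fixed_rate(df: pd.DataFrame, rate_hz: float = 60.0) -> pd.DataFrame:
    t = df["timestamp_ms"].to_numpy() / 1000.0          # seconds, assumed sorted
    t_uniform = np.arange(t[0], t[-1], 1.0 / rate_hz)   # uniform time grid
    out = {"t": t_uniform}
    for col in ["acc_x", "acc_y", "acc_z", "gyro_x", "gyro_y", "gyro_z"]:
        out[col] = np.interp(t_uniform, t, df[col].to_numpy())
    return pd.DataFrame(out)
```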

    Current contents:

    • raw_data.zip: raw sensor data, with participants' demographic information
    • dataset_preprocessed.zip: pre-processed data used for the corresponding publications (for reproducibility purposes)

    Until the main publication corresponding to the dataset is available, we kindly ask you to contact the first author to request access to the data.

    In future versions, the repository will also include:

    • processed data
      • downsampled versions
      • extracted features
    • code for the data analysis related to the (yet-unpublished) dataset's publication, containing a HAR pipeline analysis with both ML and DL techniques
  9. Dance Classification

    • kaggle.com
    zip
    Updated May 4, 2021
    Cite
    Ng Wei Jie, Brandon (2021). Dance Classification [Dataset]. https://www.kaggle.com/nwjbrandon/human-activity-recognition-dancing
    Explore at:
    Available download formats: zip (7069192 bytes)
    Dataset updated
    May 4, 2021
    Authors
    Ng Wei Jie, Brandon
    Description

    Human Activity Recognition (Dance Classification)

    The data is collected for CG4002 school project. The machine learning part of the project aims to classify different dance moves.

    Content

    There are 9 dance moves - dab, elbowkick, gun, hair, listen, logout, pointhigh, sidepump, and wipetable. The data is in time series and in raw form - yaw, pitch, roll, gyrox, gyroy, gyroz, accx, accy, and accz. The files are labelled according to the corresponding subject, dance move, and trial number; using subject3/listen2.csv as an example, the data in this file belongs to subject 3 performing the dance move listen for the second time (a parsing sketch follows below). The data is collected from a sensor on the right hand for 1 min at 25 Hz. Each subject performs the data collection 3 times for each of the 9 dance moves, starting from a standing-still position.
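
    A small sketch of how the naming convention above could be parsed and a file loaded; the CSV column order and names are assumed from the description, not taken from the dataset documentation.

```python
# Sketch (assumptions noted): parse "subject3/listen2.csv"-style paths into
# (subject, move, trial) and load the nine raw channels described above.
import re
import pandas as pd

CHANNELS = ["yaw", "pitch", "roll", "gyrox", "gyroy", "gyroz", "accx", "accy", "accz"]
PATTERN = re.compile(r"subject(\d+)/([a-z]+)(\d+)\.csv$")

def parse_path(path: str):
    m = PATTERN.search(path)
    if m is None:
        raise ValueError(f"Unexpected file name: {path}")
    return int(m.group(1)), m.group(2), int(m.group(3))  # subject, move, trial

# parse_path("subject3/listen2.csv")  -> (3, "listen", 2)
# df = pd.read_csv("subject3/listen2.csv", header=None, names=CHANNELS)  # header assumed absent
```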

    Acknowledgements

    I would like to thank my teammates in CG4002 Group 18 in 2020/2021 Semester 2 for the time and effort they put into the project.

    Inspiration

    I hope to see different ways of classifying the dance moves.

  10. SONAR: A Nursing Activity Dataset with Inertial Sensors - Machine Learning...

    • zenodo.org
    zip
    Updated Oct 5, 2023
    + more versions
    Cite
    Konak (2023). SONAR: A Nursing Activity Dataset with Inertial Sensors - Machine Learning Version [Dataset]. http://doi.org/10.5281/zenodo.7881952
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 5, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Konak
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Accurate and comprehensive nursing documentation is essential to ensure quality patient care. To streamline this process, we present SONAR, a publicly available dataset of nursing activities recorded using inertial sensors in a nursing home. The dataset includes 14 sensor streams, such as acceleration and angular velocity, and 23 activities recorded by 14 caregivers using five sensors for 61.7 hours. The caregivers wore the sensors as they performed their daily tasks, allowing for continuous monitoring of their activities.

  11. vioHAR Output Dataset

    • figshare.com
    zip
    Updated Feb 28, 2025
    Cite
    Kazi Md. Shahiduzzaman (2025). vioHAR Output Dataset [Dataset]. http://doi.org/10.6084/m9.figshare.28515575.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 28, 2025
    Dataset provided by
    figshare
    Authors
    Kazi Md. Shahiduzzaman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is designed for Human Activity Recognition (HAR) using visual odometry. It is derived from synchronized sensor data, including RGB images and IMU measurements (accelerometer and gyroscope), with extracted features as its output. The dataset captures 8 daily human activities of 14 subjects in diverse environments, providing a valuable resource for developing and evaluating HAR models.

  12. HUMAN4D: A Human-Centric Multimodal Dataset for Motions and Immersive Media

    • explore.openaire.eu
    Updated Aug 18, 2020
    Cite
    Anargyros Chatzitofis; Leonidas Saroglou; Prodromos Boutis; Petros Drakoulis; Nikolaos Zioulis; Shishir Subramanyam; Bart Kevelham; Caecilia Charbonnier; Pablo Cesar; Dimitrios Zarpalas; Stefanos Kollias; Petros Daras (2020). HUMAN4D: A Human-Centric Multimodal Dataset for Motions and Immersive Media [Dataset]. http://doi.org/10.21227/rv2m-wh93
    Explore at:
    Dataset updated
    Aug 18, 2020
    Authors
    Anargyros Chatzitofis; Leonidas Saroglou; Prodromos Boutis; Petros Drakoulis; Nikolaos Zioulis; Shishir Subramanyam; Bart Kevelham; Caecilia Charbonnier; Pablo Cesar; Dimitrios Zarpalas; Stefanos Kollias; Petros Daras
    Description

    We introduce HUMAN4D, a large and multimodal 4D dataset that contains a variety of human activities simultaneously captured by a professional marker-based MoCap, a volumetric capture and an audio recording system. By capturing 2 female and 2 male professional actors performing various full-body movements and expressions, HUMAN4D provides a diverse set of motions and poses encountered as part of single- and multi-person daily, physical and social activities (jumping, dancing, etc.), along with multi-RGBD (mRGBD), volumetric and audio data. Despite the existence of multi-view color datasets captured with the use of hardware (HW) synchronization, to the best of our knowledge, HUMAN4D is the first and only public resource that provides volumetric depth maps with high synchronization precision due to the use of intra- and inter-sensor HW-SYNC. Moreover, a spatio-temporally aligned scanned and rigged 3D character complements HUMAN4D to enable joint research on time-varying and high-quality dynamic meshes.

    We provide evaluation baselines by benchmarking HUMAN4D with state-of-the-art human pose estimation and 3D compression methods. For the former, we apply 2D and 3D pose estimation algorithms both on single- and multi-view data cues. For the latter, we benchmark open-source 3D codecs on volumetric data respecting online volumetric video encoding and steady bit-rates. Furthermore, qualitative and quantitative visual comparison between mesh-based volumetric data reconstructed in different qualities showcases the available options with respect to 4D representations. HUMAN4D is introduced to the computer vision and graphics research communities to enable joint research on spatio-temporally aligned pose, volumetric, mRGBD and audio data cues. The dataset and its code are available online.

  13. 3D Kinect Total Body Database for Back Stretches

    • kilthub.cmu.edu
    txt
    Updated May 30, 2023
    Cite
    Blake Capella; Deepak Subramanian; Roberta Klatzky; Daniel Siewiorek (2023). 3D Kinect Total Body Database for Back Stretches [Dataset]. http://doi.org/10.1184/R1/7999364.v2
    Explore at:
    Available download formats: txt
    Dataset updated
    May 30, 2023
    Dataset provided by
    Carnegie Mellon University
    Authors
    Blake Capella; Deepak Subramanian; Roberta Klatzky; Daniel Siewiorek
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The data was collected by a Kinect V2 as a set of X, Y, Z coordinates at 60 fps during 6 different yoga-inspired back stretches. There are 541 files in the dataset, each containing position and velocity for 25 body joints. These joints include: Head, Neck, SpineShoulder, SpineMid, SpineBase, ShoulderRight, ShoulderLeft, HipRight, HipLeft, ElbowRight, WristRight, HandRight, HandTipRight, ThumbRight, ElbowLeft, WristLeft, HandLeft, HandTipLeft, ThumbLeft, KneeRight, AnkleRight, FootRight, KneeLeft, AnkleLeft, FootLeft.

    The program used to record this data was adapted from Thomas Sanchez Langeling's skeleton recording code. It was set to record data for each body part as a separate file, repeated for each exercise. Each body part for a specific exercise is stored in a distinct folder. These folders are named with the convention subjNumber_stretchName_trialNumber, where subjNumber ranges from 0-8, stretchName is one of Mermaid, Seated, Sumo, Towel, Wall, or Y, and trialNumber ranges from 0-9 and represents the repetition number. The coordinates have their origin centered at the subject's upper chest.

    The data was standardized to the following conditions:

    1. Kinect placed at a height of 2 ft 3 in
    2. Subject consistently positioned 6.5 ft away from the camera with their chest facing the camera
    3. Each participant completed 10 repetitions of each stretch before continuing on

    Data was collected from the following population:

    • Adults ages 18-21
    • Females: 4
    • Males: 5

    The following pre-processing occurred at the time of data collection. Velocity data was calculated using a discrete derivative with a spacing of 5 frames, chosen to reduce the sensitivity of the velocity function: v[n] = (x[n] - x[n-5])/5. This was applied to all body parts and all axes individually (see the sketch below).

    Related manuscript: Capella, B., Subramanian, D., Klatzky, R., & Siewiorek, D. Action Pose Recognition from 3D Camera Data Using Inter-frame and Inter-joint Dependencies. Preprint at link in references.
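
    A minimal sketch of the velocity pre-processing described above (a discrete derivative with a spacing of 5 frames); the (frames x 3) per-joint array layout is my assumption, not the dataset's documented file format.

```python
# Sketch of the described velocity computation: v[n] = (x[n] - x[n-5]) / 5,
# applied per axis. Assumes a (n_frames, 3) array of X, Y, Z for one joint.
import numpy as np

def discrete_velocity(position: np.ndarray, spacing: int = 5) -> np.ndarray:
    """position: array of shape (n_frames, 3); first `spacing` rows are left at zero."""
    v = np.zeros_like(position, dtype=float)
    v[spacing:] = (position[spacing:] - position[:-spacing]) / spacing
    return v
```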

  14. Dataset of continuous human activities performed in arbitrary directions...

    • figshare.com
    • data.4tu.nl
    bin
    Updated Nov 25, 2022
    + more versions
    Cite
    Ronny Gerhard Guendel; Matteo Unterhorst; Francesco Fioranelli; Alexander Yarovoy (2022). Dataset of continuous human activities performed in arbitrary directions collected with a distributed radar network of five nodes [Dataset]. http://doi.org/10.4121/16691500.v3
    Explore at:
    Available download formats: bin
    Dataset updated
    Nov 25, 2022
    Dataset provided by
    4TU.ResearchData
    Authors
    Ronny Gerhard Guendel; Matteo Unterhorst; Francesco Fioranelli; Alexander Yarovoy
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Please review the document README_v2.pdf

    The data can be extracted with the MATLAB Live Script dataread.mlx.

    Referencing the dataset: Guendel, Ronny Gerhard; Unterhorst, Matteo; Fioranelli, Francesco; Yarovoy, Alexander (2021): Dataset of continuous human activities performed in arbitrary directions collected with a distributed radar network of five nodes. 4TU.ResearchData. Dataset. https://doi.org/10.4121/16691500.v3

    @misc{Guendel2022,
      author = "Ronny Gerhard Guendel and Matteo Unterhorst and Francesco Fioranelli and Alexander Yarovoy",
      title  = "{Dataset of continuous human activities performed in arbitrary directions collected with a distributed radar network of five nodes}",
      year   = "2021",
      month  = "Nov",
      url    = "https://data.4tu.nl/articles/dataset/Dataset_of_continuous_human_activities_performed_in_arbitrary_directions_collected_with_a_distributed_radar_network_of_five_nodes/16691500",
      doi    = "10.4121/16691500.v3"
    }

    Paper references are: Guendel, R.G., Fioranelli, F., Yarovoy, A.: Distributed radar fusion and recurrent networks for classification of continuous human activities. IET Radar Sonar Navig. 1–18 (2022). https://doi.org/10.1049/rsn2.12249

    R. G. Guendel, F. Fioranelli and A. Yarovoy, "Evaluation Metrics for Continuous Human Activity Classification Using Distributed Radar Networks," 2022 IEEE Radar Conference (RadarConf22), 2022, pp. 1-6, doi: 10.1109/RadarConf2248738.2022.9764181.

    R. G. Guendel, M. Unterhorst, E. Gambi, F. Fioranelli and A. Yarovoy, "Continuous human activity recognition for arbitrary directions with distributed radars," 2021 IEEE Radar Conference (RadarConf21), 2021, pp. 1-6, doi: 10.1109/RadarConf2147009.2021.9454972.

  15. Human Action Detection – Artificial Intelligence.

    • gts.ai
    json
    Updated May 31, 2024
    Cite
    GTS (2024). Human Action Detection – Artificial Intelligence. [Dataset]. https://gts.ai/dataset-download/human-action-detection-ai-dataset-download-now/
    Explore at:
    Available download formats: json
    Dataset updated
    May 31, 2024
    Dataset provided by
    GLOBOSE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
    Authors
    GTS
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Human action detection in the context of artificial intelligence refers to the process of identifying and classifying human actions or activities from visual data, such as images or videos, using machine learning and computer vision techniques.

  16. Convolutional and recurrent neural network for human activity recognition:...

    • plos.figshare.com
    docx
    Updated May 30, 2023
    Cite
    Vincent Hernandez; Tomoya Suzuki; Gentiane Venture (2023). Convolutional and recurrent neural network for human activity recognition: Application on American sign language [Dataset]. http://doi.org/10.1371/journal.pone.0228869
    Explore at:
    Available download formats: docx
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Vincent Hernandez; Tomoya Suzuki; Gentiane Venture
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    United States
    Description

    Human activity recognition is an important and difficult topic to study because of the large variability between tasks repeated several times by a subject and between subjects. This work is motivated by providing time-series signal classification with robust validation and test approaches. This study proposes to classify 60 signs from American Sign Language based on data provided by the LeapMotion sensor, using different conventional machine learning and deep learning models, including a model called DeepConvLSTM that integrates convolutional and recurrent layers with Long Short-Term Memory cells. A kinematic model of the right and left forearm/hand/fingers/thumb is proposed, as well as a simple data augmentation technique to improve the generalization of neural networks. DeepConvLSTM and the convolutional neural network demonstrated the highest accuracy compared to other models, with 91.1 (3.8)% and 89.3 (4.0)% respectively, versus the recurrent neural network or multi-layer perceptron. Integrating convolutional layers in a deep learning model seems to be an appropriate solution for sign language recognition with depth sensor data.
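
    For orientation, below is a minimal PyTorch sketch of a DeepConvLSTM-style classifier (convolutional feature extraction followed by LSTM layers and a linear head). It is illustrative only and is not the authors' exact architecture, layer sizes, or training setup.

```python
# Minimal sketch of a DeepConvLSTM-style classifier: conv layers, an LSTM,
# and a linear head. Not the authors' exact model; sizes are placeholders.
import torch
import torch.nn as nn

class DeepConvLSTMSketch(nn.Module):
    def __init__(self, n_channels: int, n_classes: int, hidden: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels); Conv1d expects (batch, channels, time)
        feats = self.conv(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])  # classify from the last time step

# Example with 60 sign classes (per the description) and an assumed 9 input channels:
# logits = DeepConvLSTMSketch(n_channels=9, n_classes=60)(torch.randn(8, 100, 9))
```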

  17. uActivity

    • figshare.com
    zip
    Updated Mar 10, 2025
    Cite
    Kazi Md. Shahiduzzaman (2025). uActivity [Dataset]. http://doi.org/10.6084/m9.figshare.28538006.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 10, 2025
    Dataset provided by
    figshare
    Authors
    Kazi Md. Shahiduzzaman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A trajectory-based dataset collected using the smartphone's IMU.

  18. DATASET_SDALLE

    • zenodo.org
    • data.niaid.nih.gov
    bin
    Updated Feb 10, 2025
    Cite
    Ahmed Abdellatif Hamed IBRAHIM; Mohamed Atef El-Khoreby; Azza Kamal Moawad; Hanady Hussien Issa; Shereen Ismail Fawaz; Mohammed Ibrahim Awad; Mohamed Khaled Farouk (2025). DATASET_SDALLE [Dataset]. http://doi.org/10.5281/zenodo.14841611
    Explore at:
    Available download formats: bin
    Dataset updated
    Feb 10, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Ahmed Abdellatif Hamed IBRAHIM; Mohamed Atef El-Khoreby; Azza Kamal Moawad; Hanady Hussien Issa; Shereen Ismail Fawaz; Mohammed Ibrahim Awad; Mohamed Khaled Farouk
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Feb 2025
    Description

    The goal of this research is to capture and record precise lower-body muscle activity during activities such as walking, jumping, and stair navigation, all with the aim of designing a lower-limb exoskeleton to enhance mobility and rehabilitation.

    This dataset is collected from 9 healthy subjects. The SDALLE DAQ system is used for data collection, with EMG and IMU sensors included. Data has been extracted from the following muscles: Rectus Femoris, Vastus Medialis, Vastus Lateralis, and Semitendinosus, on both the left and right sides. The intended activities are walking, jogging, stairs up, and stairs down.

    This dataset is collected as part of the work done on the research project "Development of a Smart Data Acquisition system for Lower Limb Exoskeletons (SDALLE)" which is funded by Information Technology Industry Development Agency (ITIDA) – Information Technology Academia Collaboration (ITAC) program named grant CFP243/PRP.

  19. Computer Vision Human Action Recognition for Electronic Device Assembly...

    • ieee-dataport.org
    Updated Jul 7, 2025
    Cite
    Chao-Lung Yang (2025). Computer Vision Human Action Recognition for Electronic Device Assembly (EDA) [Dataset]. https://ieee-dataport.org/documents/computer-vision-human-action-recognition-electronic-device-assembly-eda
    Explore at:
    Dataset updated
    Jul 7, 2025
    Authors
    Chao-Lung Yang
    Description

    000 frames

  20. NTNU-HARChildren for the Validation of HAR-models for typically developing...

    • search.dataone.org
    • dataverse.no
    • +1 more
    Updated Sep 25, 2024
    Cite
    Tørring, Marte Fossflaten; Logacjov, Aleksej; Ustad, Astrid; Brændvik, Siri Merete; Roeleveld, Karin; Bardal, Ellen Marie (2024). NTNU-HARChildren for the Validation of HAR-models for typically developing children and children with Cerebral Palsy [Dataset]. http://doi.org/10.18710/EPCXCC
    Explore at:
    Dataset updated
    Sep 25, 2024
    Dataset provided by
    DataverseNO
    Authors
    Tørring, Marte Fossflaten; Logacjov, Aleksej; Ustad, Astrid; Brændvik, Siri Merete; Roeleveld, Karin; Bardal, Ellen Marie
    Description

    The data set includes 63 typically developing (TD) children and 16 children with Cerebral Palsy (CP), Gross Motor Function Classification System (GMFCS) levels I and II, wearing two accelerometers, one on the lower back and one on the thigh, together with the corresponding video annotations of activities. The files include the accelerometer signal and annotations for each study subject. The files named 001-120 are individuals with CP, and the files named PM01-16 and TD01-48 are typically developing children (see the sketch below). The study protocol was approved by the Regional Committee for Medical and Health Research Ethics (reference nr: 2016/707/REK nord) and the Norwegian Center for Research Data (NSD nr: 50683). All participants and guardians signed a written informed consent before being enrolled in the study. NTNU-HARChildren was used to validate a machine learning model for activity recognition in the paper "Validation of two novel human activity recognition models for typically developing children and children with Cerebral Palsy" (https://doi.org/10.1371/journal.pone.0308853). For code used in the original paper see: https://github.com/ntnu-ai-lab/harth-ml-experiments.
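
    A small sketch, under the naming convention described above, for grouping files into CP and TD subjects; file extensions and paths are assumptions.

```python
# Sketch based on the naming convention described above: files named "001"-"120"
# belong to children with CP, while "PM..." and "TD..." files are typically
# developing children. Exact file extensions/paths are assumptions.
def subject_group(file_stem: str) -> str:
    if file_stem.startswith(("PM", "TD")):
        return "typically_developing"
    if file_stem.isdigit():
        return "cerebral_palsy"
    raise ValueError(f"Unrecognised file name: {file_stem}")

# subject_group("PM07") -> "typically_developing"
# subject_group("023")  -> "cerebral_palsy"
```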
