Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context: This dataset consists of subject-wise daily living activity data.
The Human Activity Recognition Dataset has been collected from 30 subjects performing six different activities (Walking, Walking Upstairs, Walking Downstairs, Sitting, Standing, Laying). It consists of inertial sensor data that was collected using a smartphone carried by the subjects.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The dataset is collected from 15 participants wearing 5 Shimmer wearable sensor nodes on the locations listed in Table 1. The participants performed a series of 16 activities (7 basic and 9 postural transitions), listed in Table 2.
The captured signals are the following:
The sampling rate of the devices is set to 51.2 Hz.
DATASET FILES
The dataset contains the following files:
Where X corresponds to the participant ID, and numbers 1-5 to the device IDs indicated in Table 1.
Each .csv file has the following format:
Table 1: LOCATIONS
Table 2: ACTIVITY LABELS
(Arrows (->) indicate transitions between activities)
Developing robust and domain-adaptive models remains challenging without diverse datasets.
https://ora.ox.ac.uk/objects/uuid:99d7c092-d865-4a19-b096-cc16440cd001
This dataset contains Axivity AX3 wrist-worn activity tracker data that were collected from 151 participants in 2014-2016 around the Oxfordshire area. Participants were asked to wear the device in daily living for a period of roughly 24 hours, amounting to a total of almost 4,000 hours. Vicon Autographer wearable cameras and Whitehall II sleep diaries were used to obtain the ground truth activities performed during the period (e.g. sitting watching TV, walking the dog, washing dishes, sleeping), resulting in more than 2,500 hours of labelled data. Accompanying code to analyse this data is available at https://github.com/activityMonitoring/capture24.
The following papers describe the data collection protocol in full:
i.) Gershuny J, Harms T, Doherty A, Thomas E, Milton K, Kelly P, Foster C (2020) Testing self-report time-use diaries against objective instruments in real time. Sociological Methodology. doi: 10.1177/0081175019884591
ii.) Willetts M, Hollowell S, Aslett L, Holmes C, Doherty A (2018) Statistical machine learning of sleep and physical activity phenotypes from sensor data in 96,220 UK Biobank participants. Scientific Reports 8(1):7961.
Regarding data protection, the Clinical Data Set will not include any direct subject identifiers. However, it is possible that the Data Set may contain certain information that could be used in combination with other information to identify a specific individual, such as a combination of activities specific to that individual ("Personal Data"). Accordingly, in the conduct of the Analysis, users will comply with all applicable laws and regulations relating to information privacy. Further, the user agrees to preserve the confidentiality of, and not attempt to identify, individuals in the Data Set.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In recent years, advances in computing and sensing technologies have contributed to the development of effective human activity recognition systems. In context-aware and ambient assisted living applications, classification of body postures and movements aids the development of health systems that improve the quality of life of the disabled and the elderly. In this paper we describe a comparative analysis of data-driven activity recognition techniques against a novel supervised learning technique called artificial hydrocarbon networks (AHN). We prove that artificial hydrocarbon networks are suitable for efficient classification of body postures and movements, providing a comparison between their performance and that of other well-known supervised learning methods.
https://www.wiseguyreports.com/pages/privacy-policy
BASE YEAR | 2024
HISTORICAL DATA | 2019 - 2024
REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
MARKET SIZE 2023 | 16.88 (USD Billion)
MARKET SIZE 2024 | 20.89 (USD Billion)
MARKET SIZE 2032 | 114.8 (USD Billion)
SEGMENTS COVERED | Recognition Type, Application, Technology, Deployment Mode, End-User, Regional
COUNTRIES COVERED | North America, Europe, APAC, South America, MEA
KEY MARKET DYNAMICS | AI-powered surveillance systems; adoption in healthcare and sports; edge computing advancements
MARKET FORECAST UNITS | USD Billion
KEY COMPANIES PROFILED | Sony, Microsoft, Alibaba, Huawei, Apple, Tencent, NVIDIA, Samsung, Qualcomm, LG, Intel, Google, Amazon, Baidu, Panasonic
MARKET FORECAST PERIOD | 2025 - 2032
KEY MARKET OPPORTUNITIES | Edge computing and 5G network integration; contactless gesture recognition applications; healthcare and rehabilitation technologies; industrial automation and robotics; retail and customer experience enhancements
COMPOUND ANNUAL GROWTH RATE (CAGR) | 23.74% (2025 - 2032)
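As a quick arithmetic check (not part of the report itself), the stated CAGR is consistent with the 2024 and 2032 market-size figures in the table above:

# CAGR implied by the market sizes above: 20.89 (2024) -> 114.8 (2032), USD billion
start, end, years = 20.89, 114.8, 2032 - 2024
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")   # -> 23.74%, matching the reported figure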
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is the open repository for the USI-HEAR dataset.
USI-HEAR is a dataset containing inertial data collected with Nokia Bell Labs' eSense earbuds. The eSense's left unit contains a 6-axis IMU (i.e., a 3-axis accelerometer and a 3-axis gyroscope). The dataset comprises data collected from 30 different participants performing 7 scripted activities (headshaking, nodding, speaking, eating, staying still, walking, and walking while speaking). Each activity was recorded over ~180 seconds. The sampling rate is variable (with a universal lower bound of ~60 Hz) due to Android API limitations.
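Because the sampling rate varies between recordings, a common first step is to resample each stream onto a uniform time grid before windowing or model training. The sketch below assumes pandas, a millisecond timestamp column, and accelerometer/gyroscope columns named ax..gz; these names, the file name, and the 60 Hz target are illustrative assumptions, not part of the USI-HEAR specification.

import pandas as pd

# Hypothetical file and column names; the actual USI-HEAR layout may differ.
df = pd.read_csv("P01_nodding.csv")
df["t"] = pd.to_datetime(df["timestamp"], unit="ms")
df = df.set_index("t").sort_index()
df = df[~df.index.duplicated()]            # guard against repeated timestamps

# Build a uniform ~60 Hz grid spanning the recording and interpolate the
# variable-rate IMU samples onto it.
grid = pd.date_range(df.index[0], df.index[-1], freq="16667us")   # ~60 Hz
uniform = (
    df[["ax", "ay", "az", "gx", "gy", "gz"]]
    .reindex(df.index.union(grid))
    .interpolate(method="time")
    .reindex(grid)
)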
Current contents:
Until the main publication corresponding to the dataset is available, we kindly ask you to contact the first author to request access to the data.
In future versions, the repository will also include:
The data was collected for the CG4002 school project. The machine learning part of the project aims to classify different dance moves.
There are 9 dance moves: dab, elbowkick, gun, hair, listen, logout, pointhigh, sidepump, and wipetable. The data is in time series and in raw form: yaw, pitch, roll, gyrox, gyroy, gyroz, accx, accy, and accz. The files are labelled according to the corresponding subject, dance move, and trial number. Using subject3/listen2.csv as an example, the data in this file belongs to subject 3 performing the dance move listen for the second time. The data is collected from a sensor on the right hand for 1 minute at 25 Hz. Each subject performs the data collection 3 times for each of the 9 dance moves, starting from a standing-still position.
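A minimal loading and windowing sketch is shown below; it assumes one sample per row with the nine channels in the order listed above and no header row, which may not match the actual CSV layout.

import numpy as np
import pandas as pd

FS = 25  # Hz, sampling rate stated above
COLS = ["yaw", "pitch", "roll", "gyrox", "gyroy", "gyroz", "accx", "accy", "accz"]

def load_trial(path):
    """Load one trial, e.g. subject3/listen2.csv, as an (n_samples, 9) array."""
    return pd.read_csv(path, header=None, names=COLS).to_numpy()

def sliding_windows(x, win_s=2.0, overlap=0.5):
    """Cut a trial into fixed-length, overlapping windows for classification."""
    win = int(win_s * FS)                     # 50 samples for a 2 s window
    step = int(win * (1 - overlap))
    return np.stack([x[i:i + win] for i in range(0, len(x) - win + 1, step)])

# windows = sliding_windows(load_trial("subject3/listen2.csv"))
# windows.shape -> (n_windows, 50, 9)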
I would like to thank my teammates in CG4002 Group 18 (2020/2021 Semester 2) for the time and effort they put into the project.
I hope to see different ways of classifying the dance moves.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Accurate and comprehensive nursing documentation is essential to ensure quality patient care. To streamline this process, we present SONAR, a publicly available dataset of nursing activities recorded using inertial sensors in a nursing home. The dataset includes 14 sensor streams, such as acceleration and angular velocity, and 23 activities recorded by 14 caregivers using five sensors for 61.7 hours. The caregivers wore the sensors as they performed their daily tasks, allowing for continuous monitoring of their activities.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is designed for Human Activity Recognition (HAR) using visual odometry. It is derived from synchronized sensor data, including RGB images and IMU measurements (accelerometer and gyroscope), with extracted features as its output. The dataset captures 8 daily human activities of 14 subjects in diverse environments, providing a valuable resource for developing and evaluating HAR models.
We introduce HUMAN4D, a large and multimodal 4D dataset that contains a variety of human activities simultaneously captured by a professional marker-based MoCap, a volumetric capture and an audio recording system. By capturing 2 female and 2 male professional actors performing various full-body movements and expressions, HUMAN4D provides a diverse set of motions and poses encountered as part of single- and multi-person daily, physical and social activities (jumping, dancing, etc.), along with multi-RGBD (mRGBD), volumetric and audio data. Despite the existence of multi-view color datasets captured with the use of hardware (HW) synchronization, to the best of our knowledge, HUMAN4D is the first and only public resource that provides volumetric depth maps with high synchronization precision due to the use of intra- and inter-sensor HW-SYNC. Moreover, a spatio-temporally aligned scanned and rigged 3D character complements HUMAN4D to enable joint research on time-varying and high-quality dynamic meshes. We provide evaluation baselines by benchmarking HUMAN4D with state-of-the-art human pose estimation and 3D compression methods. For the former, we apply 2D and 3D pose estimation algorithms both on single- and multi-view data cues. For the latter, we benchmark open-source 3D codecs on volumetric data respecting online volumetric video encoding and steady bit-rates. Furthermore, qualitative and quantitative visual comparison between mesh-based volumetric data reconstructed in different qualities showcases the available options with respect to 4D representations. HUMAN4D is introduced to the computer vision and graphics research communities to enable joint research on spatio-temporally aligned pose, volumetric, mRGBD and audio data cues. The dataset and its code are available online.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data was collected by a Kinect V2 as a set of X, Y, Z coordinates at 60 fps during 6 different yoga-inspired back stretches. There are 541 files in the dataset, each containing position and velocity for 25 body joints. These joints include: Head, Neck, SpineShoulder, SpineMid, SpineBase, ShoulderRight, ShoulderLeft, HipRight, HipLeft, ElbowRight, WristRight, HandRight, HandTipRight, ThumbRight, ElbowLeft, WristLeft, HandLeft, HandTipLeft, ThumbLeft, KneeRight, AnkleRight, FootRight, KneeLeft, AnkleLeft, FootLeft.
The program used to record this data was adapted from Thomas Sanchez Langeling's skeleton recording code. It was set to record data for each body part as a separate file, repeated for each exercise. Each body part for a specific exercise is stored in a distinct folder. These folders are named with the following convention: subjNumber_stretchName_trialNumber. The subjNumber ranged from 0 - 8. The stretchName was one of the following: Mermaid, Seated, Sumo, Towel, Wall, Y. The trialNumber ranged from 0 - 9 and represented the repetition number. The coordinates have an origin centered at the subject's upper chest.
The data collection was standardized to the following conditions:
1) The Kinect was placed at a height of 2 ft 3 in.
2) Subjects were consistently positioned 6.5 ft away from the camera with their chests facing the camera.
3) Each participant completed 10 repetitions of each stretch before continuing on.
Data was collected from the following population:
* Adults ages 18-21
* Females: 4
* Males: 5
The following pre-processing occurred at the time of data collection. Velocity data were calculated using a discrete derivative with a spacing of 5 frames, chosen to reduce the sensitivity of the velocity function: v[n] = (x[n] - x[n-5])/5. This is applied to all body parts and all axes individually.
Related manuscript: Capella, B., Subrmanian, D., Klatzky, R., & Siewiorek, D. Action Pose Recognition from 3D Camera Data Using Inter-frame and Inter-joint Dependencies. Preprint at link in references.
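The velocity pre-processing described above is a 5-frame backward difference applied per joint and per axis; a minimal sketch is given below (the (n_frames, 3) array layout per joint is an assumption, not part of the dataset specification).

import numpy as np

def velocity(pos, spacing=5):
    """Discrete derivative v[n] = (x[n] - x[n - spacing]) / spacing,
    applied independently to each axis of one joint's position track."""
    v = np.zeros_like(pos, dtype=float)
    v[spacing:] = (pos[spacing:] - pos[:-spacing]) / spacing
    return v

# pos: (n_frames, 3) array of X, Y, Z coordinates for one joint at 60 fps
# vel = velocity(pos)   # the first `spacing` frames are left as zeros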
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Please review the document README_v2.pdf
The data can be extracted with the MATLAB Live Script dataread.mlx.
Referencing the dataset Guendel, Ronny Gerhard; Unterhorst, Matteo; Fioranelli, Francesco; Yarovoy, Alexander (2021): Dataset of continuous human activities performed in arbitrary directions collected with a distributed radar network of five nodes. 4TU.ResearchData. Dataset. https://doi.org/10.4121/16691500.v3
@misc{Guendel2022,
  author = "Ronny Gerhard Guendel and Matteo Unterhorst and Francesco Fioranelli and Alexander Yarovoy",
  title  = "{Dataset of continuous human activities performed in arbitrary directions collected with a distributed radar network of five nodes}",
  year   = "2021",
  month  = "Nov",
  url    = "https://data.4tu.nl/articles/dataset/Dataset_of_continuous_human_activities_performed_in_arbitrary_directions_collected_with_a_distributed_radar_network_of_five_nodes/16691500",
  doi    = "10.4121/16691500.v3"
}
Paper references are: Guendel, R.G., Fioranelli, F., Yarovoy, A.: Distributed radar fusion and recurrent networks for classification of continuous human activities. IET Radar Sonar Navig. 1-18 (2022). https://doi.org/10.1049/rsn2.12249
R. G. Guendel, F. Fioranelli and A. Yarovoy, "Evaluation Metrics for Continuous Human Activity Classification Using Distributed Radar Networks," 2022 IEEE Radar Conference (RadarConf22), 2022, pp. 1-6, doi: 10.1109/RadarConf2248738.2022.9764181.
R. G. Guendel, M. Unterhorst, E. Gambi, F. Fioranelli and A. Yarovoy, "Continuous human activity recognition for arbitrary directions with distributed radars," 2021 IEEE Radar Conference (RadarConf21), 2021, pp. 1-6, doi: 10.1109/RadarConf2147009.2021.9454972.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Human action detection in the context of artificial intelligence refers to the process of identifying and classifying human actions or activities from visual data, such as images or videos, using machine learning and computer vision techniques.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Human activity recognition is an important and difficult topic to study because of the substantial variability between tasks repeated several times by a subject and between subjects. This work is motivated by providing time-series signal classification with robust validation and test approaches. This study proposes to classify 60 signs from American Sign Language based on data provided by the LeapMotion sensor, using different conventional machine learning and deep learning models, including a model called DeepConvLSTM that integrates convolutional and recurrent layers with Long Short-Term Memory cells. A kinematic model of the right and left forearm/hand/fingers/thumb is proposed, as well as the use of a simple data augmentation technique to improve the generalization of neural networks. DeepConvLSTM and the convolutional neural network demonstrated the highest accuracy, 91.1% (3.8) and 89.3% (4.0) respectively, compared to the recurrent neural network or multi-layer perceptron. Integrating convolutional layers in a deep learning model seems to be an appropriate solution for sign language recognition with depth sensor data.
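For readers unfamiliar with the architecture, a minimal PyTorch sketch of a DeepConvLSTM-style classifier is given below (convolutional feature extraction over time followed by stacked LSTM layers). The layer sizes and the assumed input shape are illustrative and not the configuration used in the study.

import torch
import torch.nn as nn

class DeepConvLSTM(nn.Module):
    """Conv1d feature extraction over the time axis followed by stacked LSTMs."""

    def __init__(self, n_channels, n_classes, conv_filters=64, lstm_units=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, conv_filters, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(conv_filters, conv_filters, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_filters, lstm_units, num_layers=2, batch_first=True)
        self.fc = nn.Linear(lstm_units, n_classes)

    def forward(self, x):                                  # x: (batch, time, channels)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)   # convolve along time
        out, _ = self.lstm(x)                              # (batch, time, lstm_units)
        return self.fc(out[:, -1])                         # classify from last step

# Illustrative usage: batches of windows with, say, 30 hand-kinematics channels
# model = DeepConvLSTM(n_channels=30, n_classes=60)
# logits = model(torch.randn(8, 100, 30))                  # -> (8, 60)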
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Trajectory-based dataset collected using the smartphone's IMU.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The goal of this research is to capture and record precise lower-body muscle activity during activities such as walking, jumping, and stair navigation, all with the aim of designing a lower-limb exoskeleton to enhance mobility and rehabilitation.
This dataset was collected from 9 healthy subjects. The SDALLE DAQ system, which includes EMG and IMU sensors, was used for data collection. Data were extracted from the following muscles on both the left and right sides: Rectus Femoris, Vastus Medialis, Vastus Lateralis, and Semitendinosus. The intended activities are walking, jogging, stairs up, and stairs down.
This dataset was collected as part of the work done on the research project "Development of a Smart Data Acquisition system for Lower Limb Exoskeletons (SDALLE)", which is funded by the Information Technology Industry Development Agency (ITIDA) through the Information Technology Academia Collaboration (ITAC) program under grant CFP243/PRP.
The data set includes 63 typically developing (TD) children and 16 children with Cerebral Palsy (CP), Gross Motor Function Classification System (GMFCS) levels I and II, wearing two accelerometers, one on the lower back and one on the thigh, together with the corresponding video annotations of activities. The files include the accelerometer signal and annotation for each study subject. The files named 001-120 are individuals with CP, and the files named PM01-16 and TD01-48 are typically developing children. The study protocol was approved by the Regional Committee for Medical and Health Research Ethics (reference nr: 2016/707/REK nord) and the Norwegian Center for Research Data (NSD nr: 50683). All participants and guardians signed a written informed consent before being enrolled in the study. The NTNU-HAR-children dataset was used to validate a machine learning model for activity recognition in the paper "Validation of two novel human activity recognition models for typically developing children and children with Cerebral Palsy" (https://doi.org/10.1371/journal.pone.0308853). For the code used in the original paper see: https://github.com/ntnu-ai-lab/harth-ml-experiments.