Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
LARa Version 02 is a freely accessible logistics dataset for human activity recognition. In the ’Innovationlab Hybrid Services in Logistics’ at TU Dortmund University, two picking scenarios and one packing scenario with 16 subjects were recorded using an optical marker-based Motion Capturing system (OMoCap), Inertial Measurement Units (IMUs), and an RGB camera. Each subject was recorded for one hour (960 minutes in total). All the given data have been labeled and categorised into eight activity classes and 19 binary coarse-semantic descriptions, also called attributes. In total, the dataset contains 221 unique attribute representations.
You can find the latest version of the annotation tool here: https://github.com/wilfer9008/Annotation_Tool_LARa
Upgrade:
If you use this dataset for research, please cite the following paper: “LARa: Creating a Dataset for Human Activity Recognition in Logistics Using Semantic Attributes”, Sensors 2020, DOI: 10.3390/s20154083.
If you use the Mbientlab Networks, please cite the following paper: “From Human Pose to On-Body Devices for Human-Activity Recognition”, 25th International Conference on Pattern Recognition (ICPR), 2021, DOI: 10.1109/ICPR48806.2021.9412283.
If you have any questions about the dataset, please contact friedrich.niemann@tu-dortmund.de.
https://www.wiseguyreports.com/pages/privacy-policy
BASE YEAR | 2024 |
HISTORICAL DATA | 2019 - 2024 |
REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
MARKET SIZE 2023 | 5.22(USD Billion) |
MARKET SIZE 2024 | 5.9(USD Billion) |
MARKET SIZE 2032 | 15.7(USD Billion) |
SEGMENTS COVERED | Service Type ,Application ,Technology ,End-User Industry ,Regional |
COUNTRIES COVERED | North America, Europe, APAC, South America, MEA |
KEY MARKET DYNAMICS | AI and ML advancements, self-driving car technology, growing healthcare applications, increasing image content, automation and efficiency |
MARKET FORECAST UNITS | USD Billion |
KEY COMPANIES PROFILED | Scale AI ,Anolytics ,Sama ,Hive ,Keymakr ,Mighty AI ,Labelbox ,SuperAnnotate ,TaskUs ,Veritone ,Cogito Tech ,CloudFactory ,Appen ,Figure Eight ,Lionbridge AI |
MARKET FORECAST PERIOD | 2024 - 2032 |
KEY MARKET OPPORTUNITIES | 1. Advancements in AI and ML 2. Rising demand from e-commerce 3. Growth in autonomous vehicles 4. Increasing focus on data privacy 5. Emergence of cloud-based annotation tools |
COMPOUND ANNUAL GROWTH RATE (CAGR) | 13.01% (2024 - 2032) |
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
OpenPack is an open-access logistics dataset for human activity recognition, containing human movement and package information from 10 subjects in four scenarios. Human movement information is subdivided into three types of data: acceleration, physiological, and depth-sensing data. The package information includes the size and number of items included in each packaging job.
In the "Humanware laboratory" at IST Osaka University, an experiment to mimic logistics-center labor was designed under the supervision of industrial engineers. Workers with previous packaging experience performed a set of packaging tasks according to an instruction manual from a real-life logistics center. During the different scenarios, subjects were recorded while performing packing operations using LiDAR, Kinect, and RealSense depth sensors, while also wearing four IMU devices and two Empatica E4 wearable sensors. Besides sensor data, this dataset contains timestamp information collected from the handheld terminal used to register product, packet, and address label codes, as well as package details that can be useful for relating operations to specific packages.
The four scenarios are: sequential packing, mixed-item collection, pre-ordered items, and time-sensitive stressors. Each subject performed 20 packing jobs in each of 5 work sessions, for a total of 100 packing jobs per subject. Approximately 50 hours of packaging operations have been labeled into 10 global operation classes and 16 sub-action classes for this dataset. Sub-action classes are not exclusive to a single operation, although some appear in only one or two operations.
Tutorial Dataset -> Preprocessed Dataset (IMU with Operation Labels)
In this repository (Full Dataset), the data and label files are stored separately, and we have received many comments that they were difficult to combine. Therefore, for tutorial purposes, we have created a number of CSV files containing the four IMUs' sensor data together with the operation labels. These files are included in this version as "preprocessed-IMU-with-operation-labels.zip".
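To illustrate how such a preprocessed file could be used, here is a minimal Python sketch that separates IMU channels from operation labels; the column names and values below are hypothetical placeholders, not the actual schema of the published CSV files:

```python
import csv
import io

# Minimal sketch of separating IMU channels from operation labels in one of
# the preprocessed CSV files. Column names and values are made up for
# illustration; check the real files for the actual schema.
csv_text = """timestamp,acc_x,acc_y,acc_z,operation
0,0.01,-0.98,0.12,100
1,0.02,-0.97,0.11,100
2,0.03,-0.95,0.10,200
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
labels = [row["operation"] for row in rows]                          # one label per sample
sensors = [[float(row[c]) for c in ("acc_x", "acc_y", "acc_z")] for row in rows]

print(len(sensors), len(set(labels)))  # 3 samples, 2 distinct operations
```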
NOTE: Please be aware that some operation labels have been slightly changed from those in version v0.3.2 to correct annotation errors.
Work is continuously being done to update and improve this dataset. When downloading and using this dataset please verify that the version is up to date with the latest release. The latest release [1.0.0] was uploaded on 14/07/2022. You can find information on how to use this dataset at: https://open-pack.github.io/
We hosted an activity recognition competition using this dataset (OpenPack v0.3.x), with awards presented at a PerCom 2023 workshop. The task was simple: recognize 10 work operations from the OpenPack dataset. You can refer to this website for coding materials relevant to this dataset: https://open-pack.github.io/challenge2022
Attribution-NonCommercial 3.0 (CC BY-NC 3.0)https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
UA_L-DoTT (University of Alabama’s Large Dataset of Trains and Trucks) is a collection of camera images and 3D LiDAR point cloud scans from five different data sites. Four of the data sites targeted trains on railways and the last targeted trucks on a four-lane highway. Low light conditions were present at one of the data sites showcasing unique differences between individual sensor data. The final data site utilized a mobile platform which created a large variety of view points in images and point clouds. The dataset consists of 93,397 raw images, 11,415 corresponding labeled text files, 354,334 raw point clouds, 77,860 corresponding labeled point clouds, and 33 timestamp files. These timestamps correlate images to point cloud scans via POSIX time. The data was collected with a sensor suite consisting of five different LiDAR sensors and a camera. This provides various viewpoints and features of the same targets due to the variance in operational characteristics of the sensors. The inclusion of both raw and labeled data allows users to get started immediately with the labeled subset, or label additional raw data as needed. This large dataset is beneficial to any researcher interested in machine learning using cameras, LiDARs, or both.
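Since the timestamp files correlate images to point cloud scans via POSIX time, a nearest-timestamp match can be sketched as follows. This is a hedged Python example; the timestamps are made up, and the dataset's own timestamp file format may differ:

```python
import bisect

# Pair each image with the point-cloud scan whose POSIX timestamp is closest.
# Scan times must be sorted; the values below are illustrative only.
scan_times = [1600000000.00, 1600000000.10, 1600000000.20]
image_times = [1600000000.04, 1600000000.19]

def nearest_scan(t, scans):
    """Return the index of the scan whose timestamp is closest to t."""
    i = bisect.bisect_left(scans, t)
    candidates = [c for c in (i - 1, i) if 0 <= c < len(scans)]
    return min(candidates, key=lambda c: abs(scans[c] - t))

pairs = [(t, nearest_scan(t, scan_times)) for t in image_times]
print(pairs)  # each image time paired with the index of the closest scan
```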
The full dataset is too large (~1 TB) to be uploaded to Mendeley Data. Please see the attached link for access to the full dataset.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
LARa Version 03 is a freely accessible logistics dataset for human activity recognition. In the “Innovationlab Hybrid Services in Logistics” at TU Dortmund University, two picking scenarios and one packing scenario with 16 subjects were recorded using an optical marker-based Motion Capturing system (OMoCap), Inertial Measurement Units (IMUs), and an RGB camera. Each subject was recorded for one hour (960 minutes in total). All the given data have been labelled and categorised into eight activity classes and 19 binary coarse-semantic descriptions, also called attributes. In total, the dataset contains 221 unique attribute representations.
The dataset was created according to the guidelines of the following paper: “A Tutorial on Dataset Creation for Sensor-based Human Activity Recognition”, PerCom Workshops, 2023, DOI: 10.1109/PerComWorkshops56833.2023.10150401.
LARa Version 03 contains a new annotation tool for OMoCap and RGB videos, namely the Sequence Attribute Retrieval Annotator (SARA). SARA, developed on the basis of the LARa Version 02 annotation tool, includes desirable features and attempts to overcome limitations found in the LARa annotation tool. Furthermore, a few features were included based on an explorative study of previously developed annotation tools (see the journal paper). In alignment with the LARa annotation tool, SARA focuses on OMoCap and video annotations. Note, however, that SARA is not intended to be a video annotation tool with features such as subject tracking and multiple-subject annotation. Here, the video is considered a supporting input to the OMoCap annotation. We recommend other tools for purely video-based multiple-human activity annotation, including subject tracking, segmentation, and pose estimation. There are different ways of installing the annotation tool: compiled binaries (executable files) for Windows and Mac can be downloaded directly from here. Python users can install the tool from https://pypi.org/project/annotation-tool/ (PyPI): “pip install annotation-tool”. For more information, please refer to the “Annotation Tool - Installation and User Manual”.
Upgrade:
If you use this dataset for research, please cite the following paper: “LARa: Creating a Dataset for Human Activity Recognition in Logistics Using Semantic Attributes”, Sensors 2020, DOI: 10.3390/s20154083.
If you use the Mbientlab Networks, please cite the following paper: “From Human Pose to On-Body Devices for Human-Activity Recognition”, 25th International Conference on Pattern Recognition (ICPR), 2021, DOI: 10.1109/ICPR48806.2021.9412283.
For any questions about the dataset, please contact Friedrich Niemann at friedrich.niemann@tu-dortmund.de.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Customs records are available for NOTATION C.O. PORT LOGISTICS GROUP. Learn about its importers, supply capabilities, and the countries to which it supplies goods.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
SPARL is a freely accessible data set for sensor-based activity recognition of pallets in logistics. The data set consists of 20 recordings from three scenarios. A description of the scenarios can be found in the protocol file.
Four different sensors were used simultaneously for all recordings:
MSR Electronics MSR 145
Sampling rate 50 Hz
MBIENTLAB MetaMotionS
Sampling rate 100 Hz
Kistler KiDaQ Module 5512A
Sampling rate 100 kHz
The raw data are also downsampled to 5 kHz and 20 kHz for easier processing
Holybro Flightcontroller PX4FMU
The board uses two accelerometers and two gyroscopes, all with a sampling rate of 1000 Hz
Accelerometer 1: InvenSense MPU6000
Accelerometer 2: STMicroelectronics LSM303D
Gyroscope 1: InvenSense MPU6000
Gyroscope 2: STMicroelectronics L3GD20
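The downsampling mentioned for the Kistler raw data (100 kHz down to 5 kHz or 20 kHz) can be sketched naively in Python by averaging non-overlapping sample blocks. This is only an illustration of the rate conversion, not the dataset's actual processing; a real pipeline would apply a proper anti-aliasing filter first (e.g. scipy.signal.decimate):

```python
import numpy as np

# Naive downsampling from 100 kHz to 5 kHz by averaging blocks of
# 100_000 / 5_000 = 20 samples. Illustrative only: no anti-aliasing filter.
fs_in, fs_out = 100_000, 5_000
factor = fs_in // fs_out                      # 20

signal = np.arange(100_000, dtype=float)      # one second of dummy data
trimmed = signal[: len(signal) // factor * factor]
downsampled = trimmed.reshape(-1, factor).mean(axis=1)

print(downsampled.shape)  # (5000,)
```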
The recordings were accompanied by three Logitech Mevo Start cameras; all video recordings are included in the dataset in anonymised form.
The videos were annotated frame by frame by one person. For this purpose, the annotation tool SARA was used, which can be found here. The JSON schema used for annotation is also included in the SPARL dataset. The R code used for our evaluation can be found on GitHub.
If you have any questions about the dataset, please contact: sven.franke@tu-dortmund.de
This dataset features over 10,000 high-quality images of packages sourced from photographers worldwide. Designed to support AI and machine learning applications, it provides a diverse and richly annotated collection of package imagery.
Key Features:
1. Comprehensive Metadata: The dataset includes full EXIF data, detailing camera settings such as aperture, ISO, shutter speed, and focal length. Additionally, each image is pre-annotated with object and scene detection metadata, making it ideal for tasks like classification, detection, and segmentation. Popularity metrics, derived from engagement on our proprietary platform, are also included.
2. Unique Sourcing Capabilities: The images are collected through a proprietary gamified platform for photographers. Competitions focused on package photography ensure fresh, relevant, and high-quality submissions. Custom datasets can be sourced on-demand within 72 hours, allowing specific requirements such as packaging types (e.g., boxes, envelopes, branded parcels) or environmental settings (e.g., in transit, on doorsteps, in warehouses) to be met efficiently.
3. Global Diversity: Photographs have been sourced from contributors in over 100 countries, ensuring a wide variety of packaging designs, shipping labels, languages, and handling conditions. The images cover diverse contexts, including retail shelves, delivery trucks, homes, and distribution centers, offering a comprehensive view of real-world packaging scenarios.
4. High-Quality Imagery: The dataset includes images with resolutions ranging from standard to high-definition to meet the needs of various projects. Both professional and amateur photography styles are represented, offering a mix of artistic and functional perspectives suitable for a variety of applications.
5. Popularity Scores: Each image is assigned a popularity score based on its performance in GuruShots competitions. This unique metric reflects how well the image resonates with a global audience, offering an additional layer of insight for AI models focused on user preferences or engagement trends.
6. AI-Ready Design: This dataset is optimized for AI applications, making it ideal for training models in tasks such as package recognition, logistics automation, label detection, and condition analysis. It is compatible with a wide range of machine learning frameworks and workflows, ensuring seamless integration into your projects.
7. Licensing & Compliance: The dataset complies fully with data privacy regulations and offers transparent licensing for both commercial and academic use.
Use Cases: 1. Training computer vision systems for package identification and tracking. 2. Enhancing logistics and supply chain AI models with real-world packaging visuals. 3. Supporting robotics and automation workflows in warehousing and delivery environments. 4. Developing datasets for augmented reality, retail shelf analysis, or smart delivery applications.
This dataset offers a comprehensive, diverse, and high-quality resource for training AI and ML models, tailored to deliver exceptional performance for your projects. Customizations are available to suit specific project needs. Contact us to learn more!
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We collected and annotated a dataset containing 105,544 annotated vehicle instances from 24,700 image frames within seven different videos, sourced online under a Creative Commons license. The video frames were annotated using the DarkLabel tool. In the interest of reusability and generalisation of the deep learning model, we considered the diversity within the collected dataset. This diversity includes changes of lighting among the videos, as well as other factors such as weather conditions, angle of observation, varying speed of the moving vehicles, traffic flow, and road conditions. The collected videos also include stationary vehicles, to allow validation of the stopped-vehicle detection method. The road conditions (e.g., motorways, city, country roads), directions, data capture timings, and camera views vary across the dataset, producing an annotated dataset with diversity. The dataset may have several uses, such as vehicle detection, vehicle identification, and stopped-vehicle detection on smart motorways and local roads (smart city applications), among many more.
The EU directives PSI and Inspire state that public data must be shared and the ITS directive requires open transport data. The purpose is to promote innovation in new services that society needs. But development is slow. There are many barriers to opening data. At the same time, finding and using open data is challenging. Knowledge is needed about how data should be opened to make it easy to find, understand and use the data.
The project has mapped barriers and success factors related to the use of open data, and tested various tools and solutions for publishing and using data. Based on this, advice has been drawn up on how open data should be published and used. The advice includes: the use of metadata, documentation, APIs, and licenses. The tips are summarized here: http://opendatalab.no/
Experiments have also been carried out with automatic annotation (metadata registration) and semantic search for open data. The results show that this can work well if the data sets have good documentation in natural language.
Data are freely available for downloading after 01.06.2020.
description: This is a report detailing over 2 months' worth of avian field work done by park service personnel on the Aniakchak coast. The report includes an annotated bird list, bald eagle nest cards, handwritten descriptions of efforts and logistics, and weather data. There are also field records correlating bird species with dates viewed.
Open Government Licence - Canada 2.0https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
The Northern Canada geodatabase contains a selection of the data from the Atlas of Canada Reference Map - Northern Canada / Nord du Canada (MCR 36). The geodatabase comprises two feature data sets (annotation and geometry), and the shaded relief. The annotation feature dataset comprises the annotation feature classes. All annotation feature classes were derived for MCR 36, and all text placements are based on the font type and size used for the reference map. The geometry feature dataset comprises data for: boundaries, roads, railways, airports, seaplane bases, ports, populated places, rivers, lakes, mines, oil/natural gas fields, hydroelectric generating stations, federal protected areas, ice shelves, the permanent polar sea ice limit, and the treeline. The geodatabase can be downloaded as feature data sets or as shapefiles.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Poribohon-BD is a vehicle dataset of 15 native vehicles of Bangladesh. The vehicles are: i) Bicycle, ii) Boat, iii) Bus, iv) Car, v) CNG, vi) Easy-bike, vii) Horse-cart, viii) Launch, ix) Leguna, x) Motorbike, xi) Rickshaw, xii) Tractor, xiii) Truck, xiv) Van, xv) Wheelbarrow. The dataset contains a total of 9,058 images with a high diversity of poses, angles, lighting conditions, weather conditions, and backgrounds. All of the images are in JPG format. The dataset also contains 9,058 image annotation files. These files state the exact positions of the objects, with labels, in the corresponding image. The annotation has been performed manually, and the annotated values are stored in XML files. The LabelImg tool by Tzuta Lin has been used to label the images. Moreover, data augmentation techniques have been applied to keep the number of images comparable across vehicle types. Human faces have also been blurred to maintain privacy and confidentiality. The data files are divided into 15 individual folders. Each folder contains images and annotation files of one vehicle type. The 16th folder, titled ‘Multi-class Vehicles’, contains images and annotation files of different types of vehicles. Poribohon-BD is compatible with various detection architectures such as YOLO, VGG-16, R-CNN, and DPM.
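As the annotations are LabelImg-style XML (Pascal VOC format), they can be parsed with the Python standard library. A minimal sketch follows; the annotation content is a made-up example, not actual Poribohon-BD data:

```python
import xml.etree.ElementTree as ET

# Hedged sketch of reading one LabelImg-style (Pascal VOC) annotation file.
# The XML below is an illustrative stand-in for a real annotation file.
xml_text = """
<annotation>
  <filename>rickshaw_001.jpg</filename>
  <object>
    <name>Rickshaw</name>
    <bndbox><xmin>34</xmin><ymin>50</ymin><xmax>210</xmax><ymax>300</ymax></bndbox>
  </object>
</annotation>
"""

root = ET.fromstring(xml_text)
for obj in root.iter("object"):
    label = obj.findtext("name")
    box = [int(obj.find("bndbox").findtext(t)) for t in ("xmin", "ymin", "xmax", "ymax")]
    print(label, box)  # e.g. Rickshaw [34, 50, 210, 300]
```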
https://www.wiseguyreports.com/pages/privacy-policy
BASE YEAR | 2024 |
HISTORICAL DATA | 2019 - 2024 |
REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
MARKET SIZE 2023 | 12.11(USD Billion) |
MARKET SIZE 2024 | 14.37(USD Billion) |
MARKET SIZE 2032 | 56.6(USD Billion) |
SEGMENTS COVERED | Annotation Type ,Application ,Deployment Mode ,Industry Vertical ,Regional |
COUNTRIES COVERED | North America, Europe, APAC, South America, MEA |
KEY MARKET DYNAMICS | 1. Rising Demand for AI-Driven Applications 2. Growing Adoption of Video Content 3. Advancements in Annotation Tools and Techniques 4. Increasing Focus on Data Quality 5. Government Initiatives and Regulations |
MARKET FORECAST UNITS | USD Billion |
KEY COMPANIES PROFILED | Lionbridge AI ,Scale AI ,Tagilo Inc. ,Labelbox ,Toloka ,Xilyxe ,Keymakr ,Wayfair ,CloudFactory ,Hive.ai (formerly SmartPixels) ,Dataloop ,Wide |
MARKET FORECAST PERIOD | 2025 - 2032 |
KEY MARKET OPPORTUNITIES | Automated data labeling Object detection and tracking AI model training |
COMPOUND ANNUAL GROWTH RATE (CAGR) | 18.69% (2025 - 2032) |
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
TAMPAR is a real-world dataset of parcel photos for tampering detection with annotations in COCO format. For details see our paper and for visual samples our project page. Features are:
Relevant computer vision tasks:
If you use this resource for scientific research, please consider citing our WACV 2024 paper "TAMPAR: Visual Tampering Detection for Parcel Logistics in Postal Supply Chains".
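Since the annotations are in COCO format, they can be read with the Python standard library alone. A minimal sketch follows; the JSON snippet is a made-up stand-in, not actual TAMPAR data:

```python
import json

# Minimal sketch of reading COCO-format annotations with the stdlib.
# The JSON below is illustrative only; real files are much larger.
coco_text = """
{
  "images": [{"id": 1, "file_name": "parcel_0001.jpg"}],
  "annotations": [{"id": 7, "image_id": 1, "category_id": 2,
                   "bbox": [12.0, 30.0, 100.0, 80.0]}],
  "categories": [{"id": 2, "name": "parcel"}]
}
"""

coco = json.loads(coco_text)
names = {c["id"]: c["name"] for c in coco["categories"]}
for ann in coco["annotations"]:
    x, y, w, h = ann["bbox"]                  # COCO boxes are [x, y, width, height]
    print(names[ann["category_id"]], (x, y, w, h))
```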
Synthetic dataset of over 13,000 images of damaged and intact parcels with full 2D and 3D annotations in the COCO format. For details see our paper and for visual samples our project page.
Relevant computer vision tasks:
The dataset is for academic research use only, since it uses resources with restrictive licenses.
For a detailed description of how the resources are used, we refer to our paper and project page.
Licenses of the resources in detail:
You can use our textureless models (i.e. the obj files) of damaged parcels under CC BY 4.0 (note that this does not apply to the textures).
If you use this resource for scientific research, please consider citing
@inproceedings{naumannParcel3DShapeReconstruction2023,
author = {Naumann, Alexander and Hertlein, Felix and D\"orr, Laura and Furmans, Kai},
title = {Parcel3D: Shape Reconstruction From Single RGB Images for Applications in Transportation Logistics},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2023},
pages = {4402-4412}
}
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Real-world dataset of ~400 images of cuboid-shaped parcels with full 2D and 3D annotations in the COCO format.
Relevant computer vision tasks:
For details, see our paper and project page.
If you use this resource for scientific research, please consider citing
@inproceedings{naumannScrapeCutPasteLearn2022,
title = {Scrape, Cut, Paste and Learn: Automated Dataset Generation Applied to Parcel Logistics},
author = {Naumann, Alexander and Hertlein, Felix and Zhou, Benchun and Dörr, Laura and Furmans, Kai},
booktitle = {{{IEEE Conference}} on {{Machine Learning}} and Applications ({{ICMLA}})},
year = {2022}
}
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The MotionMiners Miss-placement Dataset 𝑀𝑃1 is composed of recordings of seven subjects carrying out different intralogistics activities, using a sensor set-up of On-Body Devices (OBDs) for industrial applications. Here, the position and orientation of the OBDs change with respect to the recording-and-usage guidelines. The OBDs are labeled with respect to their expected location on the human body, namely 𝑂𝐵𝐷𝑅, 𝑂𝐵𝐷𝐿 and 𝑂𝐵𝐷𝑇 on the right arm, left arm, and frontal torso, respectively. Tab. 1 (see manuscript) presents the different miss-placement classes of the dataset. This dataset treats miss-placement as a classification problem; in addition, the 𝑀𝑃 dataset considers rotational miss-placements, which, in practitioners' experience, commonly appear during deployment. The 𝑀𝑃 dataset contains recordings of seven subjects performing six activities: Standing, Walking, Handling Centred, Handling Upwards, Handling Downwards, and an additional Synchronisation. Each subject carried out each activity under up to 15 different miss-placement situations (soon to be updated to 20 different miss-placement situations), including a correct set-up of the devices. The 𝑀𝑃 dataset is divided into two subsets, 𝑀𝑃_A and 𝑀𝑃_B. Each recording of a subject contains:
raw data of Acc, Gyr, and Mag in 3D for a certain number of samples, forming a matrix of size [Samples × 27]
annotated data of Acc, Gyr, and Mag in 3D for a certain number of samples, forming a matrix of size [Samples, Act class, [27 channels]]
for MP_B, the synchronized recording of the correct sensor set-up, so the matrix becomes [Samples, class, [27 channels of the miss-placed set-up], [27 channels of the correct set-up]]
the miss-placement annotations [Samples, Miss-placement class]
the activity annotations [Samples, activity class, [19 semantic attributes]]
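The [Samples × 27] layout can be understood as 3 OBDs × 3 sensor types (Acc, Gyr, Mag) × 3 axes = 27 channels per sample. A small Python sketch; note that this channel ordering is an illustrative assumption, not the documented ordering of the 𝑀𝑃 dataset:

```python
import numpy as np

# Sketch of the [Samples x 27] layout: 3 OBDs x 3 sensors (Acc, Gyr, Mag)
# x 3 axes = 27 channels per sample. The (devices, sensors, axes) ordering
# here is assumed for illustration; consult the dataset documentation.
samples = 50
data = np.zeros((samples, 27))

view = data.reshape(samples, 3, 3, 3)  # (samples, devices, sensors, axes)
print(view.shape)  # (50, 3, 3, 3)
```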
The semantic attributes are given following the paper: "LARa: Creating a Dataset for Human Activity Recognition in Logistics Using Semantic Attributes", Sensors 2020, DOI: 10.3390/s20154083. If you use this dataset for research, please cite the following paper: "Miss-placement Prediction of Multiple On-body Devices for Human Activity Recognition", DOI: 10.1145/3615834.3615838. For any questions about the dataset, please contact Fernando Moya Rueda at fernando.moya@motionminers.com.
Canada Base Map - Transportation (CBMT). This web mapping service provides a spatial reference context focused on transportation networks. It is particularly designed for use as a basemap in a web mapping application or a geographic information system (GIS). Access is free of charge under the terms of the following licence: Open Government Licence - Canada - http://ouvert.canada.ca/fr/licence-du-gouvernement-ouvert-canada. Its data source is the CanVec product, available via the Open Government site under the title Topographic Data of Canada - CanVec Series (https://ouvert.canada.ca/data/fr/dataset/8ba2aa2a-7bb9-4448-b4d7-f164409fe056)
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Functional annotation of gene ontology using microarray data.