21 datasets found
  1. Logistic Activity Recognition Challenge (LARa Version 02) – A Motion Capture...

    • zenodo.org
    zip
    Updated Jul 17, 2024
    + more versions
    Cite
    Friedrich Niemann; Christopher Reining; Fernando Moya Rueda; Hülya Bas; Erik Altermann; Nilah Ravi Nair; Janine Anika Steffens; Gernot A. Fink; Michael ten Hompel (2024). Logistic Activity Recognition Challenge (LARa Version 02) – A Motion Capture and Inertial Measurement Dataset [Dataset]. http://doi.org/10.5281/zenodo.5761276
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 17, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Friedrich Niemann; Christopher Reining; Fernando Moya Rueda; Hülya Bas; Erik Altermann; Nilah Ravi Nair; Janine Anika Steffens; Gernot A. Fink; Michael ten Hompel
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    LARa Version 02 is a freely accessible logistics dataset for human activity recognition. In the “Innovationlab Hybrid Services in Logistics” at TU Dortmund University, two picking scenarios and one packing scenario were recorded with 16 subjects using an optical marker-based Motion Capturing system (OMoCap), Inertial Measurement Units (IMUs), and an RGB camera. Each subject was recorded for one hour (960 minutes in total). All the given data have been labeled and categorised into eight activity classes and 19 binary coarse-semantic descriptions, also called attributes. In total, the dataset contains 221 unique attribute representations.

    You can find the latest version of the annotation tool here: https://github.com/wilfer9008/Annotation_Tool_LARa

    Upgrade:

    • Subject 15 and 16 added
    • OMoCap raw data added (c3d, csv)
    • Second IMU set added (MotionMiners Sensors)
    • OMoCap data: file names from subject 01 to subject 06 corrected
    • OMoCap data: additional annotated data added
    • OMoCap and IMU data (Mbientlab and MotionMiners Sensors): Annotation errors corrected
    • OMoCap Networks added (all for a window size of 200 frames (1 sec.); see the windowing sketch after this list)
      • tCNN_classes
      • tCNN-IMU_classes
      • tCNN_attrib
      • tCNN-IMU_attrib
    • Mbientlab Networks added (all for Window Size of 100 frames (1sec.))
      • tCNN_classes
      • tCNN-IMU_classes
      • tCNN_attrib
      • tCNN-IMU_attrib
    • Protocol extended (now README file)
    • List of unique attribute representations added (csv)
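
    The 200-frame and 100-frame window sizes above imply sampling rates of roughly 200 Hz for the OMoCap data and 100 Hz for the Mbientlab IMUs. As a minimal sketch of how such fixed-size windows can be cut from a continuous recording (the channel counts and the non-overlapping stride below are illustrative placeholders, not values taken from the LARa protocol):

    import numpy as np

    def sliding_windows(recording: np.ndarray, window_size: int, stride: int) -> np.ndarray:
        """Cut a [samples, channels] recording into [windows, window_size, channels]."""
        starts = range(0, recording.shape[0] - window_size + 1, stride)
        return np.stack([recording[s:s + window_size] for s in starts])

    # OMoCap networks expect 200-frame (1 s) windows, Mbientlab networks 100-frame (1 s) windows.
    omocap_windows = sliding_windows(np.zeros((2000, 126)), window_size=200, stride=200)  # 126 channels: placeholder
    imu_windows = sliding_windows(np.zeros((1000, 30)), window_size=100, stride=100)      # 30 channels: placeholder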

    If you use this dataset for research, please cite the following paper: “LARa: Creating a Dataset for Human Activity Recognition in Logistics Using Semantic Attributes”, Sensors 2020, DOI: 10.3390/s20154083.

    If you use the Mbientlab Networks, please cite the following paper: “From Human Pose to On-Body Devices for Human-Activity Recognition”, 25th International Conference on Pattern Recognition (ICPR), 2021, DOI: 10.1109/ICPR48806.2021.9412283.

    If you have any questions about the dataset, please contact friedrich.niemann@tu-dortmund.de.

  2. Global Image Annotation Service Market Research Report: By Service Type...

    • wiseguyreports.com
    Updated Jul 23, 2024
    Cite
    Wiseguy Research Consultants Pvt Ltd (2024). Global Image Annotation Service Market Research Report: By Service Type (Data Annotation, Image Enhancement, Image Segmentation, Object Detection, Image Classification), By Application (Automotive, Healthcare, Retail, Agriculture, Manufacturing), By Technology (Machine Learning, Deep Learning, Computer Vision, Natural Language Processing, Artificial Intelligence), By End-User Industry (E-commerce, Media and Entertainment, IT and Telecom, Transportation and Logistics, Education) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2032. [Dataset]. https://www.wiseguyreports.com/reports/image-annotation-service-market
    Explore at:
    Dataset updated
    Jul 23, 2024
    Dataset authored and provided by
    Wiseguy Research Consultants Pvt Ltd
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Jan 7, 2024
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2024
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2023: 5.22 (USD Billion)
    MARKET SIZE 2024: 5.9 (USD Billion)
    MARKET SIZE 2032: 15.7 (USD Billion)
    SEGMENTS COVERED: Service Type, Application, Technology, End-User Industry, Regional
    COUNTRIES COVERED: North America, Europe, APAC, South America, MEA
    KEY MARKET DYNAMICS: AI and ML advancements; self-driving car technology; growing healthcare applications; increasing image content; automation and efficiency
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: Scale AI, Anolytics, Sama, Hive, Keymakr, Mighty AI, Labelbox, SuperAnnotate, TaskUs, Veritone, Cogito Tech, CloudFactory, Appen, Figure Eight, Lionbridge AI
    MARKET FORECAST PERIOD: 2024 - 2032
    KEY MARKET OPPORTUNITIES: 1. Advancements in AI and ML; 2. Rising demand from e-commerce; 3. Growth in autonomous vehicles; 4. Increasing focus on data privacy; 5. Emergence of cloud-based annotation tools
    COMPOUND ANNUAL GROWTH RATE (CAGR): 13.01% (2024 - 2032)
  3. Data from: OpenPack: Public multi-modal dataset for packaging work...

    • zenodo.org
    • explore.openaire.eu
    zip
    Updated Nov 16, 2023
    + more versions
    Cite
    Naoya Yoshimura; Jaime Morales; Takuya Maekawa (2023). OpenPack: Public multi-modal dataset for packaging work recognition in logistics domain [Dataset]. http://doi.org/10.5281/zenodo.8145223
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 16, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Naoya Yoshimura; Jaime Morales; Takuya Maekawa
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    OpenPack is an open-access logistics dataset for human activity recognition, containing human movement and package information from 10 subjects in four scenarios. The human movement information comprises three types of data: acceleration, physiological, and depth-sensing. The package information includes the size and number of items in each packaging job.

    In the "Humanware laboratory" at IST Osaka University, with the supervision of industrial engineers, an experiment to mimic logistic center labor was designed. Workers with previous packaging experience performed a set of packaging tasks according to an instruction manual from a real-life logistics center. During the different scenarios, subjects were recorded while performing packing operations using Lidar, Kinect, and Realsense depth sensors while also wearing 4 IMU devices and 2 Empatica E4 wearable sensors. Besides sensor data, this dataset contains timestamp information collected from the handy terminal used to register product, packet, and address label codes as well as package details that can be useful to relate operations to specific packages.

    The four scenarios are: sequential packing, mixed items collection, pre-ordered items, and time-sensitive stressors. Each subject performed 20 packing jobs in each of 5 work sessions, for a total of 100 packing jobs. Approximately 50 hours of packaging operations have been labeled into 10 global operation classes and 16 sub-action classes. Action classes are not tied to a single operation, but some appear in only one or two operations.

    Tutorial Dataset -> Preprocessed Dataset (IMU with Operation Labels)

    In this repository (Full Dataset), data and label files are stored separately, and we have received many comments that combining them was difficult. Therefore, for tutorial purposes, we have created a number of CSV files that combine the four IMUs' sensor data with the operation labels. These files are included in this version as "preprocessed-IMU-with-operation-labels.zip".
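
    As a minimal sketch of reading one of these tutorial CSVs, assuming the archive unpacks to per-session files with one operation-label column alongside the IMU channels (the file name and column names below are hypothetical; check the files in the zip for the actual schema):

    import pandas as pd

    # Hypothetical path and schema for one preprocessed session file.
    df = pd.read_csv("preprocessed-IMU-with-operation-labels/U0101-S0100.csv")
    y = df["operation"].to_numpy()                 # assumed label column: one operation per sample
    X = df.drop(columns=["operation"]).to_numpy()  # remaining columns: the four IMUs' sensor channels
    print(X.shape, y.shape)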

    NOTE: Please be aware some operation labels have been slightly changed from those on version (v0.3.2) to correct annotation errors.

    Work is continuously being done to update and improve this dataset. When downloading and using this dataset, please verify that your copy is up to date with the latest release. The latest release [1.0.0] was uploaded on 14/07/2022. You can find information on how to use this dataset at: https://open-pack.github.io/

    We hosted an activity recognition competition using this dataset (OpenPack v0.3.x), with awards presented at a PerCom 2023 workshop. The task was very simple: recognize 10 work operations from the OpenPack dataset. You can refer to this website for coding materials relevant to this dataset: https://open-pack.github.io/challenge2022

  4. Data from: UA_L-DoTT: University of Alabama's Large Dataset of Trains and...

    • data.mendeley.com
    Updated Feb 17, 2022
    + more versions
    Cite
    Maxwell Eastepp (2022). UA_L-DoTT: University of Alabama's Large Dataset of Trains and Trucks [Dataset]. http://doi.org/10.17632/982jbmh5h9.1
    Explore at:
    Dataset updated
    Feb 17, 2022
    Authors
    Maxwell Eastepp
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0), https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    UA_L-DoTT (University of Alabama's Large Dataset of Trains and Trucks) is a collection of camera images and 3D LiDAR point cloud scans from five different data sites. Four of the data sites targeted trains on railways and the last targeted trucks on a four-lane highway. Low-light conditions were present at one of the data sites, showcasing differences between the individual sensors' data. The final data site utilized a mobile platform, which created a large variety of viewpoints in images and point clouds. The dataset consists of 93,397 raw images, 11,415 corresponding labeled text files, 354,334 raw point clouds, 77,860 corresponding labeled point clouds, and 33 timestamp files. These timestamps correlate images to point cloud scans via POSIX time. The data was collected with a sensor suite consisting of five different LiDAR sensors and a camera, providing various viewpoints and features of the same targets due to the variance in the sensors' operational characteristics. The inclusion of both raw and labeled data allows users to get started immediately with the labeled subset, or to label additional raw data as needed. This large dataset is beneficial to any researcher interested in machine learning using cameras, LiDARs, or both.

    The full dataset is too large (~1 TB) to be uploaded to Mendeley Data. Please see the attached link for access to the full dataset.
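
    Since the 33 timestamp files correlate images to point cloud scans via POSIX time, pairing the two modalities amounts to a nearest-timestamp lookup. A sketch under the assumption that the timestamps have already been read into two float arrays (the loading step is left abstract):

    import numpy as np

    def match_nearest(image_times: np.ndarray, scan_times: np.ndarray) -> np.ndarray:
        """For each image timestamp, return the index of the nearest scan timestamp."""
        order = np.argsort(scan_times)
        sorted_times = scan_times[order]
        idx = np.clip(np.searchsorted(sorted_times, image_times), 1, len(sorted_times) - 1)
        left, right = sorted_times[idx - 1], sorted_times[idx]
        nearest = np.where(image_times - left <= right - image_times, idx - 1, idx)
        return order[nearest]  # indices into the original, unsorted scan_times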

  5. Logistic Activity Recognition Challenge (LARa Version 03) – A Motion Capture...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jul 11, 2024
    + more versions
    Cite
    Friedrich Niemann; Christopher Reining; Fernando Moya Rueda; Nilah Ravi Nair; Philipp Oberdiek; Hülya Bas; Raphael Spiekermann; Erik Altermann; Janine Anika Steffens; Gernot A. Fink; Michael ten Hompel (2024). Logistic Activity Recognition Challenge (LARa Version 03) – A Motion Capture and Inertial Measurement Dataset [Dataset]. http://doi.org/10.5281/zenodo.8189341
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 11, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Friedrich Niemann; Christopher Reining; Fernando Moya Rueda; Nilah Ravi Nair; Philipp Oberdiek; Hülya Bas; Raphael Spiekermann; Erik Altermann; Janine Anika Steffens; Gernot A. Fink; Michael ten Hompel
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    LARa Version 03 is a freely accessible logistics dataset for human activity recognition. In the “Innovationlab Hybrid Services in Logistics” at TU Dortmund University, two picking scenarios and one packing scenario were recorded with 16 subjects using an optical marker-based Motion Capturing system (OMoCap), Inertial Measurement Units (IMUs), and an RGB camera. Each subject was recorded for one hour (960 minutes in total). All the given data have been labelled and categorised into eight activity classes and 19 binary coarse-semantic descriptions, also called attributes. In total, the dataset contains 221 unique attribute representations.

    The dataset was created according to the guidelines of the following paper: “A Tutorial on Dataset Creation for Sensor-based Human Activity Recognition”, PerCom Workshops, 2023, DOI: 10.1109/PerComWorkshops56833.2023.10150401.

    LARa Version 03 contains a new annotation tool for OMoCap and RGB videos: the Sequence Attribute Retrieval Annotator (SARA). SARA was developed based on the LARa Version 02 annotation tool; it adds desirable features and attempts to overcome limitations found in that tool. Furthermore, a few features were included based on an explorative study of previously developed annotation tools, see journal. In alignment with the LARa annotation tool, SARA focuses on OMoCap and video annotations. Note, however, that SARA is not intended to be a video annotation tool with features such as subject tracking and multiple-subject annotation; here, the video is considered a supporting input to the OMoCap annotation. For pure video-based multiple-human activity annotation, including subject tracking, segmentation, and pose estimation, we recommend other tools. There are different ways of installing the annotation tool: compiled binaries (executable files) for Windows and Mac can be downloaded directly from this record, and Python users can install the tool from PyPI (https://pypi.org/project/annotation-tool/): “pip install annotation-tool”. For more information, please refer to the “Annotation Tool - Installation and User Manual”.

    Upgrade:

    • Annotation tool (SARA) added (for Windows and MacOS, including an installation and user manual)
    • Neural Networks updated (can be used with the annotation tool)
    • OMoCap data:
      • Annotation errors corrected
      • Annotations reformatted, fitting the SARA annotation tool
      • “additional annotated data” extended
      • “Markers_Exports” added
    • IMU data (MbientLab and MotionMiners Sensors)
      • Annotation errors corrected
    • README file (protocol) updated and extended

    If you use this dataset for research, please cite the following paper: “LARa: Creating a Dataset for Human Activity Recognition in Logistics Using Semantic Attributes”, Sensors 2020, DOI: 10.3390/s20154083.

    If you use the Mbientlab Networks, please cite the following paper: “From Human Pose to On-Body Devices for Human-Activity Recognition”, 25th International Conference on Pattern Recognition (ICPR), 2021, DOI: 10.1109/ICPR48806.2021.9412283.

    For any questions about the dataset, please contact Friedrich Niemann at friedrich.niemann@tu-dortmund.de.

  6. NOTATION C.O. PORT LOGISTICS GROUP|Full export Customs Data...

    • tradeindata.com
    Updated Apr 25, 2016
    Cite
    tradeindata (2016). NOTATION C.O. PORT LOGISTICS GROUP|Full export Customs Data Records|tradeindata [Dataset]. https://www.tradeindata.com/supplier_detail/?id=c413b168ea8aad417831bbc4abbd708b
    Explore at:
    Dataset updated
    Apr 25, 2016
    Dataset authored and provided by
    tradeindata
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Customs records are available for NOTATION C.O. PORT LOGISTICS GROUP. Learn about its importers, supply capabilities, and the countries to which it supplies goods.

  7. Sensor-based Pallet Activity Recognition in Logistics (SPARL Version 2) - A...

    • data.niaid.nih.gov
    Updated Nov 18, 2024
    Cite
    Kuhlmann, Jean Lenard (2024). Sensor-based Pallet Activity Recognition in Logistics (SPARL Version 2) - A multi-modal Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_11280958
    Explore at:
    Dataset updated
    Nov 18, 2024
    Dataset provided by
    Reining, Christopher
    Schorning, Kirsten
    Kirchheim, Alice
    Kuhlmann, Jean Lenard
    Brandt, Marc Julian
    Olivier, Marie-Claire
    Franke, Sven
    Bommert, Andrea
    Roidl, Moritz
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    SPARL is a freely accessible data set for sensor-based activity recognition of pallets in logistics. The data set consists of 20 recordings from three scenarios. A description of the scenarios can be found in the protocol file.

    Four different sensors were used simultaneously for all recordings:

    • MSR Electronics MSR 145: sampling rate 50 Hz
    • MbientLab MetaMotionS: sampling rate 100 Hz
    • Kistler KiDaQ Module 5512A: sampling rate 100 kHz (the raw data is also downsampled to 5 kHz and 20 kHz for easier processing)
    • Holybro flight controller PX4FMU: the board uses two accelerometers and two gyroscopes, all with a sampling rate of 1000 Hz
      • Accelerometer 1: InvenSense MPU6000
      • Accelerometer 2: STMicroelectronics LSM303D
      • Gyroscope 1: InvenSense MPU6000
      • Gyroscope 2: STMicroelectronics L3GD20

    The recordings were accompanied by three Logitech Mevo Start cameras; all camera recordings are included in the data set in anonymised form.

    The videos were annotated frame by frame by one person. For this purpose, the annotation tool SARA was used (see the LARa Version 03 entry above). The JSON schema used for annotation is also included in the SPARL dataset. The R code used in our evaluation can be found on GitHub.

    If you have any questions about the dataset, please contact: sven.franke@tu-dortmund.de

  8. 10K+ Package Images | AI Training Data | Annotated imagery data for AI |...

    • datarade.ai
    Updated Sep 6, 2018
    Cite
    Data Seeds (2018). 10K+ Package Images | AI Training Data | Annotated imagery data for AI | Object & Scene Detection | Global Coverage [Dataset]. https://datarade.ai/data-providers/data-seeds/data-products/10k-package-images-ai-training-data-annotated-imagery-da-data-seeds
    Explore at:
    Available download formats: .bin, .csv, .json, .sql, .txt, .xls, .xml
    Dataset updated
    Sep 6, 2018
    Dataset authored and provided by
    Data Seeds
    Area covered
    Guadeloupe, Maldives, Finland, Sierra Leone, Mali, Holy See, Slovenia, Guinea-Bissau, Equatorial Guinea, Malawi
    Description

    This dataset features over 10,000 high-quality images of packages sourced from photographers worldwide. Designed to support AI and machine learning applications, it provides a diverse and richly annotated collection of package imagery.

    Key Features: 1. Comprehensive Metadata: The dataset includes full EXIF data, detailing camera settings such as aperture, ISO, shutter speed, and focal length. Additionally, each image is pre-annotated with object and scene detection metadata, making it ideal for tasks like classification, detection, and segmentation. Popularity metrics, derived from engagement on our proprietary platform, are also included. (A sketch of reading these EXIF fields follows this feature list.)

    2. Unique Sourcing Capabilities: The images are collected through a proprietary gamified platform for photographers. Competitions focused on package photography ensure fresh, relevant, and high-quality submissions. Custom datasets can be sourced on-demand within 72 hours, allowing for specific requirements such as packaging types (e.g., boxes, envelopes, branded parcels) or environmental settings (e.g., in transit, on doorsteps, in warehouses) to be met efficiently.

    3. Global Diversity: Photographs have been sourced from contributors in over 100 countries, ensuring a wide variety of packaging designs, shipping labels, languages, and handling conditions. The images cover diverse contexts, including retail shelves, delivery trucks, homes, and distribution centers, offering a comprehensive view of real-world packaging scenarios.

    4. High-Quality Imagery: The dataset includes images with resolutions ranging from standard to high-definition to meet the needs of various projects. Both professional and amateur photography styles are represented, offering a mix of artistic and functional perspectives suitable for a variety of applications.

    5. Popularity Scores: Each image is assigned a popularity score based on its performance in GuruShots competitions. This unique metric reflects how well the image resonates with a global audience, offering an additional layer of insight for AI models focused on user preferences or engagement trends.

    6. AI-Ready Design: This dataset is optimized for AI applications, making it ideal for training models in tasks such as package recognition, logistics automation, label detection, and condition analysis. It is compatible with a wide range of machine learning frameworks and workflows, ensuring seamless integration into your projects.

    7. Licensing & Compliance: The dataset complies fully with data privacy regulations and offers transparent licensing for both commercial and academic use.
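
    The EXIF fields called out above (aperture, ISO, shutter speed, focal length) live in the standard Exif sub-IFD, so a minimal reading sketch with Pillow looks like the following ("package.jpg" is a placeholder file name):

    from PIL import Image
    from PIL.ExifTags import TAGS

    exif = Image.open("package.jpg").getexif()
    tags = dict(exif) | dict(exif.get_ifd(0x8769))  # 0x8769 = Exif sub-IFD holding camera settings
    readable = {TAGS.get(tag_id, tag_id): value for tag_id, value in tags.items()}
    for field in ("FNumber", "ISOSpeedRatings", "ExposureTime", "FocalLength"):
        print(field, readable.get(field))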

    Use Cases: 1. Training computer vision systems for package identification and tracking. 2. Enhancing logistics and supply chain AI models with real-world packaging visuals. 3. Supporting robotics and automation workflows in warehousing and delivery environments. 4. Developing datasets for augmented reality, retail shelf analysis, or smart delivery applications.

    This dataset offers a comprehensive, diverse, and high-quality resource for training AI and ML models, tailored to deliver exceptional performance for your projects. Customizations are available to suit specific project needs. Contact us to learn more!

  9. Multi-instance vehicle dataset with annotations captured in outdoor diverse...

    • data.mendeley.com
    Updated Mar 7, 2023
    + more versions
    Cite
    Wasiq Khan (2023). Multi-instance vehicle dataset with annotations captured in outdoor diverse settings [Dataset]. http://doi.org/10.17632/5d8k5bkb93.2
    Explore at:
    Dataset updated
    Mar 7, 2023
    Authors
    Wasiq Khan
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We collected and annotated a dataset containing 105,544 annotated vehicle instances from 24,700 image frames within seven different videos, sourced online under Creative Commons licences. The video frames were annotated using the DarkLabel tool. In the interest of reusability and generalisation of deep learning models, we considered the diversity within the collected dataset. This diversity includes changes in lighting across the videos, as well as other factors such as weather conditions, angle of observation, varying speeds of the moving vehicles, traffic flow, and road conditions. The collected videos also include stationary vehicles, to allow validation of stopped-vehicle detection methods. The road conditions (e.g., motorways, city, country roads), directions, data capture timings and camera views vary across the dataset, producing an annotated dataset with diversity. The dataset has several uses, such as vehicle detection, vehicle identification, and stopped-vehicle detection on smart motorways and local roads (smart city applications), among others.

  10. Open Transport Data, 2018

    • datacatalogue.cessda.eu
    Updated Nov 27, 2024
    Cite
    Natvig, Marit Kjøsnes (2024). Open Transport Data, 2018 [Dataset]. http://doi.org/10.18712/NSD-NSD2753-V3
    Explore at:
    Dataset updated
    Nov 27, 2024
    Dataset provided by
    SINTEF
    Authors
    Natvig, Marit Kjøsnes
    Time period covered
    Jan 4, 2017 - Oct 1, 2018
    Variables measured
    OrganizationOrInstitution
    Description

    The EU directives PSI and INSPIRE state that public data must be shared, and the ITS directive requires open transport data. The purpose is to promote innovation in new services that society needs, but development is slow: there are many barriers to opening data, and at the same time, finding and using open data is challenging. Knowledge is needed about how data should be opened to make it easy to find, understand and use.

    The project has mapped barriers and success factors related to the use of open data, and has tested various tools and solutions for publishing and using data. Based on this, advice has been drawn up on how open data should be published and used, covering the use of metadata, documentation, APIs and licenses. The tips are summarized here: http://opendatalab.no/

    Experiments have also been carried out with automatic annotation (metadata registration) and semantic search for open data. The results show that this can work well if the data sets have good documentation in natural language.

    Data are freely available for download after 01.06.2020.

  11. Annotated Bird Observations - Aniakchak Coast - June 17-Aug 30, 1988.

    • datadiscoverystudio.org
    Updated Jun 8, 2018
    Cite
    (2018). Annotated Bird Observations - Aniakchak Coast - June 17-Aug 30, 1988. [Dataset]. http://datadiscoverystudio.org/geoportal/rest/metadata/item/dc7b1644cb884a928e48b1886b6bb174/html
    Explore at:
    Dataset updated
    Jun 8, 2018
    Description

    This is a report detailing over two months of avian field work done by park service personnel on the Aniakchak coast. The report includes an annotated bird list, bald eagle nest cards, handwritten descriptions of efforts and logistics, and weather data. There are also field records correlating bird species with dates viewed.

  12. Atlas of Canada, Northern Geodatabase (GDB)

    • open.canada.ca
    • gimi9.com
    • +2more
    zip
    Updated Mar 14, 2022
    + more versions
    Cite
    Natural Resources Canada (2022). Atlas of Canada, Northern Geodatabase (GDB) [Dataset]. https://open.canada.ca/data/en/dataset/702ebdea-39ff-50e4-ab5f-de1150d16b7a
    Explore at:
    zipAvailable download formats
    Dataset updated
    Mar 14, 2022
    Dataset provided by
    Natural Resources Canada
    License

    Open Government Licence - Canada 2.0, https://open.canada.ca/en/open-government-licence-canada
    License information was derived automatically

    Area covered
    Canada
    Description

    The Northern Canada geodatabase contains a selection of the data from the Atlas of Canada reference map Northern Canada / Nord du Canada (MCR 36). The geodatabase comprises two feature datasets (annotation and geometry) and the shaded relief. The annotation feature dataset comprises the annotation feature classes; all were derived for MCR 36, and all text placements are based on the font type and size used for the reference map. The geometry feature dataset comprises data for: boundaries, roads, railways, airports, seaplane bases, ports, populated places, rivers, lakes, mines, oil/natural gas fields, hydroelectric generating stations, federal protected areas, ice shelves, the permanent polar sea ice limit and the treeline. The geodatabase can be downloaded as feature datasets or as shapefiles.

  13. Poribohon-BD

    • data.mendeley.com
    Updated Oct 1, 2020
    + more versions
    Cite
    Shaira Tabassum (2020). Poribohon-BD [Dataset]. http://doi.org/10.17632/pwyyg8zmk5.2
    Explore at:
    Dataset updated
    Oct 1, 2020
    Authors
    Shaira Tabassum
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Poribohon-BD is a vehicle dataset of 15 native vehicles of Bangladesh. The vehicles are: i) Bicycle, ii) Boat, iii) Bus, iv) Car, v) CNG, vi) Easy-bike, vii) Horse-cart, viii) Launch, ix) Leguna, x) Motorbike, xi) Rickshaw, xii) Tractor, xiii) Truck, xiv) Van, xv) Wheelbarrow. The dataset contains a total of 9058 images with a high diversity of poses, angles, lighting conditions, weather conditions, and backgrounds. All images are in JPG format. The dataset also contains 9058 annotation files, which state the exact positions and labels of the objects in the corresponding images. The annotation was performed manually, and the annotated values are stored in XML files; the LabelImg tool by Tzuta Lin was used to label the images. Moreover, data augmentation techniques have been applied to keep the number of images comparable across vehicle types. Human faces have been blurred to maintain privacy and confidentiality. The data files are divided into 15 individual folders, each containing the images and annotation files of one vehicle type; a 16th folder titled ‘Multi-class Vehicles’ contains images and annotation files with several vehicle types. Poribohon-BD is compatible with various CNN architectures such as YOLO, VGG-16, R-CNN, and DPM.
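
    Since LabelImg writes Pascal-VOC-style XML, each annotation file can be read with the Python standard library; a minimal sketch (the file path below is a hypothetical example, not a path from the dataset):

    import xml.etree.ElementTree as ET

    root = ET.parse("Bus/bus_0001.xml").getroot()  # hypothetical path inside the dataset
    for obj in root.iter("object"):
        label = obj.findtext("name")
        box = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (int(float(box.findtext(tag)))
                                  for tag in ("xmin", "ymin", "xmax", "ymax"))
        print(label, (xmin, ymin, xmax, ymax))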

  14. Global Video Annotation Service Market Research Report: By Annotation Type...

    • wiseguyreports.com
    Updated Aug 10, 2024
    Cite
    Wiseguy Research Consultants Pvt Ltd (2024). Global Video Annotation Service Market Research Report: By Annotation Type (Image Annotation, Video Annotation, Text Annotation, Audio Annotation), By Application (Training Artificial Intelligence (AI), Object Detection and Recognition, Data Analytics, Medical Imaging, Security and Surveillance), By Deployment Mode (On-premise, Cloud-based), By Industry Vertical (Transportation and Logistics, Healthcare, Retail, Media and Entertainment, Manufacturing) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2032. [Dataset]. https://www.wiseguyreports.com/cn/reports/video-annotation-service-market
    Explore at:
    Dataset updated
    Aug 10, 2024
    Dataset authored and provided by
    Wiseguy Research Consultants Pvt Ltd
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Jan 8, 2024
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2024
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2023: 12.11 (USD Billion)
    MARKET SIZE 2024: 14.37 (USD Billion)
    MARKET SIZE 2032: 56.6 (USD Billion)
    SEGMENTS COVERED: Annotation Type, Application, Deployment Mode, Industry Vertical, Regional
    COUNTRIES COVERED: North America, Europe, APAC, South America, MEA
    KEY MARKET DYNAMICS: 1. Rising Demand for AI-Driven Applications; 2. Growing Adoption of Video Content; 3. Advancements in Annotation Tools and Techniques; 4. Increasing Focus on Data Quality; 5. Government Initiatives and Regulations
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: Lionbridge AI, Scale AI, Tagilo Inc., Labelbox, Toloka, Xilyxe, Keymakr, Wayfair, CloudFactory, Hive.ai (formerly SmartPixels), Dataloop, Wide
    MARKET FORECAST PERIOD: 2025 - 2032
    KEY MARKET OPPORTUNITIES: Automated data labeling; object detection and tracking; AI model training
    COMPOUND ANNUAL GROWTH RATE (CAGR): 18.69% (2025 - 2032)
  15. TAMPAR: Visual Tampering Detection for Parcels Logistics in Postal Supply...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Nov 13, 2023
    Cite
    Alexander Naumann; Felix Hertlein; Laura Dörr; Kai Furmans (2023). TAMPAR: Visual Tampering Detection for Parcels Logistics in Postal Supply Chains [Dataset]. http://doi.org/10.5281/zenodo.10057090
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 13, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Alexander Naumann; Felix Hertlein; Laura Dörr; Kai Furmans
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    TAMPAR is a real-world dataset of parcel photos for tampering detection, with annotations in COCO format; a loading sketch follows the task list below. For details see our paper and for visual samples our project page. Features are:

    • >900 annotated real-world images with >2,700 visible parcel side surfaces
    • 6 different tampering types
    • 6 different distortion strengths

    Relevant computer vision tasks:

    • bounding box detection
    • classification
    • instance segmentation
    • keypoint estimation
    • tampering detection and classification
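
    As the annotations follow the standard COCO JSON layout, they can be browsed with the standard library alone; a minimal sketch (the annotation file name is a placeholder for the one shipped in the archive):

    import json

    with open("annotations.json") as f:  # placeholder name; use the JSON file from the zip
        coco = json.load(f)

    categories = {c["id"]: c["name"] for c in coco["categories"]}
    for ann in coco["annotations"][:5]:
        # COCO bounding boxes are [x, y, width, height] in pixels
        print(ann["image_id"], categories[ann["category_id"]], ann["bbox"])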

    If you use this resource for scientific research, please consider citing our WACV 2024 paper "TAMPAR: Visual Tampering Detection for Parcel Logistics in Postal Supply Chains".

  16. Parcel3D - A Synthetic Dataset of Damaged and Intact Parcel Images with 2D...

    • zenodo.org
    • explore.openaire.eu
    • +1more
    zip
    Updated Jul 13, 2023
    Cite
    Alexander Naumann; Felix Hertlein; Laura Dörr; Kai Furmans (2023). Parcel3D - A Synthetic Dataset of Damaged and Intact Parcel Images with 2D and 3D Annotations [Dataset]. http://doi.org/10.5281/zenodo.8032204
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 13, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Alexander Naumann; Felix Hertlein; Laura Dörr; Kai Furmans
    Description

    Synthetic dataset of over 13,000 images of damaged and intact parcels with full 2D and 3D annotations in the COCO format. For details see our paper and for visual samples our project page.


    Relevant computer vision tasks:

    • bounding box detection
    • classification
    • instance segmentation
    • keypoint estimation
    • 3D bounding box estimation
    • 3D voxel reconstruction
    • 3D reconstruction

    The dataset is for academic research use only, since it uses resources with restrictive licenses.
    For a detailed description of how the resources are used, we refer to our paper and project page.

    Licenses of the resources in detail:

    You can use our textureless models (i.e. the obj files) of damaged parcels under CC BY 4.0 (note that this does not apply to the textures).

    If you use this resource for scientific research, please consider citing

    @inproceedings{naumannParcel3DShapeReconstruction2023,
      author  = {Naumann, Alexander and Hertlein, Felix and D\"orr, Laura and Furmans, Kai},
      title   = {Parcel3D: Shape Reconstruction From Single RGB Images for Applications in Transportation Logistics},
      booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
      month   = {June},
      year   = {2023},
      pages   = {4402-4412}
    }
  17. Parcel2D Real - A real-world image dataset of cuboid-shaped parcels with 2D...

    • zenodo.org
    zip
    Updated Jul 13, 2023
    Cite
    Alexander Naumann; Felix Hertlein; Benchun Zhou; Laura Dörr; Kai Furmans (2023). Parcel2D Real - A real-world image dataset of cuboid-shaped parcels with 2D and 3D annotations [Dataset]. http://doi.org/10.5281/zenodo.8031971
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 13, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Alexander Naumann; Felix Hertlein; Benchun Zhou; Laura Dörr; Kai Furmans
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Real-world dataset of ~400 images of cuboid-shaped parcels with full 2D and 3D annotations in the COCO format.

    Relevant computer vision tasks:

    • bounding box detection
    • instance segmentation
    • keypoint estimation
    • 3D bounding box estimation
    • 3D voxel reconstruction (.binvox files)
    • 3D reconstruction (.obj files)

    For details, see our paper and project page.

    If you use this resource for scientific research, please consider citing

    @inproceedings{naumannScrapeCutPasteLearn2022,
      title    = {Scrape, Cut, Paste and Learn: Automated Dataset Generation Applied to Parcel Logistics},
      author    = {Naumann, Alexander and Hertlein, Felix and Zhou, Benchun and Dörr, Laura and Furmans, Kai},
      booktitle  = {{{IEEE Conference}} on {{Machine Learning}} and Applications ({{ICMLA}})},
      date     = 2022
    }

  18. MotionMiners Missplacement Dataset

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 24, 2024
    Cite
    Dönnebrink, Robin (2024). MotionMiners Missplacement Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8272090
    Explore at:
    Dataset updated
    Jan 24, 2024
    Dataset provided by
    Moya Rueda, Fernando
    Dönnebrink, Robin
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The MotionMiners Miss-placement Dataset (MP1) is composed of recordings of seven subjects carrying out different intralogistics activities, using a sensor set-up of On-Body Devices (OBDs) for industrial applications. Here, the position and orientation of the OBDs change with respect to the recording-and-usage guidelines. The OBDs are labelled according to their expected location on the human body, namely OBD_R, OBD_L and OBD_T for the right arm, left arm, and frontal torso. Tab. 1 (see manuscript) presents the different miss-placement classes of the dataset. This dataset treats miss-placement as a classification problem; in addition, the MP dataset considers rotational miss-placements, which, in practitioners' experience, commonly appear in deployment. The MP dataset contains recordings of seven subjects performing six activities: Standing, Walking, Handling Centred, Handling Upwards, Handling Downwards, and an additional Synchronisation. Each subject carried out each activity under up to 15 different miss-placement situations (soon to be updated to 20), including a correct set-up of the devices. The MP dataset is divided into two subsets, MP_A and MP_B. Each recording of a subject contains:

    • raw data of Acc, Gyr, and Mag in 3D for a certain number of samples, giving a matrix of size [Samples × 27] (see the shape sketch after this list)
    • annotated data of Acc, Gyr, and Mag in 3D, giving a matrix of size [Samples, activity class, [27 channels]]
    • for MP_B, the synchronized recording of the correct sensor set-up, so the matrix becomes [Samples, class, [27 channels of the miss-placed set-up], [27 channels of the correct set-up]]
    • the miss-placement annotations [Samples, miss-placement class]
    • the activity annotations [Samples, activity class, [19 semantic attributes]]
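
    A shape sketch under an assumed channel layout: 27 channels would correspond to 3 OBDs × 3 sensor types (Acc, Gyr, Mag) × 3 axes, but the actual channel ordering is not documented in this description, so the reshape below is hypothetical; check the dataset's README/manuscript before relying on it.

    import numpy as np

    raw = np.zeros((5000, 27))             # [Samples × 27] raw matrix, as described above
    per_device = raw.reshape(-1, 3, 3, 3)  # assumed order: [Samples, OBD, sensor type, axis]
    obd_r_acc = per_device[:, 0, 0, :]     # e.g. right-arm accelerometer, if OBD_R comes first
    print(obd_r_acc.shape)                 # (5000, 3)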

    The semantic attributes are given following the paper: “LARa: Creating a Dataset for Human Activity Recognition in Logistics Using Semantic Attributes”, Sensors 2020, DOI: 10.3390/s20154083. If you use this dataset for research, please cite the following paper: “Miss-placement Prediction of Multiple On-body Devices for Human Activity Recognition”, DOI: 10.1145/3615834.3615838. For any questions about the dataset, please contact Fernando Moya Rueda at fernando.moya@motionminers.com.

  19. Canada Base Map - Transportation: annotations and geometry, Lambert conformal conic projection...

    • catalogue.arctic-sdi.org
    Updated Mar 25, 2021
    + more versions
    Cite
    (2021). Carte de base du Canada - Transport: annotations et géométrie, projections conique conforme de Lambert (ESPG: 3978) [Dataset]. https://catalogue.arctic-sdi.org/geonetwork/srv/search?keyword=geobase
    Explore at:
    Dataset updated
    Mar 25, 2021
    Area covered
    Canada
    Description

    Canada Base Map - Transportation (CBCT). This web mapping service provides a spatial reference context focused on transportation networks. It is particularly designed for use as a basemap in a web mapping application or a geographic information system (GIS). Access is free of charge under the terms of the following licence: Open Government Licence - Canada - http://ouvert.canada.ca/fr/licence-du-gouvernement-ouvert-canada. Its data source is the CanVec product, available via the Open Government site under the title Topographic Data of Canada - CanVec Series (https://ouvert.canada.ca/data/fr/dataset/8ba2aa2a-7bb9-4448-b4d7-f164409fe056).

  20. Functional annotation of gene ontology using microarray data.

    • plos.figshare.com
    xls
    Updated Jun 2, 2023
    Cite
    Tadashi Moro; Sachie Nakao; Hideaki Sumiyoshi; Takamasa Ishii; Masaki Miyazawa; Naoaki Ishii; Tadayuki Sato; Yumi Iida; Yoshinori Okada; Masayuki Tanaka; Hideki Hayashi; Satoshi Ueha; Kouji Matsushima; Yutaka Inagaki (2023). Functional annotation of gene ontology using microarray data. [Dataset]. http://doi.org/10.1371/journal.pone.0146592.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Tadashi Moro; Sachie Nakao; Hideaki Sumiyoshi; Takamasa Ishii; Masaki Miyazawa; Naoaki Ishii; Tadayuki Sato; Yumi Iida; Yoshinori Okada; Masayuki Tanaka; Hideki Hayashi; Satoshi Ueha; Kouji Matsushima; Yutaka Inagaki
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Functional annotation of gene ontology using microarray data.
