92 datasets found
1. Multi Object Tracking Dataset

    • universe.roboflow.com
    zip
    Updated Nov 18, 2023
    Cite
    Dronespace (2023). Multi Object Tracking Dataset [Dataset]. https://universe.roboflow.com/dronespace/multi-object-tracking-yemuq
Available download formats: zip
    Dataset updated
    Nov 18, 2023
    Dataset authored and provided by
    Dronespace
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
Cars, Bus, Van, Truck, Pedestrian Bounding Boxes
    Description

    Multi Object Tracking

    ## Overview
    
Multi Object Tracking is a dataset for object detection tasks - it contains Cars, Bus, Van, Truck, and Pedestrian annotations for 10,004 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
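For programmatic download, a minimal sketch using the `roboflow` pip package is shown below; the API key placeholder, version number, and export format are assumptions, not values confirmed by this listing.

```python
# Minimal sketch: download this dataset via the Roboflow Python package.
# The API key, version number, and export format below are placeholders.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("dronespace").project("multi-object-tracking-yemuq")
dataset = project.version(1).download("yolov8")  # version/format are assumptions
print(dataset.location)  # local path of the downloaded dataset
```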
    
      ## License
    
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
2. PersonPath22 Dataset

    • paperswithcode.com
    • registry.opendata.aws
    Updated Sep 5, 2024
    Cite
    (2024). PersonPath22 Dataset [Dataset]. https://paperswithcode.com/dataset/personpath22
    Dataset updated
    Sep 5, 2024
    Description

    PersonPath22 is a large-scale multi-person tracking dataset containing 236 videos captured mostly from static-mounted cameras, collected from sources where we were given the rights to redistribute the content and participants have given explicit consent. Each video has ground-truth annotations including both bounding boxes and tracklet-ids for all the persons in each frame.

3. Data from: Example videos for multi-object tracking

    • edmond.mpg.de
    mp4, rtf
    Updated Feb 21, 2025
    Cite
Angela Albi; Tristan Walter; Daniele Carlesso (2025). Example videos for multi-object tracking [Dataset]. http://doi.org/10.17617/3.7F5MGE
Available download formats: mp4 (3758655543), mp4 (970990356), rtf (5149), mp4 (6247567472), mp4 (4449562077), rtf (1574), mp4 (1205419348), mp4 (800638476), mp4 (3638708741)
    Dataset updated
    Feb 21, 2025
    Dataset provided by
    Edmond
    Authors
Angela Albi; Tristan Walter; Daniele Carlesso
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

This repository contains video data used both for the development of the TRex (https://trex.run/) tracking software and for teaching its use. The filenames follow the convention project_YYYYMMDD_N, where:

- project is the project name (e.g. hexbug). In the locusts video set, the suffix after the dash (-noqr, -qr, -mix) indicates whether individuals are tagged with ArUco markers.
- N is the number of filmed individuals.

Video Set 1: guppies
1. guppy_20200727_8.mp4: 6:40 minutes of 8 guppies (Poecilia reticulata). Originally recorded at 30 frames per second; playback in this file is set to 25 frames per second.

Video Set 2: hexbugs
2. hexbug_20250129_5.mp4: 2:47 minutes of 5 hexbugs. Originally recorded at 50 frames per second; playback is set to 30 frames per second.

Video Set 3: locusts
3. locusts-noqr_20250117_5.mp4: 1:35 minutes of 5 locusts (Schistocerca gregaria). Originally recorded at 5 frames per second; playback is set to 30 frames per second. Locusts were not tagged with an ArUco marker.
4. locusts-noqr_20250117_15.mp4: 2:32 minutes of 15 locusts. Originally recorded at 5 frames per second; playback is set to 30 frames per second. Locusts were not tagged with an ArUco marker.
5. locusts-noqr_20250206_5.mp4: 3:32 minutes of 5 locusts. Originally recorded at 5 frames per second; playback is set to 25 frames per second. Locusts were not tagged with an ArUco marker.
6. locusts-mix_20250206_10.mp4: 3:07 minutes of 10 locusts. Originally recorded at 5 frames per second; playback is set to 25 frames per second. Five of the ten locusts were tagged with an ArUco marker.
7. locusts-qr_20250206_15.mp4: 3:00 minutes of 15 locusts. Originally recorded at 5 frames per second; playback is set to 25 frames per second. All locusts were tagged with an ArUco marker.

The videos were collected in the facilities of the Department of Collective Behavior at the Max Planck Institute of Animal Behavior and the Centre for the Advanced Study of Collective Behaviour (CASCB) at the University of Konstanz, Konstanz, Germany. For more details, please read the ACKNOWLEDGMENTS and METHODS files.
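As an illustration of the filename convention above, here is a minimal parsing sketch; the regular expression itself is an assumption derived from the listed examples.

```python
import re

# Parses names like "locusts-mix_20250206_10.mp4" per the project_YYYYMMDD_N convention.
PATTERN = re.compile(
    r"^(?P<project>[A-Za-z]+)(?:-(?P<tag>noqr|qr|mix))?_(?P<date>\d{8})_(?P<n>\d+)\.mp4$"
)

for name in ["guppy_20200727_8.mp4", "locusts-mix_20250206_10.mp4"]:
    m = PATTERN.match(name)
    if m:
        print(m["project"], m["tag"], m["date"], int(m["n"]))
```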

  4. 4,001 People Single Object Multi-view Tracking Data

    • m.nexdata.ai
    Updated Oct 5, 2023
    Cite
    Nexdata (2023). 4,001 People Single Object Multi-view Tracking Data [Dataset]. https://m.nexdata.ai/datasets/computervision/1231
    Dataset updated
    Oct 5, 2023
    Dataset authored and provided by
    Nexdata
    Variables measured
    Device, Accuracy, Data size, Data format, Data diversity, Age distribution, Race distribution, Annotation content, Gender distribution, Collecting environment
    Description

4,001 People Single Object Multi-view Tracking Data. Collection sites include indoor and outdoor scenes (such as supermarkets, malls, and communities), and each subject appears in at least 7 cameras. The data diversity covers different ages, time periods, cameras, human body orientations and postures, and collection scenes. It can be used for computer vision tasks such as object detection and object tracking in multi-view scenes.

5. Office Person Tracking Dataset

    • universe.roboflow.com
    zip
    Updated Feb 19, 2025
    Cite
    Trial YOLO (2025). Office Person Tracking Dataset [Dataset]. https://universe.roboflow.com/trial-yolo/office-person-tracking/dataset/2
Available download formats: zip
    Dataset updated
    Feb 19, 2025
    Dataset authored and provided by
    Trial YOLO
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Person Bounding Boxes
    Description

    Office Person Tracking

    ## Overview
    
    Office Person Tracking is a dataset for object detection tasks - it contains Person annotations for 1,129 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  6. 212 People – 48,000 Images of Multi-person and Multi-view Tracking Data

    • m.nexdata.ai
    • nexdata.ai
    Updated Jun 21, 2024
    Cite
    Nexdata (2024). 212 People – 48,000 Images of Multi-person and Multi-view Tracking Data [Dataset]. https://m.nexdata.ai/datasets/computervision/1191?source=Github
    Dataset updated
    Jun 21, 2024
    Dataset authored and provided by
    Nexdata
    Variables measured
    Device, Accuracy, Data size, Data format, Data diversity, Annotation content, Collecting environment, Population distribution
    Description

212 People – 48,000 Images of Multi-person and Multi-view Tracking Data. The data includes males and females, with an age distribution from children to the elderly. The data diversity covers different age groups, shooting angles, and human body orientations and postures. For annotation, we adopted rectangular bounding box annotations on the human body. This dataset can be used for multiple object tracking and other tasks.

7. Drone-Person Tracking in Uniform Appearance Crowd (D-PTUAC)

    • figshare.com
    bin
    Updated Nov 21, 2023
    Cite
    Mohamad Alansari; Oussama Abdulhay; Sara Alansari; Sajid Javed; Abdulhadi Shoufan; Yahya Zweiri; Naoufel Werghi (2023). Drone-Person Tracking in Uniform Appearance Crowd (D-PTUAC) [Dataset]. http://doi.org/10.6084/m9.figshare.24590568.v2
Available download formats: bin
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    figshare
    Authors
    Mohamad Alansari; Oussama Abdulhay; Sara Alansari; Sajid Javed; Abdulhadi Shoufan; Yahya Zweiri; Naoufel Werghi
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Drone-person tracking in uniform appearance crowds poses unique challenges due to the difficulty in distinguishing individuals with similar attire and multi-scale variations. To address this issue and facilitate the development of effective tracking algorithms, we present a novel dataset named D-PTUAC (Drone-Person Tracking in Uniform Appearance Crowd). The dataset comprises 138 sequences totalling over 121K frames, each manually annotated with bounding boxes and attributes. During dataset creation, we carefully consider 17 challenging attributes encompassing a wide range of viewpoints and scene complexities. These attributes are annotated to facilitate the analysis of performance based on specific attributes. Extensive experiments are conducted using 44 state-of-the-art (SOTA) trackers, and the performance gap demonstrates the need for a dedicated end-to-end aerial visual object tracker that accounts for the inherent properties of aerial environments.

  8. MOT15 Challenge Dataset

    • kaggle.com
    Updated May 9, 2025
    Cite
    Md. Rasel sarker (2025). MOT15 Challenge Dataset [Dataset]. https://www.kaggle.com/datasets/mdraselsarker/mot15-challenge-dataset/code
Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    May 9, 2025
    Dataset provided by
Kaggle (http://kaggle.com/)
    Authors
    Md. Rasel sarker
    License

Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    The MOT AI Dataset is a high-quality, large-scale dataset designed for evaluating multi-object tracking algorithms, specifically targeting pedestrian tracking in challenging urban environments. This dataset includes multiple video sequences that contain high-resolution frames, along with ground-truth annotations for pedestrian bounding boxes, object IDs, and visibility over time. The dataset is annotated to capture various real-world challenges, including occlusions, crowded environments, and partial visibility, which makes it ideal for testing tracking performance in complex scenarios. The MOT AI dataset serves as a key benchmark for researchers developing algorithms in computer vision, deep learning, and multi-object tracking. It is widely used for evaluating the robustness and accuracy of tracking methods across diverse environments, ensuring reliable performance in practical applications.
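MOT15 sequences conventionally ship ground truth as per-sequence gt/gt.txt files in the MOTChallenge CSV layout (frame, id, bb_left, bb_top, bb_width, bb_height, conf, ...). Below is a minimal parsing sketch; the exact file layout in this Kaggle packaging is an assumption, so verify it against the downloaded files.

```python
import csv
from collections import defaultdict

# Group MOTChallenge-style ground truth by track id: id -> [(frame, box), ...]
tracks = defaultdict(list)
with open("gt/gt.txt", newline="") as f:  # path within a sequence folder (assumed)
    for row in csv.reader(f):
        frame, obj_id = int(row[0]), int(row[1])
        box = tuple(float(v) for v in row[2:6])  # bb_left, bb_top, bb_width, bb_height
        tracks[obj_id].append((frame, box))

print(f"{len(tracks)} annotated identities")
```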

9. DanceTrack Dataset

    • paperswithcode.com
    Updated Feb 27, 2022
    Cite
    Peize Sun; Jinkun Cao; Yi Jiang; Zehuan Yuan; Song Bai; Kris Kitani; Ping Luo (2022). DanceTrack Dataset [Dataset]. https://paperswithcode.com/dataset/dancetrack
    Dataset updated
    Feb 27, 2022
    Authors
    Peize Sun; Jinkun Cao; Yi Jiang; Zehuan Yuan; Song Bai; Kris Kitani; Ping Luo
    Description

A large-scale multi-object tracking dataset for human tracking featuring occlusion, frequent crossover, uniform appearance, and diverse body gestures. It is proposed to emphasize the importance of motion analysis in multi-object tracking, as opposed to the prevailing appearance-matching paradigm.

10. MPHOI-72 Dataset

    • paperswithcode.com
    • opendatalab.com
    Updated Jul 18, 2022
    Cite
    Tanqiu Qiao; Qianhui Men; Frederick W. B. Li; Yoshiki Kubotani; Shigeo Morishima; Hubert P. H. Shum (2022). MPHOI-72 Dataset [Dataset]. https://paperswithcode.com/dataset/mphoi-72
    Dataset updated
    Jul 18, 2022
    Authors
    Tanqiu Qiao; Qianhui Men; Frederick W. B. Li; Yoshiki Kubotani; Shigeo Morishima; Hubert P. H. Shum
    Description

MPHOI-72 is a multi-person human-object interaction dataset that can be used for a wide variety of HOI/activity recognition and pose estimation/object tracking tasks. The dataset is challenging due to the many body occlusions among the humans and objects. It consists of 72 videos captured from 3 different angles at 30 fps, with 26,383 frames in total and an average length of 12 seconds. It involves 5 humans performing in pairs, 6 object types, 3 activities and 13 sub-activities. The dataset includes color video, depth video, human skeletons, and human and object bounding boxes.

11. WiseNET: Multi-camera dataset

    • data.4tu.nl
    • figshare.com
    zip
    Updated Sep 27, 2019
    Cite
    Roberto Marroquin; J. (Julien) Dubois; C. (Christophe) Nicolle (2019). WiseNET: Multi-camera dataset [Dataset]. http://doi.org/10.4121/uuid:c1fb5962-e939-4c51-bfd5-eac6f2935d44
Available download formats: zip
    Dataset updated
    Sep 27, 2019
    Dataset provided by
    4TU.Centre for Research Data
    Authors
    Roberto Marroquin; J. (Julien) Dubois; C. (Christophe) Nicolle
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The WiseNET dataset provides multi-camera multi-space video sets, along with manual and automatic people detection/tracking annotations and the complete contextual information of the environment where the network was deployed.

12. Traffic counting and multi-object-tracking metadata in non-motorised...

    • gimi9.com
    Cite
    Traffic counting and multi-object-tracking metadata in non-motorised individual traffic [Dataset]. https://gimi9.com/dataset/eu_8995fc2f-58db-4224-a3be-1adfef02c04b/
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

The dataset contains counts of the categories 'person', 'bicycle', and 'car' extracted from video data in predominantly non-motorised road traffic. It includes video data that serves to visualise the record, as well as the extracted metadata for object tracking and traffic counting. The following files are included:

'video_example.mp4': Video sequence from an inner-city area for visualising the metadata. The underlying video is pixelated and shows:
- Region of interest (ROI): the area in which road users are counted (red polygon)
- Bounding boxes of the individual road users in the categories 'person', 'bicycle', 'car'
- Motion tracks of the traffic participants

'MOT.csv': Contains the results of object tracking on the video, with the following columns:
- 'frame': frame index of the video (1 to 7001)
- 'label_id': ID of the object class (0: person, 1: bicycle, 2: car)
- 'label': object class
- 'conf': confidence of the object-class identification
- 'object_id': tracking ID that marks associated objects
- 'x': bounding-box coordinate, x position of the lower-left corner in px
- 'Y': bounding-box coordinate, y position of the lower-left corner in px
- 'width': width of the bounding box in px
- 'Height': height of the bounding box in px

'roi.csv': Contains the coordinates (pixel positions) of the polygon points that mark the region of interest (ROI). This is a polygon with four corner points defined by the following pixel positions:
- Row: [560 540 600 1000]
- Column: [50 700 1200 1300]

'count.csv': Contains verified/corrected counting data. Counting method: the lower-left corner of the object bounding box is observed; entry to and exit from the ROI are detected; an object is counted on exit from the ROI. Columns:
- 'frame': frame index of the video (1 to 7001)
- 'count_roi': number of road users currently in the ROI
- 'count_total': number of road users who have left the ROI up to that time (the counting result)
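To illustrate how these files fit together, here is a minimal sketch that replays the documented counting method; it assumes the column names listed above and that 'Row'/'Column' in roi.csv correspond to y/x pixel positions, which should be verified against the actual files.

```python
import pandas as pd
from matplotlib.path import Path

# ROI polygon as (x, y) pairs built from the roi.csv values quoted above
# (assumption: "Column" is x and "Row" is y).
roi = Path([(50, 560), (700, 540), (1200, 600), (1300, 1000)])

mot = pd.read_csv("MOT.csv")

# Lower-left corner of each bounding box, per the documented counting method.
mot["in_roi"] = roi.contains_points(mot[["x", "Y"]].to_numpy())

# An object is counted when in_roi flips True -> False for the same object_id.
mot = mot.sort_values(["object_id", "frame"])
exited = mot.groupby("object_id")["in_roi"].apply(
    lambda s: (s.shift(fill_value=False) & ~s).any()
)
print("objects counted on ROI exit:", int(exited.sum()))
```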

13. Cars Object Tracking

    • gts.ai
    json
    Updated Mar 28, 2025
    Cite
    GTS (2025). Cars Object Tracking [Dataset]. https://gts.ai/dataset-download/cars-object-tracking/
Available download formats: json
    Dataset updated
    Mar 28, 2025
    Dataset provided by
    GLOBOSE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
    Authors
    GTS
    Description

    Explore the Cars Object Tracking Dataset with 10,000+ video frames for multi-object tracking and object detection. Ideal for autonomous driving and road safety systems.

14. cars-object-tracking

    • huggingface.co
    Updated Dec 17, 2024
    Cite
    UniData (2024). cars-object-tracking [Dataset]. https://huggingface.co/datasets/UniDataPro/cars-object-tracking
Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Dec 17, 2024
    Authors
    UniData
    License

Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Cars Object Tracking

    Dataset comprises 10,000+ video frames featuring both light vehicles (cars) and heavy vehicles (minivans). This extensive collection is meticulously designed for research in multi-object tracking and object detection, providing a robust foundation for developing and evaluating various tracking algorithms for road safety system development. By utilizing this dataset, researchers can significantly enhance their understanding of vehicle dynamics and improve… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/cars-object-tracking.

15. BuckTales: A multi-UAV dataset for multi-object tracking and...

    • edmond.mpg.de
    mp4, zip
    Updated Dec 19, 2024
    Cite
Hemal naik; Junran Yang; Dipin Das; Margaret Crofoot; Akanksha Rathore; Vivek Hari Sridhar (2024). BuckTales: A multi-UAV dataset for multi-object tracking and re-identification of wild antelopes [Dataset]. http://doi.org/10.17617/3.JCZ9WK
Available download formats: zip (65010277544), mp4 (403189785), zip (3287471192), zip (457749126), mp4 (130172114), zip (17011998466)
    Dataset updated
    Dec 19, 2024
    Dataset provided by
    Edmond
    Authors
Hemal naik; Junran Yang; Dipin Das; Margaret Crofoot; Akanksha Rathore; Vivek Hari Sridhar
    License

Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

The dataset contains UAV footage of wild antelopes (blackbucks) in grassland habitats. It can be used mainly for two tasks: multi-object tracking (MOT) and re-identification (Re-ID). We provide annotations for the position of animals in each frame, allowing us to offer very long videos (up to 3 min) that are completely annotated while maintaining the identity of each animal. The Re-ID dataset offers two videos that capture the movement of some animals simultaneously from two different UAVs; the Re-ID task is to find the same individual in two videos taken simultaneously from slightly different perspectives. The relevant paper will be published in the NeurIPS 2024 Dataset and Benchmarking Track: https://nips.cc/virtual/2024/poster/97563

Resolution: 5.4K. MOT: 12 videos (MOT17 format). Re-ID: 6 sets, each with a pair of drones (custom format). Detection: 320 images (COCO, YOLO).

  16. BrackishMOT

    • kaggle.com
    Updated Feb 21, 2023
    Cite
    Malte Pedersen (2023). BrackishMOT [Dataset]. http://doi.org/10.34740/kaggle/ds/2695511
Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Feb 21, 2023
    Dataset provided by
Kaggle (http://kaggle.com/)
    Authors
    Malte Pedersen
    License

Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

This is a MOT expansion of the Brackish Dataset that includes annotations following the MOTChallenge standard, along with synthetic sequences that can be used for training. An additional nine real sequences containing the small fish class have been added, which are not part of the original Brackish Dataset.

    More information about BrackishMOT can be found in the paper BrackishMOT: The Brackish Multi-Object Tracking Dataset (accepted at SCIA 2023).

    Abstract

There exist no publicly available annotated underwater multi-object tracking (MOT) datasets captured in turbid environments. To remedy this, we propose the BrackishMOT dataset, with a focus on tracking schools of small fish, which is a notoriously difficult MOT task. BrackishMOT consists of 98 sequences captured in the wild. Alongside the novel dataset, we present baseline results by training a state-of-the-art tracker. Additionally, we propose a framework for creating synthetic sequences in order to expand the dataset. The framework consists of animated fish models and realistic underwater environments. We analyse the effects of including synthetic data during training and show that a combination of real and synthetic underwater training data can enhance tracking performance. Project page: https://www.vap.aau.dk/brackishmot


    Citation

    @InProceedings{Pedersen_2023,
    author = {Pedersen, Malte and Lehotský, Daniel and Nikolov, Ivan and Moeslund, Thomas B.},
    doi = {10.48550/ARXIV.2302.10645},
    title = {BrackishMOT: The Brackish Multi-Object Tracking Dataset},
    publisher={arXiv}, 
    year={2023}
    }
    
17. Data from: Multiple-object tracking and visually guided touch

    • borealisdata.ca
    Updated Dec 16, 2024
    Cite
    Mallory E. Terry; Lana M. Trick (2024). Multiple-object tracking and visually guided touch [Dataset]. http://doi.org/10.5683/SP2/WE9TOY
Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Dec 16, 2024
    Dataset provided by
    Borealis
    Authors
    Mallory E. Terry; Lana M. Trick
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    May 2019 - Jun 2019
    Area covered
    Guelph, Canada, Ontario
    Dataset funded by
    Natural Sciences and Engineering Research Council of Canada
    Description

The purpose of this project was to investigate whether multiple-object tracking (MOT) and visually guided touch rely on a common, limited-capacity resource. To do so, participants completed the MOT task and were required to touch items that changed colour while tracking.

  18. Data from: TimberVision: A Multi-Task Dataset and Framework for...

    • zenodo.org
    zip
    Updated May 13, 2025
    Cite
Daniel Steininger; Julia Simon; Andreas Trondl; Markus Murschitz (2025). TimberVision: A Multi-Task Dataset and Framework for Log-Component Segmentation and Tracking in Autonomous Forestry Operations [Dataset]. http://doi.org/10.5281/zenodo.14825846
Available download formats: zip
    Dataset updated
    May 13, 2025
    Dataset provided by
Zenodo (http://zenodo.org/)
    Authors
Daniel Steininger; Julia Simon; Andreas Trondl; Markus Murschitz
    License

Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description
    TimberVision is a dataset and framework for tree-trunk detection and tracking based on RGB images. It combines the advantages of oriented object detection and instance segmentation for optimizing robustness and efficiency, as described in the corresponding paper presented at WACV 2025. This repository contains images and annotations of the dataset as well as associated files. Source code, models, configuration files and further documentation can be found on our GitHub page.

    Data Structure

    The repository provides the following subdirectories:

    • images: all images included in the TimberVision dataset
• labels: annotations corresponding to each image in YOLOv8 instance-segmentation format (https://docs.ultralytics.com/datasets/segment/); a short parsing sketch follows this list
    • labels_eval: additional annotations
      • mot: ground-truth annotations for multi-object-tracking evaluation in custom format
• timberseg: custom annotations for selected images from the TimberSeg dataset (https://data.mendeley.com/datasets/y5npsm3gkj/2)
    • videos: complete video files used for evaluating multi-object-tracking (annotated keyframes sampled from each file are included in the images and labels directories)
• scene_parameters.csv: annotations of four scene parameters for each image describing trunk properties and context (see the paper for details: https://arxiv.org/pdf/2501.07360v1)
    • train/val/test.txt: original split files used for training, validation and testing of oriented-object-detection and instance-segmentation models with YOLOv8
    • sources.md: references and licenses for images used in the open-source subset
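Since the labels directory uses the YOLOv8 instance-segmentation format, the following minimal sketch reads one label file: each line holds a class id followed by normalized polygon coordinates. The example file name is hypothetical.

```python
def read_yolo_seg(path):
    """Parse a YOLOv8 instance-segmentation label file.

    Each line: "<class_id> x1 y1 x2 y2 ..." with coordinates normalized to [0, 1].
    """
    instances = []
    with open(path) as f:
        for line in f:
            vals = line.split()
            cls = int(vals[0])
            coords = [float(v) for v in vals[1:]]
            polygon = list(zip(coords[0::2], coords[1::2]))  # (x, y) pairs
            instances.append((cls, polygon))
    return instances

for cls, poly in read_yolo_seg("labels/tvc_0001.txt"):  # hypothetical file name
    print(f"class {cls}: polygon with {len(poly)} points")
```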

    Subsets

    TimberVision consists of multiple subsets for different application scenarios. To identify them, file names of images and annotations include the following prefixes:

    • tvc: core dataset recorded in forests and other outdoor locations
    • tvh: images depicting harvesting scenarios in forests with visible machinery
    • tvl: images depicting loading scenarios in more structured environments with visible machinery
    • tvo: a small set of third-party open-source images for evaluating generalization
    • tvt: keyframes extracted from videos at 2 fps for tracking evaluation

    Citing

    If you use the TimberVision dataset for your research, please cite the original paper:

    Steininger, D., Simon, J., Trondl, A., Murschitz, M., 2025. TimberVision: A Multi-Task Dataset and Framework for Log-Component Segmentation and Tracking in Autonomous Forestry Operations. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).

19. Multi-Sensor Object Detection Data from Infrastructure Sensors Deployed at...

    • catalog.data.gov
    • data.openei.org
    Updated Mar 12, 2025
    Cite
    National Renewable Energy Laboratory (2025). Multi-Sensor Object Detection Data from Infrastructure Sensors Deployed at Traffic Intersections in the City of Colorado Springs, Colorado, USA [Dataset]. https://catalog.data.gov/dataset/multi-sensor-object-detection-data-from-infrastructure-sensors-deployed-at-traffic-interse
    Dataset updated
    Mar 12, 2025
    Dataset provided by
    National Renewable Energy Laboratory
    Area covered
    United States, Colorado, Colorado Springs
    Description

The dataset provided here was collected as part of the US Department of Transportation (USDOT) Strengthening Mobility and Revolutionizing Transportation (SMART) project, in which the City of Colorado Springs (Colorado, USA) and the National Renewable Energy Laboratory (NREL) collaborated to collect object-level trajectory data from road users using multiple types of infrastructure sensors deployed at different traffic intersections. The data was collected in 2024 across multiple days at various intersections in and around the City of Colorado Springs. The goal of the data collection exercises was to learn various attributes of infrastructure sensors and to build a repository of high-resolution object-level data that can be used for research and development (such as developing multi-sensor data fusion algorithms).

Data presented here was collected from sensors either installed on traffic poles or hoisted on top of NREL's Infrastructure Perception and Control (IPC) mobile trailer. The state-of-the-art IPC trailer can deploy the latest generation of perception sensors at traffic intersections and capture real-time road-user data. Sensors used for data collection include Econolite's EVO RADAR units, Ouster's OS1 LIDAR units, and Axis camera units. The raw data received from individual sensors is processed at the edge compute device located inside the IPC mobile lab, and the resulting object-level data is then stored and processed offline. Each data folder contains all the data collected on that day.

We have transformed (rotation then translation) the raw detections to ensure the data from all sensors is represented in the same Cartesian coordinate system. The object-list attributes affected by the transformation are PositionX, PositionY, SpeedX, SpeedY, and HeadingDeg; the rest of the data attributes remain untouched. Users should note that we do not claim that this transformation is perfect, and there may be some misalignment among the different sensors.
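As a sketch of the rotation-then-translation step described above (the angle and offset below are placeholders, not calibration values from the dataset): positions are rotated and then shifted, velocity components are only rotated, and HeadingDeg is offset by the rotation angle.

```python
import numpy as np

def to_common_frame(px, py, sx, sy, heading_deg, theta_deg, tx, ty):
    """Apply a 2D rigid transform: rotation by theta, then translation by (tx, ty)."""
    t = np.radians(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    px, py = R @ (px, py) + (tx, ty)   # PositionX, PositionY: rotate then translate
    sx, sy = R @ (sx, sy)              # SpeedX, SpeedY: rotate only
    heading = (heading_deg + theta_deg) % 360.0  # HeadingDeg shifts by theta
    return px, py, sx, sy, heading

print(to_common_frame(10.0, 5.0, 1.0, 0.0, 90.0, theta_deg=30.0, tx=2.0, ty=-1.0))
```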

20. People Tracking And Counting Dataset

    • universe.roboflow.com
    zip
    Updated May 19, 2025
    Cite
    MyProject (2025). People Tracking And Counting Dataset [Dataset]. https://universe.roboflow.com/myproject-cc5hp/people-tracking-and-counting/dataset/2
Available download formats: zip
    Dataset updated
    May 19, 2025
    Dataset authored and provided by
    MyProject
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    People Bounding Boxes
    Description

    People Tracking And Counting

    ## Overview
    
    People Tracking And Counting is a dataset for object detection tasks - it contains People annotations for 2,000 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    