14 datasets found
  1. MOTChallenge Dataset

    • paperswithcode.com
    • library.toponeai.link
    Cite
    Laura Leal-Taixé; Anton Milan; Ian Reid; Stefan Roth; Konrad Schindler, MOTChallenge Dataset [Dataset]. https://paperswithcode.com/dataset/motchallenge
    Authors
    Laura Leal-Taixé; Anton Milan; Ian Reid; Stefan Roth; Konrad Schindler
    Description

    The MOTChallenge datasets are designed for the task of multiple object tracking. Several variants of the dataset have been released over the years, such as MOT15, MOT17, and MOT20.

  2. Crowd tracking data for group tracking query

    • ieee-dataport.org
    Updated Jul 8, 2024
    Cite
    Yon Dohn Chung (2024). Crowd tracking data for group tracking query [Dataset]. https://ieee-dataport.org/documents/crowd-tracking-data-group-tracking-query
    Dataset updated
    Jul 8, 2024
    Authors
    Yon Dohn Chung
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains object tracking data for the MOTChallenge datasets. The tracking results were generated by YOLOv5 and DeepSORT.
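
    As a rough illustration of that pipeline, the sketch below wires a pretrained YOLOv5 detector into DeepSORT. It assumes the ultralytics/yolov5 torch.hub model and the third-party deep-sort-realtime package; the authors' exact weights and parameters are not documented here, and the input video path is hypothetical.

    ```python
    # Hedged sketch: YOLOv5 detections fed to DeepSORT for ID assignment.
    # Assumes: torch.hub ultralytics/yolov5 model, deep-sort-realtime package
    # (pip install deep-sort-realtime opencv-python), hypothetical input video.
    import cv2
    import torch
    from deep_sort_realtime.deepsort_tracker import DeepSort

    model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # pretrained COCO detector
    tracker = DeepSort(max_age=30)  # drop a track after 30 unmatched frames

    cap = cv2.VideoCapture("sequence.mp4")  # hypothetical rendering of a MOT sequence
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        # YOLOv5 returns one (N, 6) tensor per image: x1, y1, x2, y2, conf, class
        dets = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).xyxy[0]
        bbs = [([x1, y1, x2 - x1, y2 - y1], conf, int(cls))
               for x1, y1, x2, y2, conf, cls in dets.tolist()]
        # DeepSORT associates detections across frames and assigns persistent IDs
        for track in tracker.update_tracks(bbs, frame=frame):
            if track.is_confirmed():
                left, top, right, bottom = track.to_ltrb()
                print(frame_idx, track.track_id, left, top, right - left, bottom - top)
    cap.release()
    ```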

  3. MOT2D 2015

    • kaggle.com
    zip
    Updated Jul 12, 2018
    Cite
    K Scott Mader (2018). MOT2D 2015 [Dataset]. https://www.kaggle.com/datasets/kmader/mot2d-2015/discussion
    Available download formats: zip (2,629,121,534 bytes)
    Dataset updated
    Jul 12, 2018
    Authors
    K Scott Mader
    Description

    Content

    The dataset is about tracking objects in 2D in movies with fixed and moving cameras. Most of the objects are pedestrians, but there are a few other object classes.

    I just downloaded the zip and am now looking at what is actually inside. A kernel will hopefully clarify how the ground truth can be read.
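
    In the meantime, here is a minimal sketch of how a MOTChallenge-style gt.txt is conventionally read, assuming the MOT15 comma-separated layout (frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z, where the last three are world coordinates set to -1 when unused); the sequence path below is illustrative.

    ```python
    # Read MOT15-style ground truth; column layout assumed, path illustrative.
    import pandas as pd

    cols = ["frame", "id", "bb_left", "bb_top", "bb_width", "bb_height",
            "conf", "x", "y", "z"]
    gt = pd.read_csv("train/ADL-Rundle-6/gt/gt.txt", header=None, names=cols)

    # Boxes for the first frame, ready to overlay on the matching image
    frame1 = gt[gt["frame"] == 1][["id", "bb_left", "bb_top", "bb_width", "bb_height"]]
    print(frame1.head())
    ```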

    Acknowledgements

    The dataset was originally download from the MOT challenge site at https://motchallenge.net/data/2D_MOT_2015/#download

  4. MOT20 Dataset

    • paperswithcode.com
    Updated Feb 2, 2021
    Cite
    (2021). MOT20 Dataset [Dataset]. https://paperswithcode.com/dataset/mot20
    Dataset updated
    Feb 2, 2021
    Description

    MOT20 is a dataset for multiple object tracking. The dataset contains 8 challenging video sequences (4 train, 4 test) in unconstrained environments, from crowded places such as train stations, town squares and a sports stadium.

  5. The Growing Strawberries Dataset

    • data.4tu.nl
    zip
    Updated Feb 9, 2024
    Cite
    Junhan Wen; Camiel Verschoor; Thomas Abeel; M.M. (Mathijs) de Weerdt (2024). The Growing Strawberries Dataset [Dataset]. http://doi.org/10.4121/e3b31ece-cc88-4638-be10-8ccdd4c5f2f7.v2
    Available download formats: zip
    Dataset updated
    Feb 9, 2024
    Dataset provided by
    4TU.ResearchData
    Authors
    Junhan Wen; Camiel Verschoor; Thomas Abeel; M.M. (Mathijs) de Weerdt
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Time period covered
    Apr 22, 2021 - Oct 4, 2023
    Area covered
    Bleiswijk (2021) and Horst (2022), The Netherlands
    Description

    The Growing Strawberries Dataset (GSD) is a curated multiple-object tracking dataset inspired by the growth monitoring of strawberries. The frames were taken at hourly intervals by six cameras over a total of 16 months in 2021 and 2022, covering 12 plants in each of two greenhouses. The dataset consists of hourly images collected during the cultivation period, bounding box (bbox) annotations of strawberry fruits, and precise identification and tracking of strawberries over time. GSD contains two types of images: RGB (visual spectrum) and OCN (orange, cyan, near-infrared). These images were captured throughout the cultivation period. Each image sequence represents all the images captured by one camera during the year of cultivation. These sequences are named using the format "

  6. SOMPT22 Dataset

    • paperswithcode.com
    Updated Aug 3, 2022
    Cite
    Fatih Emre Simsek; Cevahir Cigla; Koray Kayabol (2022). SOMPT22 Dataset [Dataset]. https://paperswithcode.com/dataset/sompt22
    Dataset updated
    Aug 3, 2022
    Authors
    Fatih Emre Simsek; Cevahir Cigla; Koray Kayabol
    Description

    SOMPT22 is a multi-object tracking (MOT) benchmark focused on surveillance-style pedestrian tracking.

    • 22 long video sequences (static pole-mounted cameras, 6–8 m height)
    • ~51k annotated frames with bounding boxes + unique track IDs
    • Outdoor scenes with illumination changes, partial occlusions and appearance similarity
    • Single class: person
    • Split files ready for training/validation and standard MOT evaluation tools (see the scoring sketch below)

    SOMPT22 aims to complement generic MOTChallenge-style datasets by stressing long-term ID maintenance under sparse-to-medium crowd density instead of dense, short clips.
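
    Because the annotations follow MOTChallenge conventions, tracker output can be scored with standard tooling such as the py-motmetrics package; the sketch below uses toy numbers and is not SOMPT22's official evaluation script.

    ```python
    # Hedged scoring sketch with py-motmetrics (pip install motmetrics).
    import motmetrics as mm

    acc = mm.MOTAccumulator(auto_id=True)

    # One frame of association: GT IDs, tracker IDs, and a GT-by-hypothesis
    # distance matrix (nan = this pair may not be matched).
    acc.update(
        [1, 2],                     # ground-truth object IDs in this frame
        [1, 2, 3],                  # tracker hypothesis IDs in this frame
        [[0.1, 0.9, float("nan")],  # distances GT 1 -> hypotheses 1, 2, 3
         [0.8, 0.2, 0.7]],          # distances GT 2 -> hypotheses 1, 2, 3
    )

    mh = mm.metrics.create()
    summary = mh.compute(acc, metrics=["mota", "motp", "idf1"], name="demo")
    print(mm.io.render_summary(summary))
    ```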

    Homepage → https://sompt22.github.io
    Download → Google Drive link on the homepage
    Citation →

    ```bibtex
    @misc{simsek2022sompt22,
      author        = {Simsek, Fatih Emre and Cigla, Cevahir and Kayabol, Koray},
      title         = {SOMPT22: A Surveillance Oriented Multi-Pedestrian Tracking Dataset},
      year          = {2022},
      eprint        = {2208.02580},
      archivePrefix = {arXiv},
      primaryClass  = {cs.CV}
    }
    ```

  7. Data from: Strong Baseline: Multi-UAV Tracking via YOLOv12 with BoT-SORT-ReID

    • zenodo.org
    zip
    Updated Apr 13, 2025
    Cite
    Yu-Hsi Chen (2025). Strong Baseline: Multi-UAV Tracking via YOLOv12 with BoT-SORT-ReID [Dataset]. http://doi.org/10.5281/zenodo.15203123
    Available download formats: zip
    Dataset updated
    Apr 13, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Yu-Hsi Chen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description
    This repository contains the organized datasets referenced in Table 2 of the paper, including the following five datasets:

    • (1) Single Object Tracking (SOT)
      • (1-1) Train/Validation/Test
      • (1-2) Train/Validation
    • (2) Multi-Object Tracking (MOT)
      • (2-1) Train/Validation
    • (3) Re-Identification (ReID)
      • (3-1) Full Bounding Box Train/Validation
      • (3-2) 1/10 Bounding Box Train/Validation

    All datasets are derived from the official release of the 4th Anti-UAV Challenge (https://zenodo.org/records/15103888), featuring thermal infrared videos.

  8. QuadTrack Dataset

    • paperswithcode.com
    Updated Mar 5, 2025
    Cite
    Kai Luo; Hao Shi; Sheng Wu; Fei Teng; Mengfei Duan; Chang Huang; Yuhang Wang; Kaiwei Wang; Kailun Yang (2025). QuadTrack Dataset [Dataset]. https://paperswithcode.com/dataset/quadtrack
    Dataset updated
    Mar 5, 2025
    Authors
    Kai Luo; Hao Shi; Sheng Wu; Fei Teng; Mengfei Duan; Chang Huang; Yuhang Wang; Kaiwei Wang; Kailun Yang
    Description

    Most existing MOT datasets are captured with pinhole cameras, which are characterized by a narrow field of view (FoV) and linear sensor motion. However, when panoramic-FoV capture devices experience even slight movements, the entire scene can change drastically, posing significant challenges for object tracking. QuadTrack addresses this challenge by providing a benchmark specifically designed to test MOT algorithms under dynamic, non-linear motion conditions, enabling evaluation of how robustly trackers follow objects under panoramic, non-uniform motion.

  9. Data from: S1 Dataset -

    • plos.figshare.com
    7z
    Updated May 14, 2024
    Cite
    Wei Luo; Guoqing Zhang; Quanbo Yuan; Yongxiang Zhao; Hongce Chen; Jingjie Zhou; Zhaopeng Meng; Fulong Wang; Lin Li; Jiandong Liu; Guanwu Wang; Penggang Wang; Zhongde Yu (2024). S1 Dataset - [Dataset]. http://doi.org/10.1371/journal.pone.0302277.s003
    Available download formats: 7z
    Dataset updated
    May 14, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Wei Luo; Guoqing Zhang; Quanbo Yuan; Yongxiang Zhao; Hongce Chen; Jingjie Zhou; Zhaopeng Meng; Fulong Wang; Lin Li; Jiandong Liu; Guanwu Wang; Penggang Wang; Zhongde Yu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Enhanced animal welfare has emerged as a pivotal element in contemporary precision animal husbandry, with bovine monitoring constituting a significant facet of precision agriculture. The evolution of intelligent agriculture in recent years has significantly facilitated the integration of drone flight monitoring tools and innovative systems, leveraging deep learning to interpret bovine behavior. Smart drones, outfitted with monitoring systems, have evolved into viable solutions for wildlife protection and monitoring as well as animal husbandry. Nevertheless, challenges arise under actual and multifaceted ranch conditions, where scale alterations, unpredictable movements, and occlusions invariably influence the accurate tracking of unmanned aerial vehicles (UAVs). To address these challenges, this manuscript proposes a tracking algorithm based on deep learning, adhering to the Joint Detection Tracking (JDT) paradigm established by the CenterTrack algorithm. This algorithm is designed to satisfy the requirements of multi-object tracking in intricate practical scenarios. In comparison with several preeminent tracking algorithms, the proposed Multi-Object Tracking (MOT) algorithm demonstrates superior performance in Multiple Object Tracking Accuracy (MOTA), Multiple Object Tracking Precision (MOTP), and IDF1. Additionally, it exhibits enhanced efficiency in managing Identity Switches (IDSW), False Positives (FP), and False Negatives (FN). This algorithm proficiently mitigates the inherent challenges of MOT in complex, livestock-dense scenarios.
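
    For reference, the MOTA score cited above aggregates misses, false positives, and identity switches into a single number; the snippet below is the standard CLEAR-MOT formula with illustrative counts, not the authors' evaluation code.

    ```python
    # Standard CLEAR-MOT accuracy: MOTA = 1 - (FN + FP + IDSW) / GT.
    def mota(fn: int, fp: int, idsw: int, gt: int) -> float:
        """fn/fp/idsw are error counts summed over all frames;
        gt is the total number of ground-truth boxes."""
        return 1.0 - (fn + fp + idsw) / gt

    # Illustrative numbers only: 120 misses, 80 false alarms, 15 ID switches
    # over 10,000 ground-truth boxes -> MOTA = 0.9785.
    print(mota(120, 80, 15, 10_000))
    ```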

  10. VETRA Dataset

    • paperswithcode.com
    Updated Sep 28, 2024
    Cite
    Jens Hellekes; Manuel Mühlhaus; Reza Bahmanyar; Seyed Majid Azimi; Franz Kurz (2024). VETRA Dataset [Dataset]. https://paperswithcode.com/dataset/vetra
    Dataset updated
    Sep 28, 2024
    Authors
    Jens Hellekes; Manuel Mühlhaus; Reza Bahmanyar; Seyed Majid Azimi; Franz Kurz
    Description

    VETRA is a dataset for vehicle tracking in aerial image sequences and presents unique challenges such as low frame rates, small and fast-moving objects, as well as high camera movement. These characteristics allow for extended tracking of numerous vehicles with varying motion behaviors over large areas and pose new challenges for MOT algorithms. VETRA consists of 52 image sequences captured by airplanes and helicopters using DLR’s 3k and 4k camera systems. The acquisition sites are located in Germany and Austria. In addition to the classical training, validation and test sets, VETRA offers a second test set specifically designed for the application of large area monitoring (LAM). The LAM sequences are recorded over 7 rural roads and motorways with a fixed camera speed and configuration. Each road section is captured at 4 different times of the day, enabling the performance of MOT algorithms to be evaluated under different traffic loads in a static environment. Furthermore, the features extracted from the LAM sequences can be utilized in transport research applications.

  11. Modular Operating Theatre (MOT) Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated May 3, 2025
    Cite
    Data Insights Market (2025). Modular Operating Theatre (MOT) Report [Dataset]. https://www.datainsightsmarket.com/reports/modular-operating-theatre-mot-965943
    Available download formats: pdf, ppt, doc
    Dataset updated
    May 3, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global Modular Operating Theatre (MOT) market is experiencing robust growth, driven by increasing demand for advanced healthcare infrastructure, a surge in surgical procedures, and the need for efficient and flexible healthcare facilities. The market, estimated at $2.5 billion in 2025, is projected to exhibit a Compound Annual Growth Rate (CAGR) of 7% from 2025 to 2033, reaching an estimated market value of approximately $4.5 billion by 2033. Key factors propelling this growth include the rising prevalence of chronic diseases necessitating more surgeries, the increasing adoption of minimally invasive surgical techniques requiring specialized operating rooms, and the benefits of modular construction, such as faster deployment, cost-effectiveness, and adaptability to future needs. The segment comprising large hospitals accounts for a significant market share due to their higher capacity and investment capabilities. Stainless steel wall panels dominate the types segment due to their durability, ease of cleaning and sterilization, and overall hygiene benefits. Leading market players are continuously innovating and expanding their product portfolios, contributing to market expansion through technological advancements and strategic partnerships. The Asia-Pacific region, particularly India and China, is projected to witness significant growth due to burgeoning healthcare infrastructure development and rising disposable incomes.

    While the market presents significant opportunities, challenges remain. High initial investment costs for MOTs could hinder market penetration, especially in resource-constrained settings. Furthermore, regulatory hurdles and stringent safety standards in various regions can pose obstacles to market expansion. However, the long-term cost-effectiveness, reduced construction time, and enhanced operational efficiency offered by MOTs are expected to outweigh these challenges, leading to sustained growth. The competitive landscape is characterized by a mix of established players and emerging companies, fostering innovation and competitive pricing. Future growth will likely depend on the development of more technologically advanced MOTs with features such as integrated imaging systems, advanced ventilation, and enhanced infection control measures. The continued focus on improving patient safety and optimizing surgical workflows will also significantly influence market trends.

  12. RailEye3D Dataset

    • paperswithcode.com
    Updated Feb 25, 2021
    Cite
    Marco Wallner; Daniel Steininger; Verena Widhalm; Matthias Schörghuber; Csaba Beleznai, RailEye3D Dataset [Dataset]. https://paperswithcode.com/dataset/raileye3d-dataset
    Dataset updated
    Feb 25, 2021
    Authors
    Marco Wallner; Daniel Steininger; Verena Widhalm; Matthias Schörghuber; Csaba Beleznai
    Description

    The RailEye3D dataset, a collection of train-platform scenarios for applications targeting passenger safety and automation of train dispatching, consists of 10 image sequences captured at 6 railway stations in Austria. Annotations for multi-object tracking are provided both in a unified format and in the ground-truth format used in the MOTChallenge.

  13. TrajNet Dataset

    • paperswithcode.com
    Updated Aug 23, 2021
    Cite
    Stefan Becker; Ronny Hug; Wolfgang Hübner; Michael Arens (2021). TrajNet Dataset [Dataset]. https://paperswithcode.com/dataset/trajnet-1
    Dataset updated
    Aug 23, 2021
    Authors
    Stefan Becker; Ronny Hug; Wolfgang Hübner; Michael Arens
    Description

    The TrajNet Challenge represents a large multi-scenario forecasting benchmark. The challenge consists of predicting 3161 human trajectories, observing for each trajectory 8 consecutive ground-truth values (3.2 seconds), i.e., t−7, t−6, …, t, in world plane coordinates (the so-called world plane Human-Human protocol) and forecasting the following 12 (4.8 seconds), i.e., t+1, …, t+12. The 8-12-value protocol is consistent with most trajectory forecasting approaches, which usually focus on the 5-dataset scenario of ETH-univ + ETH-hotel + UCY-zara01 + UCY-zara02 + UCY-univ. TrajNet substantially extends the 5-dataset scenario by diversifying the training data, thus stressing the flexibility and generalization an approach has to exhibit when it comes to unseen scenery/situations. In fact, TrajNet is a superset of diverse datasets that requires training on four families of trajectories, namely 1) BIWI Hotel (orthogonal bird's-eye flight view, moving people), 2) Crowds UCY (3 datasets, tilted bird's-eye view, cameras mounted on buildings or utility poles, moving people), 3) MOT PETS (multisensor, different human activities) and 4) Stanford Drone Dataset (8 scenes, high orthogonal bird's-eye flight view, different agents such as people, cars, etc.), for a total of 11448 trajectories. Testing is requested on diverse partitions of BIWI Hotel, Crowds UCY, and the Stanford Drone Dataset, and is evaluated by a dedicated server (ground-truth testing data is unavailable to applicants).
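
    A minimal sketch of the 8-observe/12-forecast windowing described above, assuming a trajectory is a list of (x, y) world-plane points sampled every 0.4 seconds (so 8 points span 3.2 s and 12 points span 4.8 s); the helper name is ours, not TrajNet's API.

    ```python
    # Split a trajectory into the TrajNet observation prefix and forecast target.
    from typing import List, Tuple

    Point = Tuple[float, float]

    def split_trajectory(track: List[Point], obs_len: int = 8,
                         pred_len: int = 12) -> Tuple[List[Point], List[Point]]:
        """Return the observed points (t-7..t) and the forecast target (t+1..t+12)."""
        if len(track) < obs_len + pred_len:
            raise ValueError("trajectory too short for the 8+12 protocol")
        return track[:obs_len], track[obs_len:obs_len + pred_len]

    # Illustrative use on a synthetic straight-line walk
    track = [(0.1 * i, 0.05 * i) for i in range(20)]
    observed, target = split_trajectory(track)
    print(len(observed), len(target))  # 8 12
    ```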

  14. VisDrone Dataset

    • paperswithcode.com
    Updated Apr 6, 2022
    Cite
    Pengfei Zhu; Longyin Wen; Xiao Bian; Haibin Ling; Qinghua Hu (2022). VisDrone Dataset [Dataset]. https://paperswithcode.com/dataset/visdrone
    Dataset updated
    Apr 6, 2022
    Authors
    Pengfei Zhu; Longyin Wen; Xiao Bian; Haibin Ling; Qinghua Hu
    Description

    VisDrone is a large-scale benchmark with carefully annotated ground truth for various important computer vision tasks, aiming to make vision meet drones. The VisDrone2019 dataset was collected by the AISKYEYE team at the Lab of Machine Learning and Data Mining, Tianjin University, China. The benchmark consists of 288 video clips formed by 261,908 frames and 10,209 static images, captured by various drone-mounted cameras and covering a wide range of aspects including location (taken from 14 different cities separated by thousands of kilometers in China), environment (urban and country), objects (pedestrians, vehicles, bicycles, etc.), and density (sparse and crowded scenes). Note that the dataset was collected using various drone platforms (i.e., drones of different models), in different scenarios, and under various weather and lighting conditions. The frames are manually annotated with more than 2.6 million bounding boxes of targets of frequent interest, such as pedestrians, cars, bicycles, and tricycles. Some important attributes, including scene visibility, object class and occlusion, are also provided for better data utilization.

