20 datasets found
  1. Yolov8 Pose Dataset

    • universe.roboflow.com
    zip
    Updated Jul 3, 2025
    Cite
    YOLO (2025). Yolov8 Pose Dataset [Dataset]. https://universe.roboflow.com/yolo-xvnzo/yolov8-pose-utovc/model/2
    Explore at:
    497 scholarly articles cite this dataset
    Available download formats: zip
    Dataset updated
    Jul 3, 2025
    Dataset authored and provided by
    YOLO
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Fall
    Description

    Yolov8 Pose

    ## Overview
    
    Yolov8 Pose is a dataset for computer vision tasks - it contains Fall annotations for 474 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
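
    The Roboflow export can also be pulled programmatically. Below is a minimal sketch using the roboflow pip package; the API key is a placeholder, the workspace and project slugs are taken from the dataset URL above, and version 2 (from the /model/2 URL) is an assumption about which dataset version is wanted.

    # Hedged sketch: download this Roboflow dataset in YOLOv8 format.
    # API key is a placeholder; the version number is an assumption based on the URL above.
    from roboflow import Roboflow

    rf = Roboflow(api_key="YOUR_API_KEY")
    project = rf.workspace("yolo-xvnzo").project("yolov8-pose-utovc")
    dataset = project.version(2).download("yolov8")   # images + labels in YOLOv8 layout
    print(dataset.location)                           # local folder containing the export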
    
  2. Falling Pose Estimation Dataset

    • universe.roboflow.com
    zip
    Updated Mar 26, 2025
    Cite
    Humna pose data (2025). Falling Pose Estimation Dataset [Dataset]. https://universe.roboflow.com/humna-pose-data/falling-pose-estimation/model/4
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 26, 2025
    Dataset authored and provided by
    Humna pose data
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Humans
    Description

    A falling dataset for fall detection using YOLOv8 pose.

  3. Cow Pose Estimation Dataset Dataset

    • paperswithcode.com
    Updated Mar 5, 2025
    + more versions
    Cite
    (2025). Cow Pose Estimation Dataset Dataset [Dataset]. https://paperswithcode.com/dataset/cow-pose-estimation-dataset
    Explore at:
    Dataset updated
    Mar 5, 2025
    Description

    Description:


    This dataset has been specifically curated for cow pose estimation, designed to enhance animal behavior analysis and monitoring through computer vision techniques. The dataset is annotated with 12 keypoints on the cow’s body, enabling precise tracking of body movements and posture. It is structured in the COCO format, making it compatible with popular deep learning models like YOLOv8, OpenPose, and others designed for object detection and keypoint estimation tasks.

    Applications:

    This dataset is ideal for agricultural tech solutions, veterinary care, and animal behavior research. It can be used in various use cases such as health monitoring, activity tracking, and early disease detection in cattle. Accurate pose estimation can also assist in optimizing livestock management by understanding animal movement patterns and detecting anomalies in their gait or behavior.


    Keypoint Annotations:

    The dataset includes the following 12 keypoints, strategically marked to represent significant anatomical features of cows:

    Nose: Essential for head orientation and overall movement tracking.

    Right Eye: Helps in head pose estimation.

    Left Eye: Complements the right eye for accurate head direction.

    Neck (side): Marks the side of the neck, key for understanding head and body coordination.

    Left Front Hoof: Tracks the front left leg movement.

    Right Front Hoof: Tracks the front right leg movement.

    Left Back Hoof: Important for understanding rear leg motion.

    Right Back Hoof: Completes the leg movement tracking for both sides.

    Backbone (side): Vital for posture and overall body orientation analysis.

    Tail Root: Used for tracking tail movements and posture shifts.

    Backpose Center (near tail’s midpoint): Marks the midpoint of the back, crucial for body stability and movement analysis.

    Stomach (center of side pose): Helps in identifying body alignment and weight distribution.

    Dataset Format:

    The data is structured in the COCO format, with annotations that include image coordinates for each keypoint. This format is highly suitable for integration into popular deep learning frameworks. Additionally, the dataset includes metadata such as bounding boxes, image sizes, and segmentation masks to provide detailed context for each cow in an image.

    Compatibility:

    This dataset is optimized for use with cutting-edge pose estimation models such as YOLOv8 and other keypoint detection models like DeepLabCut and HRNet, enabling efficient training and inference for cow pose tracking. It can be seamlessly integrated into existing machine learning pipelines for both real-time and post-processed analysis.

    This dataset is sourced from Kaggle.
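
    Because the annotations are COCO-format keypoints and the dataset targets YOLOv8-style training, a common first step is converting them to YOLOv8 pose label files (one .txt per image: a normalized box followed by an x, y, visibility triplet per keypoint). The sketch below assumes a single annotations.json file, a labels/ output folder, and a single cow class with id 0; adjust these to the actual archive layout.

    # Hedged sketch: COCO keypoint annotations -> YOLOv8 pose labels.
    import json
    from collections import defaultdict
    from pathlib import Path

    coco = json.loads(Path("annotations.json").read_text())           # assumed file name
    images = {im["id"]: im for im in coco["images"]}
    per_image = defaultdict(list)

    for ann in coco["annotations"]:
        im = images[ann["image_id"]]
        w, h = im["width"], im["height"]
        x, y, bw, bh = ann["bbox"]                                     # COCO box: top-left x, y, width, height (pixels)
        box = [(x + bw / 2) / w, (y + bh / 2) / h, bw / w, bh / h]     # YOLO box: normalized centre x, y, width, height
        kpts = ann["keypoints"]                                        # flat [x1, y1, v1, ..., x12, y12, v12]
        triplets = []
        for i in range(0, len(kpts), 3):
            triplets += [kpts[i] / w, kpts[i + 1] / h, kpts[i + 2]]
        per_image[im["file_name"]].append("0 " + " ".join(f"{v:.6f}" for v in box + triplets))

    out = Path("labels")
    out.mkdir(exist_ok=True)
    for name, lines in per_image.items():
        (out / f"{Path(name).stem}.txt").write_text("\n".join(lines) + "\n")

    The matching YOLOv8 dataset YAML would then declare kpt_shape: [12, 3] for the twelve keypoints listed above.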

  4. Hand Recognition Dataset for Machine Vision Researchers (YOLOv8 Format)

    • salford.figshare.com
    zip
    Updated Jan 20, 2025
    Cite
    Ali Alameer (2025). Hand Recognition Dataset for Machine Vision Researchers (YOLOv8 Format) [Dataset]. http://doi.org/10.17866/rd.salford.24032841.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 20, 2025
    Dataset provided by
    University of Salford
    Authors
    Ali Alameer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This hand recognition dataset comprises a comprehensive collection of hand images from 65 individuals, including both left and right hands, annotated in YOLOv8 format. The dataset encompasses 17 distinct classes, denoted as L-L1 to L-L9 for the left hand and R-R1 to R-R8 for the right hand. These classes capture various hand gestures and poses. The images were captured using a standard mobile phone camera, offering a diverse set of images with varying angles and backgrounds.

    In total, the dataset comprises 405 high-quality images, with 222 representing left hands and 183 representing right hands. The left hand classes are distributed as follows: L-L1 (62 images), L-L2 (56 images), L-L3 (44 images), L-L4 (29 images), L-L5 (14 images), L-L6 (8 images), L-L7 (4 images), L-L8 (2 images), and L-L9 (3 images). Similarly, the right hand classes are distributed as R-R1 (53 images), R-R2 (48 images), R-R3 (38 images), R-R4 (24 images), R-R5 (14 images), R-R6 (4 images), R-R7 (1 image), and R-R8 (1 image).

    We welcome the machine vision research community to utilise and build upon this dataset to advance the field of hand recognition and its applications.
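
    Since the annotations already follow the YOLOv8 layout, fine-tuning a detector on this dataset is straightforward with the ultralytics package. In the sketch below, hands.yaml (pointing at the train/val image folders and the 17 class names) and the choice of the yolov8n checkpoint are assumptions.

    # Hedged sketch: fine-tune YOLOv8 on the YOLOv8-format hand annotations.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                             # small pretrained checkpoint as a starting point
    model.train(data="hands.yaml", epochs=100, imgsz=640)  # hands.yaml is an assumed dataset config
    metrics = model.val()                                  # mAP and related metrics on the validation split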

  5. Mammoset Segmentation And Pose Dataset

    • universe.roboflow.com
    zip
    Updated Sep 5, 2024
    Cite
    SupsinK (2024). Mammoset Segmentation And Pose Dataset [Dataset]. https://universe.roboflow.com/supsink/mammoset-segmentation-and-pose
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 5, 2024
    Dataset authored and provided by
    SupsinK
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Marmoset Polygons
    Description

    Mammoset Segmentation And Pose

    ## Overview
    
    Mammoset Segmentation And Pose is a dataset for instance segmentation tasks - it contains Marmoset annotations for 5,315 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  6. AP values of different methods on the three datasets. AP values include four...

    • figshare.com
    xls
    Updated May 7, 2025
    Cite
    Xunqian Xu; Tao Wu; Zhongbao Du; Hui Rong; Siwen Wang; Shue Li; Dakai Chen (2025). Ap values of different methods on the three datasets. AP values include four indicators: AP@50, AP@75, AP@M, and AP@L. [Dataset]. http://doi.org/10.1371/journal.pone.0318578.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    May 7, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Xunqian Xu; Tao Wu; Zhongbao Du; Hui Rong; Siwen Wang; Shue Li; Dakai Chen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    AP values of different methods on the three datasets. AP values include four indicators: AP@50, AP@75, AP@M, and AP@L.

  7. Worm Pose Estimation Segmentation Dataset

    • universe.roboflow.com
    zip
    Updated May 27, 2023
    Cite
    Universidad de Jaen (2023). Worm Pose Estimation Segmentation Dataset [Dataset]. https://universe.roboflow.com/universidad-de-jaen/worm-pose-estimation-segmentation/dataset/3
    Explore at:
    Available download formats: zip
    Dataset updated
    May 27, 2023
    Dataset authored and provided by
    Universidad de Jaen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Worms Polygons
    Description

    Worm Pose Estimation Segmentation

    ## Overview
    
    Worm Pose Estimation Segmentation is a dataset for instance segmentation tasks - it contains Worms annotations for 2,040 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  8. Shirt Pose Segmentation Dataset

    • universe.roboflow.com
    zip
    Updated Jun 27, 2024
    Cite
    raghavendra (2024). Shirt Pose Segmentation Dataset [Dataset]. https://universe.roboflow.com/raghavendra-drog3/shirt-pose-segmentation
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 27, 2024
    Dataset authored and provided by
    raghavendra
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Clothe Polygons
    Description

    Shirt Pose Segmentation

    ## Overview
    
    Shirt Pose Segmentation is a dataset for instance segmentation tasks - it contains Clothe annotations for 368 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  9. FPS values of different methods on the three datasets.

    • plos.figshare.com
    xls
    Updated May 7, 2025
    Cite
    Xunqian Xu; Tao Wu; Zhongbao Du; Hui Rong; Siwen Wang; Shue Li; Dakai Chen (2025). FPS values of different methods on the three datasets. [Dataset]. http://doi.org/10.1371/journal.pone.0318578.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    May 7, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Xunqian Xu; Tao Wu; Zhongbao Du; Hui Rong; Siwen Wang; Shue Li; Dakai Chen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    FPS values of different methods on the three datasets.

  10. AP values of three methods on the three datasets, including four indicators:...

    • plos.figshare.com
    xls
    Updated May 7, 2025
    Cite
    Xunqian Xu; Tao Wu; Zhongbao Du; Hui Rong; Siwen Wang; Shue Li; Dakai Chen (2025). AP values of three methods on the three datasets, including four indicators: AP @50, AP @75, AP @M, and AP @L. Experiment one is the baseline model without adding other modules, experiment two is the baseline model with the LKA module added, and experiment three is the baseline model with the SimDLKA module added. [Dataset]. http://doi.org/10.1371/journal.pone.0318578.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    May 7, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Xunqian Xu; Tao Wu; Zhongbao Du; Hui Rong; Siwen Wang; Shue Li; Dakai Chen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    AP values of three methods on the three datasets, including four indicators: AP @50, AP @75, AP @M, and AP @L. Experiment one is the baseline model without adding other modules, experiment two is the baseline model with the LKA module added, and experiment three is the baseline model with the SimDLKA module added.

  11. Standing_pose Dataset

    • universe.roboflow.com
    zip
    Updated Jul 3, 2025
    Cite
    StandPose (2025). Standing_pose Dataset [Dataset]. https://universe.roboflow.com/standpose-6upkf/standing_pose/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 3, 2025
    Dataset authored and provided by
    StandPose
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Sitting Pose Polygons
    Description

    Standing_Pose

    ## Overview
    
    Standing_Pose is a dataset for instance segmentation tasks - it contains Sitting Pose annotations for 391 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  12. The Insect Hotel Dataset: A photorealistic synthetic dataset for pose...

    • zenodo.org
    application/gzip, tar
    Updated Apr 11, 2025
    Cite
    Martin Günther; Lennart Niecksch (2025). The Insect Hotel Dataset: A photorealistic synthetic dataset for pose estimation and panoptic segmentation [Dataset]. http://doi.org/10.5281/zenodo.15190123
    Explore at:
    Available download formats: tar, application/gzip
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Martin Günther; Lennart Niecksch
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Insect Hotel Dataset is a photorealistic synthetic dataset designed for pose estimation and panoptic segmentation tasks. It contains 20,000 synthetically generated photorealistic images of objects used in a human-robot collaborative assembly scenario. The dataset was created using NViSII. It also includes the 3D object meshes and YOLOv8 model weights.

    This dataset accompanies the following upcoming publication:

    Juan Carlos Saborío, Marc Vinci, Oscar Lima, Sebastian Stock, Lennart Niecksch, Martin Günther, Joachim Hertzberg, and Martin Atzmüller (2025): “Uncertainty-Resilient Active Intention Recognition for Robotic Assistants”. (submitted)

    File Structure

    To facilitate easier downloading, the dataset has been split into 10 parts. Each part is further divided into three archives:

    • RGB images + JSON annotations

    • Depth images (optional)

    • Instance segmentation images (optional)

    To use the complete dataset, download all 30 archives and extract them into the same root folder, so that the depth and segmentation images are located alongside the corresponding RGB and JSON files.

    The dataset format (coordinate systems, conventions, and JSON fields) follows the structure documented here.

    Contents of the archives:

    .
    ├── insect_hotel_20k_00.tgz              # RGB images + annotation JSON files
    │   └── 00                               # archive index (00...09)
    │       ├── 0000                         # scene index (0000...0099), each with 20 images in front of the same background
    │       │   ├── 00000.jpg                # RGB image
    │       │   ├── 00000.json               # pose, bounding boxes, etc.
    │       │   ├── [...]
    │       │   ├── 00019.jpg
    │       │   ├── 00019.json
    │       │   ├── _camera_settings.json    # camera intrinsics
    │       │   └── _object_settings.json    # object metadata
    │       ├── [...]
    │       └── 0099
    ├── insect_hotel_20k_00.depth.tgz        # Depth images (.exr)
    │   └── 00
    │       └── 0000
    │           ├── 00000.depth.exr
    │           └── [...]
    ├── insect_hotel_20k_00.seg.tgz          # Instance segmentation images (.exr)
    │   └── 00
    │       └── 0000
    │           ├── 00000.seg.exr
    │           └── [...]
    └── insect_hotel_20k_01.tgz
        └── 01
            └── 0000
                ├── 00000.jpg
                ├── 00000.json
                └── [...]
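
    Once all archives are extracted into a common root, the layout above can be traversed programmatically. A minimal sketch follows; the root folder name is an assumption, and the glob patterns simply mirror the tree shown above.

    # Hedged sketch: pair each RGB frame with its JSON annotation and the per-scene camera intrinsics.
    import json
    from pathlib import Path

    root = Path("insect_hotel_20k")                                       # assumed extraction folder
    for scene in sorted(root.glob("[0-9][0-9]/[0-9][0-9][0-9][0-9]")):    # e.g. 00/0000 ... 09/0099
        cam = json.loads((scene / "_camera_settings.json").read_text())   # camera intrinsics
        for img in sorted(scene.glob("[0-9]*.jpg")):
            ann = json.loads(img.with_suffix(".json").read_text())        # pose, bounding boxes, etc.
            # hand img / ann / cam to training or visualization code here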

    3D Meshes

    The file meshes.tgz contains all object meshes used for training.

    Insect hotel parts (used in the assembly task)

    • bright_green_part

    • dark_green_part

    • magenta_part

    • purple_part

    • red_part

    • yellow_part

    Other objects

    • klt — “Kleinladungsträger” (small load carrier / blue box)

    • multimeter

    • power_drill_with_grip

    • relay

    • screwdriver

    Additionally, the images include various distractor objects from the Google Scanned Objects (GSO) dataset. The corresponding meshes are not included here but can be obtained directly from the GSO dataset.

    YOLOv8 Model

    The file yolov8_weights.tgz contains a YOLOv8 model that was trained on a subset of the object classes. The class index mapping is as follows:

    0: bright_green_part
    1: dark_green_part
    2: magenta_part
    3: purple_part
    4: red_part
    5: yellow_part
    6: klt

    Helper utilities for converting the DOPE format to YOLO format, along with scripts for training, inference, and visualization, are available via:

    git clone -b insect_hotel https://github.com/DFKI-NI/yolo8_keypoint_utils.git
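
    For a quick sanity check, the released weights can be loaded with the ultralytics package. The checkpoint filename inside yolov8_weights.tgz and the example image path below are assumptions.

    # Hedged sketch: run the released YOLOv8 detector on one dataset frame.
    from ultralytics import YOLO

    model = YOLO("yolov8_weights/best.pt")                 # assumed checkpoint name
    result = model("00/0000/00000.jpg")[0]                 # any RGB frame from the dataset
    for box in result.boxes:
        cls_id = int(box.cls)                              # index into the 0..6 mapping listed above
        print(result.names[cls_id], float(box.conf), [round(v, 1) for v in box.xyxy[0].tolist()])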

  13. Abb_pose_estimation_dataset Dataset

    • universe.roboflow.com
    zip
    Updated May 5, 2025
    Cite
    ABBRock Bolt Detection (2025). Abb_pose_estimation_dataset Dataset [Dataset]. https://universe.roboflow.com/abbrock-bolt-detection/abb_pose_estimation_dataset/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    May 5, 2025
    Dataset authored and provided by
    ABBRock Bolt Detection
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Group2 Polygons
    Description

    ABB_Pose_Estimation_Dataset

    ## Overview
    
    ABB_Pose_Estimation_Dataset is a dataset for instance segmentation tasks - it contains Group2 annotations for 252 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  14. Data from: Health&Gait: a dataset for gait-based analysis

    • zenodo.org
    • produccioncientifica.uca.es
    bin, zip
    Updated Jan 15, 2025
    Cite
    Jorge Zafra-Palma; Nuria Marín-Jiménez; José Castro-Piñero; Magdalena Cuenca-García; Rafael Muñoz-Salinas; Manuel J. Marin-Jimenez (2025). Health&Gait: a dataset for gait-based analysis [Dataset]. http://doi.org/10.5281/zenodo.14039922
    Explore at:
    Available download formats: zip, bin
    Dataset updated
    Jan 15, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jorge Zafra-Palma; Nuria Marín-Jiménez; José Castro-Piñero; Magdalena Cuenca-García; Rafael Muñoz-Salinas; Manuel J. Marin-Jimenez
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains the Health&Gait dataset, the first that enables gait analysis using visual information without specific sensors, relying solely on cameras. The dataset includes multimodal features extracted from videos, and gait parameters and anthropometric measurements from each participant. This dataset is intended for use in health, sports and gait analysis research.

    Health&Gait consists of 1,564 videos of 398 participants walking in a controlled closed environment, where each video has the following associated information:

    • 2D pose estimation of their joints by AlphaPose (JSON format files).
    • Semantic segmentation by DensePose (PNG images).
    • Optical flow by TVL1 and GMFlow (PNG images).
    • Silhouette by YOLOV8 (JPEG images).

    Moreover, for each subject, the following data has been recorded:

    • Anthropometric measurements.
    • Gait parameters obtained from OptoGait and MuscleLAB.
    • Gait parameters estimated from pose information.
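
    The 2D pose files are assumed here to follow AlphaPose's standard results layout (a list of per-detection records whose keypoints field is a flat list of x, y, score triplets); the file name below is a placeholder.

    # Hedged sketch: read one AlphaPose result file from the dataset.
    import json
    from pathlib import Path

    records = json.loads(Path("alphapose-results.json").read_text())   # placeholder path
    for det in records:
        kp = det["keypoints"]                                          # [x1, y1, score1, x2, y2, score2, ...]
        joints = [(kp[i], kp[i + 1], kp[i + 2]) for i in range(0, len(kp), 3)]
        print(det.get("image_id"), len(joints), "joints")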

  15. Humanintentpose Dataset

    • universe.roboflow.com
    zip
    Updated Dec 11, 2024
    Cite
    HRI (2024). Humanintentpose Dataset [Dataset]. https://universe.roboflow.com/hri-raegq/humanintentpose-y5vrv
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 11, 2024
    Dataset authored and provided by
    HRI
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Intent Polygons
    Description

    Humanintentpose

    ## Overview
    
    Humanintentpose is a dataset for instance segmentation tasks - it contains Intent annotations for 2,137 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  16. Duplo 3.0 Dataset

    • universe.roboflow.com
    zip
    Updated Sep 20, 2024
    Cite
    Pose detection Duplo (2024). Duplo 3.0 Dataset [Dataset]. https://universe.roboflow.com/pose-detection-duplo/duplo-3.0/dataset/12
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 20, 2024
    Dataset authored and provided by
    Pose detection Duplo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Bricks Ss1h Polygons
    Description

    Duplo 3.0

    ## Overview
    
    Duplo 3.0 is a dataset for instance segmentation tasks - it contains Bricks Ss1h annotations for 556 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  17. Humanposition Dataset

    • universe.roboflow.com
    zip
    Updated Apr 19, 2023
    Cite
    udg (2023). Humanposition Dataset [Dataset]. https://universe.roboflow.com/udg-6k8v0/humanposition
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 19, 2023
    Dataset authored and provided by
    udg
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    People Pose Polygons
    Description

    HumanPosition

    ## Overview
    
    HumanPosition is a dataset for instance segmentation tasks - it contains People Pose annotations for 465 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  18. Stick Detection Dataset

    • universe.roboflow.com
    zip
    Updated Jun 17, 2025
    Cite
    Agrowizard (2025). Stick Detection Dataset [Dataset]. https://universe.roboflow.com/agrowizard/stick-detection/dataset/22
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 17, 2025
    Dataset authored and provided by
    Agrowizard
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Sticks Polygons
    Description

    Here are a few use cases for this project:

    1. Forest Cleanup and Maintenance: This model could be used by park services or forest maintenance crews to identify areas with abundant fallen sticks, streamlining the cleanup process and reducing the risk of forest fires.

    2. Robotics & Automation: In a scenario where robots are efficient in picking up smaller objects, robots can be programmed to recognize sticks and gather them in a certain place for disposal or resource utilization, be it in commercial, outdoor, or home environments.

    3. Construction Safety: Construction companies could use this model to identify sticks and other similar objects on construction sites that may pose a safety risk, creating a more secure working environment and preventing possible accidents.

    4. Outdoor Games Tool: Some outdoor games or sports (like fetch with dogs or outdoor survival challenges) might require finding sticks. This model can be used in apps to help players locate sticks more efficiently.

    5. Outdoor Wildlife Research: Researchers studying certain species might benefit from identifying areas with more stick/twig availability, as this may influence the habitation patterns of certain animals or insects.

  19. Crak Segmentation Dataset

    • universe.roboflow.com
    zip
    Updated Oct 8, 2023
    + more versions
    Cite
    instance segmentation (2023). Crak Segmentation Dataset [Dataset]. https://universe.roboflow.com/instance-segmentation-fxbem/crak-segmentation/model/2
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 8, 2023
    Dataset authored and provided by
    instance segmentation
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Crack Polygons
    Description

    Here are a few use cases for this project:

    1. Infrastructure Maintenance: The model can be used by government agencies or private companies to assess the condition of roads, bridges, and buildings in real time. Regular scans can help detect emerging cracks and, consequently, worrisome structural issues in their early stages, enabling preventive maintenance.

    2. Construction Quality Assurance: Construction firms can use the model to check and ensure the integrity of their work. The model can be used to inspect walls, floors, and other structures for cracks that indicate possible construction faults.

    3. Safety Inspections: The model can be useful for companies dealing with safety inspections, such as fire departments or safety regulators, to identify cracks in various types of infrastructure like pipelines, chemical plants, or nuclear facilities that may pose accident risks.

    4. Geological Study: Geological or seismological researchers can use this model to identify and categorize cracks in geological structures for analysis, potentially aiding in predicting earthquakes or land shifts.

    5. Art Restoration: Museums or art restoration firms can use the model to detect and monitor cracks in artwork over time, aiding in the preservation and restoration process.

  20. The performance of S-YOFEO model on MOT17.

    • plos.figshare.com
    xls
    Updated Jun 4, 2025
    + more versions
    Cite
    Wenshun Sheng; Jiahui Shen; Qi Chen; Qiming Huang (2025). The performance of S-YOFEO model on MOT17. [Dataset]. http://doi.org/10.1371/journal.pone.0322919.t004
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 4, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Wenshun Sheng; Jiahui Shen; Qi Chen; Qiming Huang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A real-time, stable multi-target tracking method (S-YOFEO), based on an enhanced You Only Look Once-v8 (YOLOv8) detector and an optimized Simple Online and Realtime Tracking with a Deep association metric (DeepSORT) tracker, is proposed to address target ID switching and loss caused by increasingly complex real-world backgrounds. Changes in weather or lighting conditions, as well as the presence of numerous visually similar objects, can lead to target ID switching and tracking loss and thus affect the system's reliability; the unpredictability of pedestrian movement further increases the difficulty of maintaining consistent and accurate tracking.

    To strengthen the handling of small-scale features, a small-target detection head is first added to the detection layer of YOLOv8, increasing its detection resolution so that more detailed information is collected while keeping detection precise and fast. Second, the Omni-Scale Network (OSNet) is adopted as the feature extraction network to fuse complex and similar feature information accurately and efficiently, given the limited computational power of DeepSORT's original feature extractor. Third, to address the limitations of traditional Kalman filtering for nonlinear motion trajectories, a novel adaptive forgetting Kalman filter algorithm (FSA) is devised to improve prediction precision and parameter updates under the uncertain movement speeds and trajectories of pedestrians in real scenarios. Next, Efficient-Intersection over Union (EIOU) replaces Complete-Intersection over Union (CIOU) in DeepSORT to speed up convergence and improve the matching effect during association. Finally, One-Shot Aggregation (OSA) is used as the trajectory feature extractor to cope with the various noise interferences in complex scenes; OSA is highly sensitive to information at different scales, and its one-shot aggregation substantially reduces the model's computational overhead.

    In the experiments, S-YOFEO reaches a precision of 78.2% and a speed of 56.0 frames per second (FPS), fully meeting the demand for efficient and accurate tracking in complex real-world traffic environments. This performance gain allows S-YOFEO to contribute to more reliable and efficient tracking systems across a wide range of industries and to support intelligent transformation and upgrading.
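
    The abstract describes a detect-then-associate pipeline: YOLOv8 detections fed to a modified DeepSORT. The sketch below is not S-YOFEO; it only illustrates that general structure using the stock ultralytics tracker (BoT-SORT/ByteTrack by default), with the video path as a placeholder.

    # NOT the S-YOFEO implementation: a generic YOLOv8 detect-and-track loop showing the
    # detection -> association structure described above. The video path is a placeholder.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    for frame_result in model.track(source="mot17_sequence.mp4", stream=True, persist=True):
        if frame_result.boxes.id is None:                  # no confirmed tracks in this frame
            continue
        for box, track_id in zip(frame_result.boxes.xyxy, frame_result.boxes.id):
            print(int(track_id), [round(v, 1) for v in box.tolist()])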

