Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Oriented Bounding Box is a dataset for instance segmentation tasks - it contains Crops Path SYBD annotations for 1,518 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
The crosswalk polygons can be utilized for safety, mobility, and other analyses. The model builds upon YOLOv8 and incorporates oriented bounding boxes (OBB), improving detection accuracy by precisely marking crosswalks regardless of their orientation. Several strategies are adopted to enhance the baseline YOLOv8 model, including a Convolutional Block Attention module, a dual-branch Spatial Pyramid Pooling-Fast module, and cosine annealing of the learning rate. We have also developed a dataset comprising over 23,000 annotated crosswalk instances to train and validate the proposed model and its variants. The best-performing model achieves a precision of 96.5% and a recall of 93.3% on data collected in Massachusetts, demonstrating its accuracy and efficiency.
From the MassGIS website, we downloaded images for 2019 and 2021. The image dataset for each year comprises over 10,000 high-resolution images (tiles). Each image has 100 million pixels (10,000 x 10,000 pixels), and each pixel represents about 6 inches (15 centimeters) on the ground. This resolution provides sufficient detail for identifying small features such as pedestrian crosswalks.
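As a rough starting point for reproducing the baseline, the minimal sketch below assumes the ultralytics package and a hypothetical dataset config named crosswalks-obb.yaml; it covers only the stock YOLOv8-OBB model with a cosine learning-rate schedule, not the custom attention or dual-branch SPPF modifications described above.

```python
from ultralytics import YOLO

# Stock oriented-bounding-box checkpoint; the attention and dual-branch SPPF
# modifications described above would require a custom model definition.
model = YOLO("yolov8n-obb.pt")

# cos_lr=True enables ultralytics' cosine learning-rate annealing.
# "crosswalks-obb.yaml" is a hypothetical dataset config pointing at
# OBB-annotated crosswalk tiles.
model.train(data="crosswalks-obb.yaml", epochs=100, imgsz=1024, cos_lr=True)
```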
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Millions of river barriers have been constructed worldwide for flood control, hydropower generation, and agricultural irrigation. The lack of comprehensive records on the locations and types of river barriers, particularly small barriers such as weirs, limits our ability to assess their societal and environmental impacts. Integrating satellite imagery with object detection algorithms holds promise for the automatic identification of river barriers on a global scale. However, achieving this objective requires high-quality image datasets for algorithm training and testing. Hence, this study presents a large-scale dataset named the River Barrier Object Detection (RBOD), making it the first publicly available dataset specifically for river barrier object detection.
The RBOD dataset comprises 4,872 high-resolution satellite images and 11,741 meticulously annotated oriented bounding boxes (OBBs). In this dataset, river barriers can be classified into five classes: dams, groynes, locks, sluices, and weirs. The effectiveness of the RBOD dataset was validated using five typical object detection algorithms, namely YOLOV8-OBB, Oriented R-CNN, Rotated Faster R-CNN, R3Det, and Rotated RetinaNet, which provide performance benchmarks for future applications.
The RBOD dataset consists of three folders (namely, 'images', 'labels_voc', and 'labels_yolo') and a .txt file named 'class':
·'images' folder - contains 4,872 satellite images (.jpg).
·'labels_voc' folder - contains 11,741 .xml files with annotations in PASCAL VOC format. In these .xml files, the position of an OBB is represented as (cx, cy, width, height, angle), where 'cx' and 'cy' denote the center coordinates, 'width' and 'height' are the side lengths along the x- and y-axes, and 'angle' is the clockwise rotation angle relative to the x-axis.
·'labels_yolo' folder - contains 11,741 .txt files with annotations in YOLO format. In these .txt files, an OBB is represented as (class_index, x1, y1, x2, y2, x3, y3, x4, y4), where 'class_index' denotes the target category and 'x1, y1, x2, y2, x3, y3, x4, y4' are the normalized coordinates of the four corners of the bounding box (a conversion sketch between the two formats follows this list).
·'class' txt file - records the river barrier classes and their indices, which correspond to the 'class_index' values above.
Note that each folder is split into three subfolders: train (70%), test (20%), and val (10%).
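Since the two label formats describe the same geometry, converting between them is a small amount of trigonometry. The sketch below is a minimal helper, assuming the VOC-style angle is stored in degrees and measured clockwise in image coordinates (y axis pointing down); it maps one (cx, cy, width, height, angle) box in pixels to the four normalized corners used by the YOLO labels.

```python
import math

def voc_obb_to_yolo_corners(cx, cy, w, h, angle_deg, img_w, img_h):
    """Convert a VOC-style OBB (pixels) to four normalized YOLO corners.

    Assumes the angle is in degrees, measured clockwise from the x-axis in
    image coordinates (y pointing down), per the dataset description.
    """
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    # Corner offsets from the box center before rotation.
    offsets = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    corners = []
    for dx, dy in offsets:
        # Rotation that appears clockwise when the y axis points down.
        x = cx + dx * cos_a - dy * sin_a
        y = cy + dx * sin_a + dy * cos_a
        corners.extend([x / img_w, y / img_h])
    return corners  # [x1, y1, x2, y2, x3, y3, x4, y4], normalized to [0, 1]
```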
https://cdla.io/sharing-1-0/
Imagine a world where machines see like humans—where your model doesn’t just scan pixels, but truly understands faces.
Face detection is the gateway to computer vision. From unlocking smartphones to powering surveillance systems, it’s the first step toward machines that understand the world as we do.
This dataset is crafted for creators, engineers, and visionaries who want to build models that don't just see — they recognize.
Originally prepared and exported using Roboflow, this dataset includes a diverse collection of face images, carefully annotated for object detection tasks. It’s designed to help you train accurate, real-time face detection models using cutting-edge deep learning architectures.
The structure is simple and scalable — optimized for quick experimentation and production-level deployment.
Focused and minimal. One class. One purpose.
Because face detection is more than bounding boxes — it’s about interaction, identity, and trust. Whether you’re building an AI that understands presence, or a system that reacts to people in real-time, this dataset gives you the data to begin.
This dataset is released under Creative Commons Zero (CC0 1.0). Use it freely — in research, in production, or anywhere your ideas take you.
Description:
This dataset has been specifically curated for cow pose estimation, designed to enhance animal behavior analysis and monitoring through computer vision techniques. The dataset is annotated with 12 keypoints on the cow’s body, enabling precise tracking of body movements and posture. It is structured in the COCO format, making it compatible with popular deep learning models like YOLOv8, OpenPose, and others designed for object detection and keypoint estimation tasks.
Applications:
This dataset is ideal for agricultural tech solutions, veterinary care, and animal behavior research. It can be used in various use cases such as health monitoring, activity tracking, and early disease detection in cattle. Accurate pose estimation can also assist in optimizing livestock management by understanding animal movement patterns and detecting anomalies in their gait or behavior.
Keypoint Annotations:
The dataset includes the following 12 keypoints, strategically marked to represent significant anatomical features of cows:
Nose: Essential for head orientation and overall movement tracking.
Right Eye: Helps in head pose estimation.
Left Eye: Complements the right eye for accurate head direction.
Neck (side): Marks the side of the neck, key for understanding head and body coordination.
Left Front Hoof: Tracks the front left leg movement.
Right Front Hoof: Tracks the front right leg movement.
Left Back Hoof: Important for understanding rear leg motion.
Right Back Hoof: Completes the leg movement tracking for both sides.
Backbone (side): Vital for posture and overall body orientation analysis.
Tail Root: Used for tracking tail movements and posture shifts.
Backpose Center (near tail’s midpoint): Marks the midpoint of the back, crucial for body stability and movement analysis.
Stomach (center of side pose): Helps in identifying body alignment and weight distribution.
Dataset Format:
The data is structured in the COCO format, with annotations that include image coordinates for each keypoint. This format is highly suitable for integration into popular deep learning frameworks. Additionally, the dataset includes metadata such as bounding boxes, image sizes, and segmentation masks to provide detailed context for each cow in an image.
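As a rough illustration of that layout, the sketch below reads a COCO-style annotation file with Python's standard json module and unpacks the flat [x, y, visibility] keypoint triplets for each annotated cow. The file name is hypothetical, and the sketch assumes the keypoint names are listed on the category entry as in standard COCO keypoint annotations.

```python
import json

# Hypothetical file name; use the annotation file shipped with the dataset.
with open("annotations/cow_keypoints.json") as f:
    coco = json.load(f)

# Standard COCO keypoint datasets list the ordered keypoint names here.
keypoint_names = coco["categories"][0]["keypoints"]

for ann in coco["annotations"]:
    kps = ann["keypoints"]  # flat list: [x1, y1, v1, x2, y2, v2, ...]
    triplets = zip(kps[0::3], kps[1::3], kps[2::3])
    for name, (x, y, v) in zip(keypoint_names, triplets):
        # v == 0: not labeled, v == 1: labeled but occluded, v == 2: visible
        if v > 0:
            print(f"image {ann['image_id']}: {name} at ({x:.1f}, {y:.1f})")
```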
Compatibility:
This dataset is optimized for use with cutting-edge pose estimation models such as YOLOv8 and other keypoint detection models like DeepLabCut and HRNet, enabling efficient training and inference for cow pose tracking. It can be seamlessly integrated into existing machine learning pipelines for both real-time and post-processed analysis.
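For quick experimentation, inference with the ultralytics package might look like the minimal sketch below. The checkpoint shown is the stock COCO person-pose model and the image path is hypothetical; weights fine-tuned on the 12-keypoint cow annotations would be substituted in practice.

```python
from ultralytics import YOLO

# Stock pose checkpoint for illustration; swap in weights fine-tuned on the
# 12-keypoint cow annotations for actual cow pose tracking.
model = YOLO("yolov8n-pose.pt")

results = model("cow_example.jpg")  # hypothetical image path
for result in results:
    # result.keypoints.xy holds per-instance keypoint coordinates
    # with shape (num_detections, num_keypoints, 2).
    print(result.keypoints.xy)
```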
This dataset is sourced from Kaggle.