Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This database contains 4976 planetary images of boulder fields located on Earth, Mars, and the Moon. The data were collected during the BOULDERING Marie Skłodowska-Curie Global Fellowship between October 2021 and 2024. The data have already been split into train, validation, and test datasets, but feel free to re-organize the labels at your convenience. For each image, all of the boulder outlines within the image were carefully mapped in QGIS. More information about the labelling procedure can be found in the following manuscript: https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2023JE008013. This dataset differs from the previous dataset included with the manuscript (https://zenodo.org/records/8171052) in that it contains more mapped images, especially of boulder populations around young impact structures on the Moon (cold spots). In addition, the boulder outlines were pre-processed so that they can be ingested directly into YOLOv8. A description of what is what is given in the README.txt file (along with instructions on how to load the custom datasets in Detectron2 and YOLO). Most of the other files are self-explanatory; please see the previous dataset or the manuscript for more information. If you want more information about a specific lunar or martian planetary image, the ID of the image is still available in the file name. Use this ID to find more information (e.g., for M121118602_00875_image.png, the ID M121118602 can be used on https://pilot.wr.usgs.gov/). I will also upload the raw data from which this pre-processed dataset was generated (see https://zenodo.org/records/14250970). Thanks to this database, you can easily train Detectron2 Mask R-CNN or YOLO instance segmentation models to automatically detect boulders.
How to cite: Please refer to the "how to cite" section of the readme file of https://github.com/astroNils/YOLOv8-BeyondEarth.
Structure:
.
└── boulder2024/
    ├── jupyter-notebooks/
    │   └── REGISTERING_BOULDER_DATASET_IN_DETECTRON2.ipynb
    ├── test/
    │   └── images/
    │       ├──
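The jupyter-notebooks/REGISTERING_BOULDER_DATASET_IN_DETECTRON2.ipynb notebook and README.txt describe the exact loading procedure. Purely as a hedged sketch (the annotation and image paths below are hypothetical placeholders, and it assumes COCO-style JSON files are provided for each split), registering a split in Detectron2 typically looks like this:

```python
# Hedged sketch only: paths are hypothetical, see README.txt for the real layout.
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import register_coco_instances

# Assumes a COCO-style JSON annotation file exists for each split.
register_coco_instances(
    "boulder2024_train", {},
    "boulder2024/train/annotations.json",   # hypothetical path
    "boulder2024/train/images",             # hypothetical path
)
register_coco_instances(
    "boulder2024_val", {},
    "boulder2024/validation/annotations.json",  # hypothetical path
    "boulder2024/validation/images",            # hypothetical path
)

print(MetadataCatalog.get("boulder2024_train"))
print(len(DatasetCatalog.get("boulder2024_train")), "training images")
```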
This dataset was created by Y-Haneji
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Example segmentation dataset for my image segmentation articles.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This collection contains the trained models and object detection results of 2 architectures from the Detectron2 library, evaluated on the MS COCO val2017 dataset under different JPEG compression levels Q = {5, 12, 19, 26, 33, 40, 47, 54, 61, 68, 75, 82, 89, 96} (14 levels per trained model).
Architectures:
- F50 – Faster R-CNN on ResNet-50 with FPN
- R50 – RetinaNet on ResNet-50 with FPN
Training types:
- D2 – Detectron2 Model ZOO pre-trained 1x model (90,000 iterations, batch 16)
- STD – standard 1x training (90,000 iterations) on the original train2017 dataset
- Q20 – 1x training (90,000 iterations) on train2017 degraded to Q=20
- Q40 – 1x training (90,000 iterations) on train2017 degraded to Q=40
- T20 – extra 1x training on top of D2 on train2017 degraded to Q=20
- T40 – extra 1x training on top of D2 on train2017 degraded to Q=40
Model and metrics files:
- models_FasterRCNN.tar.gz (F50-STD, F50-Q20, …)
- models_RetinaNet.tar.gz (R50-STD, R50-Q20, …)
For every model there are 3 files:
- config.yaml – the Detectron2 config of the model.
- model_final.pth – the weights (training snapshot) in PyTorch format.
- metrics.json – training metrics (time, total loss, etc.) logged every 20 iterations.
The D2 models are not included because they are available from the Detectron2 Model ZOO as faster_rcnn_R_50_FPN_1x (F50-D2) and retinanet_R_50_FPN_1x (R50-D2).
Result files:
- F50-results.tar.gz – results for the Faster R-CNN models (including D2).
- R50-results.tar.gz – results for the RetinaNet models (including D2).
For every model there are 14 subdirectories, e.g. evaluator_dump_R50x1_005 through evaluator_dump_R50x1_096, one for each JPEG Q value. Each such folder contains:
- coco_instances_results.json – all detected objects (image id, bounding box, class index, and confidence).
- results.json – AP metrics as computed by the COCO API.
Source code for processing the data is published at https://github.com/tgandor/urban_oculus. Additional dependencies for the source code: COCO API, Detectron2.
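For orientation, here is a hedged sketch of how one of these result dumps could be re-scored with the COCO API; the file paths are illustrative, not the exact layout of the extracted archives:

```python
# Hedged sketch: re-scoring one result dump with pycocotools.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")   # val2017 ground truth
coco_dt = coco_gt.loadRes("evaluator_dump_R50x1_075/coco_instances_results.json")

ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()   # prints AP/AR, comparable to the values stored in results.json
```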
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
As an electronics and computer science student, I often work with microcontrollers and microcomputers. That's why, when I was looking for objects to build my own object detection dataset, they instantly came to mind.
If you want to get started using the dataset, feel free to check out my blog posts showing you how to train a model on it with the Tensorflow Object Detection API or Detectron2.
This dataset includes all letters A through Z in American Sign Language, labeled with polygon labels. See this blog post for how to train with Detectron2: https://blog.roboflow.com/p/4482cb2b-f378-48f6-bd58-df2b784670cf/
A mostly up-to-date mirror of 10.75.129.40/DataInsights/GE-medicalimaging-train.git (can be ahead, since it is used for testing).
Only for Genpact DS Team.
A modularized Detectron2 codebase for training, prediction, submission, and visualization on Kaggle's
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
EfficientDet (PyTorch)
This is a work-in-progress PyTorch implementation of EfficientDet.
It is based on the official Tensorflow implementation by Mingxing Tan and the Google Brain team, and on the paper by Mingxing Tan, Ruoming Pang, and Quoc V. Le, "EfficientDet: Scalable and Efficient Object Detection". I am aware there are other PyTorch implementations. Their approach didn't fit well with my aim to replicate the Tensorflow models closely enough to allow weight ports while still maintaining a PyTorch feel and a high degree of flexibility for future additions. So, this is built from scratch and leverages my previous EfficientNet work.
Updates / Tasks
2020-04-15 Taking a pause on training; some high-priority things came up. There are signs of life on the training branch: I was working on the basic augs before the priority switch, and the loss fn appeared to be doing something sane with distributed training working, but there is no proper eval yet and the init is not correct yet. I will get to it, with a SOTA training config and good performance as the end goal (as with my EfficientNet work).
2020-04-11 Cleaned up post-processing. Less code and a five-fold throughput increase on the smaller models. D0 runs at > 130 img/s on a single 2080Ti, D1 at > 130 img/s on dual 2080Ti, up to D7 at 8.5 img/s.
2020-04-10 Replaced generate_detections with a PyTorch impl using torchvision batched_nms. Significant performance increase with minor (+/- .001 mAP) score differences. Quite a bit faster than the original TF impl on a GPU now.
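As a point of reference, the torchvision primitive mentioned here can be exercised in isolation like this (a standalone toy example, not code from this repo):

```python
# Toy example of torchvision's class-aware NMS.
import torch
from torchvision.ops import batched_nms

boxes = torch.tensor([[0., 0., 10., 10.],
                      [1., 1., 11., 11.],
                      [50., 50., 60., 60.]])
scores = torch.tensor([0.9, 0.8, 0.7])
classes = torch.tensor([0, 0, 1])          # NMS is applied per class
keep = batched_nms(boxes, scores, classes, iou_threshold=0.5)
print(keep)   # indices of surviving boxes, here tensor([0, 2])
```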
2020-04-09 Initial code with working validation posted. Yes, it's a little slow, but I think it's faster than the official impl on a GPU if you leave AMP enabled. Post-processing needs some love.
Core Tasks
- Feature extraction from my EfficientNet implementations (https://github.com/rwightman/gen-efficientnet-pytorch or https://github.com/rwightman/pytorch-image-models)
- Low-level blocks / helpers (SeparableConv, create_pool2d (same padding), etc.)
- PyTorch implementation of BiFPN, BoxNet, ClassNet modules and related submodules
- Port Tensorflow checkpoints to PyTorch -- initial D1 checkpoint converted, state_dict loaded, on to validation...
- Basic MS COCO validation script
- Temporary (hacky) COCO dataset and transform
- Port reference TF anchor and object detection code
- Verify model output sanity
- Integrate MS COCO eval metric calcs
- Some cleanup, testing
- Submit to test-dev server, all good
- Add torch hub support and pretrained URL based weight download
- Change module dependencies from 'timm' to minimal 'geffnet' for the backbone, bring some of the layers here -- leaving as timm for now, as the training code will use many timm functions that I leverage to reproduce SOTA EfficientNet training in PyTorch
- Remove redundant bias layers that exist in the official impl and weights
- Add visualization support
- Performance improvements, numpy TF detection code -> optimized PyTorch
- Verify/fix Torchscript and ONNX export compatibility
Possible Future Tasks
- Training (object detection) reimplementation w/ Rand/AutoAugment, etc.
- Training (semantic segmentation) experiments
- Integration with Detectron2 / MMDetection codebases
- Addition and cleanup of EfficientNet-based U-Net and DeepLab segmentation models that I've used in past projects
- Addition and cleanup of OpenImages dataset/training support from a past project
- Exploration of instance segmentation possibilities...
If you are an organization interested in sponsoring any of this work, or prioritization of the possible future directions interests you, feel free to contact me (issue, LinkedIn, Twitter, hello at rwightman dot com). I will set up a GitHub sponsor if there is any interest.
Models

| Variant | Download | mAP (val2017) | mAP (test-dev2017) | mAP (Tensorflow official, test-dev2017) |
|---|---|---|---|---|
| D0 | tf_efficientdet_d0.pth | 32.8 | TBD | 33.8 |
| D1 | tf_efficientdet_d1.pth | 38.5 | TBD | 39.6 |
| D2 | tf_efficientdet_d2.pth | 42.0 | 42.5 | 43 |
| D3 | tf_efficientdet_d3.pth | 45.3 | TBD | 45.8 |
| D4 | tf_efficientdet_d4.pth | 48.3 | TBD | 49.4 |
| D5 | tf_efficientdet_d5.pth | 49.6 | TBD | 50.7 |
| D6 | tf_efficientdet_d6.pth | 50.6 | TBD | 51.7 |
| D7 | tf_efficientdet_d7.pth | 50.9 | 51.2 | 52.2 |

Usage
Environment Setup: Tested in a Python 3.7 or 3.8 conda environment on Linux with:
- PyTorch 1.4
- PyTorch Image Models (timm) 0.1.20, pip install timm or local install from https://github.com/rwightman/pytorch-image-models
- Apex AMP master (as of 2020-04)
NOTE: There is a conflict/bug with Numpy 1.18+ and pycocotools; force-install numpy <= 1.17.5 or the COCO eval will fail. The validation script will still save the output JSON, and that can be run through eval again later.
Dataset Setup MSCOCO 2017 validation data:
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip val2017.zip
unzip annotations_trainval2017.zip
MSCOCO 2017 test-dev data:
wget http://images.cocodataset.org/zips/test2017.zip
unzip -q test2017.zip
wget http://images.cocodat...
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This project aims to train Faster R-CNN on a custom plant disease dataset using Detectron2.
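As a rough, hedged sketch of that kind of pipeline (the dataset name, class count, and hyperparameters below are placeholders rather than this project's actual settings, and the dataset is assumed to be registered beforehand):

```python
# Hedged sketch: fine-tuning a COCO-pretrained Faster R-CNN in Detectron2.
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("plant_disease_train",)   # hypothetical registered dataset
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 3000
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 5             # set to the number of disease classes
cfg.OUTPUT_DIR = "./output"
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```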
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
The dataset is structured with images split into directories; no downscaling was applied.
The following notebook explains how to convert custom annotations to COCO format:
https://www.kaggle.com/sreevishnudamodaran/build-custom-coco-annotations-512x512-tiled
- coco_train
    - images (contains images in jpg format)
        - original_tiff_image_name
            - tile_column_number
                - image
    - ...
    - train.json (contains all the segmentation annotations in COCO format, with proper relative paths of the images)
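As a quick sanity check, the annotations can be browsed with the COCO API. A minimal sketch, assuming the file names stored in train.json are relative to the coco_train directory (adjust the root if they are not):

```python
# Hedged sketch of reading train.json and resolving the relative image paths.
import os
from pycocotools.coco import COCO

coco = COCO("coco_train/train.json")
for img_id in coco.getImgIds()[:5]:
    info = coco.loadImgs(img_id)[0]
    path = os.path.join("coco_train", info["file_name"])   # relative path from the JSON
    anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
    print(path, "-", len(anns), "segmentation annotations")
```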
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
We release a human-centric synthetic data generator, PeopleSansPeople, which contains simulation-ready 3D human assets and a parameterized lighting and camera system, and generates 2D and 3D bounding box, instance and semantic segmentation, and COCO pose labels. Using PeopleSansPeople, we performed benchmark synthetic data training using a Detectron2 Keypoint R-CNN variant [1]. We found that pre-training a network using synthetic data and fine-tuning on target real-world data (few-shot transfer to limited subsets of COCO-person train [2]) resulted in a keypoint AP of 60.37±0.48 (COCO test-dev2017), outperforming models trained with the same real data alone (keypoint AP of 55.80) and pre-trained with ImageNet (keypoint AP of 57.50). This freely available data generator should enable a wide range of research into the emerging field of simulation-to-real transfer learning in the critical area of human-centric computer vision.
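For context, the Detectron2 Keypoint R-CNN family referenced above can be loaded from the Model Zoo as shown below; this is only an illustrative inference sketch (the input image path is a placeholder), not the paper's training setup:

```python
# Hedged sketch: running a zoo Keypoint R-CNN on an arbitrary image.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("person.jpg"))        # any BGR image (placeholder path)
print(outputs["instances"].pred_keypoints.shape)     # (num_people, 17, 3) COCO keypoints
```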
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Description from the "SaRNet: A Dataset for Deep Learning Assisted Search and Rescue with Satellite Imagery" GitHub repository. *The "Note" was added by the Roboflow team.
This is a single-class dataset consisting of tiles of satellite imagery labeled with potential 'targets'. Labelers were instructed to draw boxes around anything they suspected may be a paraglider wing, missing in a remote area of Nevada. Volunteers were shown examples of similar objects already in the environment for comparison. The missing wing, as it was found after 3 weeks, is shown below.
(Image "anomaly": https://michaeltpublic.s3.amazonaws.com/images/anomaly_small.jpg)
The dataset contains the following:
| Set | Images | Annotations |
|---|---|---|
| Train | 1808 | 3048 |
| Validate | 490 | 747 |
| Test | 254 | 411 |
| Total | 2552 | 4206 |
The data is in the COCO format and is directly compatible with Faster R-CNN as implemented in Facebook's Detectron2.
Download the data here: sarnet.zip
Or follow these steps
# download the dataset
wget https://michaeltpublic.s3.amazonaws.com/sarnet.zip
# extract the files
unzip sarnet.zip
**Note:** With Roboflow, you can download the data here (original, raw images, with annotations): https://universe.roboflow.com/roboflow-public/sarnet-search-and-rescue/ (download v1, original_raw-images). Download the dataset in COCO JSON format, or another format of your choice, and import it into Roboflow after unzipping the folder to get started on your project.
Get started with a Faster R-CNN model pretrained on SaRNet: SaRNet_Demo.ipynb
Source code for the paper is located here: SaRNet_train_test.ipynb
@misc{thoreau2021sarnet,
title={SaRNet: A Dataset for Deep Learning Assisted Search and Rescue with Satellite Imagery},
author={Michael Thoreau and Frazer Wilson},
year={2021},
eprint={2107.12469},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
The source data was generously provided by Planet Labs, Airbus Defence and Space, and Maxar Technologies.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the context of this project, the samples for µ-FTIR analysis contained up to a few thousand particles. The integrated particle detection tool (Particle Wizard - OMNIC Picta) performed poorly, and an AI segmentation tool was needed. Using this dataset, we trained a Detectron2 neural network that was used within GEPARD, an open-source software package used to improve Raman and FTIR target acquisition and data analysis. With Roboflow, it is possible to export this dataset to various formats and use the data to train different architectures of segmentation neural networks.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This dataset contains 2,700 augmented images, organized into training and validation folders, and focuses on detecting potholes, cracks, and open manholes on roads. To improve the robustness and generalization capability of detection models, the dataset has been augmented using various techniques that enhance data diversity. Annotations are available for all three categories, making the dataset fully compatible with both YOLO and Faster R-CNN architectures. Specifically, it includes YOLO format (.txt) files for use with YOLOv5, YOLOv7, and YOLOv8, as well as COCO JSON annotations suitable for Faster R-CNN, Detectron2, and MMDetection frameworks. Additionally, the dataset directory contains separate subfolders for each class—potholes, cracks, and open manholes—along with their respective annotation files, which facilitates easier access and class-wise analysis. Overall, this dataset is ready for direct use in modern object detection pipelines.
Included in the Dataset:
- Bounding Box Annotations in YOLO Format (.txt files)
- Format: YOLOv8 & YOLO11 compatible
- Purpose: Ready for training YOLO-based object detection models
Highlights:
- Folder structure: organized into training and validation folders
- Dual format support: YOLO (.txt) and COCO JSON annotations
- Targeted use cases: detection of potholes, cracks, and open manholes
Here's a clear breakdown of the folder structure:
(Screenshot of the folder structure: https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F23345571%2F023b40c98bf858c58394d6ed2393bfc3%2FScreenshot%202025-05-01%20202438.png?generation=1746109541780835&alt=media)
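Since the annotations ship in YOLO format, a minimal hedged training sketch with the Ultralytics API might look like this (road_hazards.yaml is a hypothetical data config pointing at the train/validation folders and the three classes):

```python
# Hedged sketch: training YOLOv8 on the YOLO-format annotations.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                  # COCO-pretrained nano model
model.train(data="road_hazards.yaml", epochs=50, imgsz=640)   # hypothetical data config
metrics = model.val()                       # mAP on the validation split
print(metrics.box.map50)
```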
This dataset contains preprocessed annotations for the IP102 Insect Pest Recognition Dataset converted to COCO format, making it ready for object detection models like DETR, Faster R-CNN, YOLO, and other modern detectors.
IP102 is a large-scale benchmark dataset for insect pest recognition containing:
- 75,222 images of insect pests
- 102 categories of agricultural pests
- Images collected from real agricultural scenarios
This dataset provides:
- train_annotations.json - Training set annotations in COCO format
- val_annotations.json - Validation set annotations in COCO format
- test_annotations.json (optional) - Test set annotations
Annotations follow the standard COCO Object Detection format:
```json
{
"images": [
{
"id": 1,
"file_name": "image_001.jpg",
"width": 640,
"height": 480
}
],
"annotations": [
{
"id": 1,
"image_id": 1,
"category_id": 5,
"bbox": [x, y, width, height],
"area": 12345,
"iscrowd": 0
}
],
"categories": [
{
"id": 1,
"name": "rice_leaf_roller",
"supercategory": "insect"
}
]
}
```
```python
import json
from pycocotools.coco import COCO

# Load annotations
with open('/kaggle/input/ip102-coco-annotations/train_annotations.json') as f:
    coco_data = json.load(f)

# Or use the COCO API
coco = COCO('/kaggle/input/ip102-coco-annotations/train_annotations.json')

print(f"Number of images: {len(coco_data['images'])}")
print(f"Number of annotations: {len(coco_data['annotations'])}")
print(f"Number of categories: {len(coco_data['categories'])}")
```
If you use this dataset, please cite the original IP102 paper:
@inproceedings{wu2019ip102,
title={IP102: A Large-Scale Benchmark Dataset for Insect Pest Recognition},
author={Wu, Xiaoping and Zhan, Chi and Lai, Yu-Kun and Cheng, Ming-Ming and Yang, Jufeng},
booktitle={CVPR},
year={2019}
}
Original dataset by Wu et al. (CVPR 2019). This is a format conversion for easier integration with modern detection frameworks.
Ready to train your insect detection model! 🐛🔍
Tags: object detection, computer vision, agriculture, coco format, insect recognition, pest detection, deep learning, detr, dataset, annotation
License: CC BY-NC-SA 4.0 (same as the original IP102)
Database: Open Database, Contents: © Original Authors