Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This database contains 4976 planetary images of boulder fields located on Earth, Mars, and the Moon. The data were collected during the BOULDERING Marie Skłodowska-Curie Global fellowship between October 2021 and 2024. The data have already been split into train, validation, and test datasets, but feel free to re-organize the labels at your convenience.
For each image, all of the boulder outlines within the image were carefully mapped in QGIS. More information about the labelling procedure can be found in the following manuscript (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2023JE008013). This dataset differs from the previous dataset released along with the manuscript (https://zenodo.org/records/8171052) in that it contains more mapped images, especially of boulder populations around young impact structures on the Moon (cold spots). In addition, the boulder outlines were pre-processed so that they can be ingested directly into YOLOv8.
A description of each file is given in the README.txt file (along with instructions on how to load the custom datasets in Detectron2 and YOLO). Most of the other files are self-explanatory. Please see the previous dataset or the manuscript for more information. If you want more information about a specific lunar or martian planetary image, the image ID is still available in the file name. Use this ID to find more information (e.g., for M121118602_00875_image.png, the ID M121118602 can be used on https://pilot.wr.usgs.gov/). I will also upload the raw data from which this pre-processed dataset was generated (see https://zenodo.org/records/14250970).
Thanks to this database, you can easily train a Detectron2 Mask R-CNN or a YOLO instance segmentation model to automatically detect boulders.
How to cite:
Please refer to the "how to cite" section of the readme file of https://github.com/astroNils/YOLOv8-BeyondEarth.
Structure:
.
└── boulder2024/
    ├── jupyter-notebooks/
    │   └── REGISTERING_BOULDER_DATASET_IN_DETECTRON2.ipynb
    ├── test/
    │   ├── images/
    │   │   ├── _image.png
    │   │   └── ...
    │   └── labels/
    │       ├── _image.txt
    │       └── ...
    ├── train/
    │   ├── images/
    │   │   ├── _image.png
    │   │   └── ...
    │   └── labels/
    │       ├── _image.txt
    │       └── ...
    ├── validation/
    │   ├── images/
    │   │   ├── _image.png
    │   │   └── ...
    │   └── labels/
    │       ├── _image.txt
    │       └── ...
    ├── detectron2_inst_seg_boulder_dataset.json
    ├── README.txt
    └── yolo_inst_seg_boulder_dataset.yaml
detectron2_inst_seg_boulder_dataset.json
is a JSON file containing the masks in the format expected by Detectron2 (see https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html for more information on the format). In order to use this custom dataset, you need to register it before training. There is an example of how to do that in the jupyter-notebooks folder. You need to have detectron2 and all of its dependencies installed.
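As a rough illustration of the registration step (not the authors' exact code, which is in the notebook above), here is a minimal sketch assuming the JSON holds a list of Detectron2 "dataset dicts" and a single "boulder" class:

```python
import json
from detectron2.data import DatasetCatalog, MetadataCatalog

def load_boulder_dicts():
    # Assumption: the JSON is a list of Detectron2 "dataset dicts"
    # (file_name, height, width, annotations with segmentation polygons).
    with open("detectron2_inst_seg_boulder_dataset.json") as f:
        return json.load(f)

# The dataset name and single "boulder" class are illustrative choices.
DatasetCatalog.register("boulder_train", load_boulder_dicts)
MetadataCatalog.get("boulder_train").set(thing_classes=["boulder"])

# Once registered, the dataset can be referenced by name in a config:
# cfg.DATASETS.TRAIN = ("boulder_train",)
```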
yolo_inst_seg_boulder_dataset.yaml
can be used as is; however, you need to update the paths in the .yaml file to point to the test, train, and validation folders. More information about the YOLO format can be found here: https://docs.ultralytics.com/datasets/segment/.
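For reference, a minimal training sketch with the ultralytics package, once the paths in the .yaml have been updated (the model size and hyperparameters below are arbitrary choices, not recommendations from the dataset authors):

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 segmentation checkpoint and fine-tune
# on the boulder dataset described by the .yaml file.
model = YOLO("yolov8n-seg.pt")
model.train(
    data="yolo_inst_seg_boulder_dataset.yaml",  # paths must point to train/validation/test
    epochs=100,  # arbitrary
    imgsz=640,   # arbitrary
)
```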
This collection contains the trained models and object detection results of 2 architectures found in the Detectron2 library, evaluated on the MS COCO val2017 dataset under different JPEG compression levels Q = {5, 12, 19, 26, 33, 40, 47, 54, 61, 68, 75, 82, 89, 96} (14 levels per trained model).

Architectures:
F50 – Faster R-CNN on ResNet-50 with FPN
R50 – RetinaNet on ResNet-50 with FPN

Training types:
D2 – Detectron2 Model Zoo pre-trained 1x model (90,000 iterations, batch 16)
STD – standard 1x training (90,000 iterations) on the original train2017 dataset
Q20 – 1x training (90,000 iterations) on the train2017 dataset degraded to Q=20
Q40 – 1x training (90,000 iterations) on the train2017 dataset degraded to Q=40
T20 – extra 1x training on top of D2, on the train2017 dataset degraded to Q=20
T40 – extra 1x training on top of D2, on the train2017 dataset degraded to Q=40

Model and metrics files:
models_FasterRCNN.tar.gz (F50-STD, F50-Q20, …)
models_RetinaNet.tar.gz (R50-STD, R50-Q20, …)

For every model there are 3 files:
config.yaml – the Detectron2 config of the model.
model_final.pth – the weights (training snapshot) in PyTorch format.
metrics.json – training metrics (time, total loss, etc.), recorded every 20 iterations.

The D2 models are not included, because they are available from the Detectron2 Model Zoo, as faster_rcnn_R_50_FPN_1x (F50-D2) and retinanet_R_50_FPN_1x (R50-D2).

Result files:
F50-results.tar.gz – results for the Faster R-CNN models (including D2).
R50-results.tar.gz – results for the RetinaNet models (including D2).

For every model there are 14 subdirectories, e.g. evaluator_dump_R50x1_005 through evaluator_dump_R50x1_096, one per JPEG Q value. Each such folder contains:
coco_instances_results.json – all detected objects (image id, bounding box, class index, and confidence).
results.json – AP metrics as computed by the COCO API.

Source code for processing the data:
The data can be processed using our code, published at https://github.com/tgandor/urban_oculus. Additional dependencies for the source code: COCO API, Detectron2.
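To give a sense of how the result dumps can be consumed, here is a hedged sketch that re-computes bounding-box AP for one model/quality combination with the COCO API (it assumes the standard val2017 annotation file is available locally; the dump path follows the layout above):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground truth: the standard COCO val2017 instances file (local path assumed).
gt = COCO("annotations/instances_val2017.json")

# Detections for RetinaNet at JPEG quality Q=5, from the released dumps.
dt = gt.loadRes("evaluator_dump_R50x1_005/coco_instances_results.json")

ev = COCOeval(gt, dt, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints AP/AR; should match the stored results.json
```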
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
As an electronics and computer science student, I often work with microcontrollers and microcomputers. That's why, when I looked for objects to build my own object detection dataset, they instantly came to mind.
If you want to get started using the dataset, feel free to check out my blog posts showing you how to train a model on it with the TensorFlow Object Detection API or Detectron2.
A mostly up-to-date mirror of 10.75.129.40/DataInsights/GE-medicalimaging-train.git (can be ahead, since it is used for testing).
Only for Genpact DS Team.
Detectron2 modularized codebase for training+prediction+submission+visualization on Kaggle's
https://opendatacommons.org/licenses/dbcl/1-0/
The Mini Traffic Detection dataset comprises 8 classes with 30 instances each, divided into 70% for training and 30% for validation. Primarily designed for computer vision tasks, it focuses on traffic object detection. It's an excellent choice for transfer learning with Detectron2 for custom object detection and segmentation projects. The dataset includes the classes bicycle, bus, car, motorcycle, person, traffic_light, truck, and stop_sign.
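As an illustration of the kind of transfer-learning setup this dataset is aimed at, a minimal Detectron2 config sketch (the registered dataset names are placeholders, and Mask R-CNN is just one reasonable backbone choice):

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
# Start from COCO-pretrained weights and fine-tune on the 8 traffic classes.
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("mini_traffic_train",)  # hypothetical registered name
cfg.DATASETS.TEST = ("mini_traffic_val",)     # hypothetical registered name
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 8           # bicycle, bus, car, motorcycle, ...
```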
This dataset includes all letters A through Z in American Sign Language, annotated with polygon labels. See this blog post for how to train with Detectron2: https://blog.roboflow.com/p/4482cb2b-f378-48f6-bd58-df2b784670cf/
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This project aims to train Faster R-CNN on a custom plant disease dataset using Detectron2.
Apache License, v2.0https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
TableBank
TableBank is a new image-based table detection and recognition dataset built with novel weak supervision from Word and LaTeX documents on the internet; it contains 417K high-quality labeled tables.
News
We have updated the license to Apache-2.0. We release an official train/val/test split and re-train both the Table Detection and Table Structure Recognition models using the Detectron2 and OpenNMT tools. The benchmark results have been updated. A new… See the full description on the dataset page: https://huggingface.co/datasets/liminghao1630/TableBank.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Description from the SaRNet: A Dataset for Deep Learning Assisted Search and Rescue with Satellite Imagery GitHub repository. (*The "Note" was added by the Roboflow team.)
This is a single-class dataset consisting of tiles of satellite imagery labeled with potential 'targets'. Labelers were instructed to draw boxes around anything they suspected may be a paraglider wing, missing in a remote area of Nevada. Volunteers were shown examples of similar objects already in the environment for comparison. The missing wing, as it was found after 3 weeks, is shown below.
[Image: the recovered paraglider wing – https://michaeltpublic.s3.amazonaws.com/images/anomaly_small.jpg]
The dataset contains the following:
| Set      | Images | Annotations |
|----------|--------|-------------|
| Train    | 1808   | 3048        |
| Validate | 490    | 747         |
| Test     | 254    | 411         |
| Total    | 2552   | 4206        |
The data is in the COCO format and is directly compatible with Faster R-CNN as implemented in Facebook's Detectron2 (see the registration sketch after the download steps below).
Download the data here: sarnet.zip
Or follow these steps:
```bash
# download the dataset
wget https://michaeltpublic.s3.amazonaws.com/sarnet.zip

# extract the files
unzip sarnet.zip
```
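After extraction, the COCO-format annotations can be registered with Detectron2 in the usual way. The paths below are assumptions about the archive layout; adjust them to whatever sarnet.zip actually contains:

```python
from detectron2.data.datasets import register_coco_instances

# Register the SaRNet training split under a name Detectron2 configs can use.
# Both paths are hypothetical; check the unzipped folder structure.
register_coco_instances(
    "sarnet_train",
    {},                               # no extra metadata
    "sarnet/train/annotations.json",  # hypothetical COCO JSON path
    "sarnet/train/images",            # hypothetical image root
)
```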
**Note:** With Roboflow, you can download the data here (original, raw images, with annotations): https://universe.roboflow.com/roboflow-public/sarnet-search-and-rescue/ (download v1, original_raw-images). Download the dataset in COCO JSON format, or another format of choice, and import it into Roboflow after unzipping the folder to get started on your project.
Get started with a Faster R-CNN model pretrained on SaRNet: SaRNet_Demo.ipynb
Source code for the paper is located here: SaRNet_train_test.ipynb
@misc{thoreau2021sarnet,
title={SaRNet: A Dataset for Deep Learning Assisted Search and Rescue with Satellite Imagery},
author={Michael Thoreau and Frazer Wilson},
year={2021},
eprint={2107.12469},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
The source data was generously provided by Planet Labs, Airbus Defence and Space, and Maxar Technologies.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
We release a human-centric synthetic data generator, PeopleSansPeople, which contains simulation-ready 3D human assets and a parameterized lighting and camera system, and generates 2D and 3D bounding box, instance and semantic segmentation, and COCO pose labels. Using PeopleSansPeople, we performed benchmark synthetic data training using a Detectron2 Keypoint R-CNN variant [1]. We found that pre-training a network on synthetic data and fine-tuning on target real-world data (few-shot transfer to limited subsets of COCO-person train [2]) resulted in a keypoint AP of 60.37±0.48 (COCO test-dev2017), outperforming models trained with the same real data alone (keypoint AP of 55.80) and models pre-trained on ImageNet (keypoint AP of 57.50). This freely available data generator should enable a wide range of research into the emerging field of simulation-to-real transfer learning in the critical area of human-centric computer vision.
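For a concrete (hedged) picture of the pre-train-then-fine-tune recipe described above, the sketch below starts a Detectron2 Keypoint R-CNN from synthetic-data weights and points it at a real-data subset; the weight path and dataset name are placeholders, and the solver setting is not the paper's schedule:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Keypoints/keypoint_rcnn_R_50_FPN_1x.yaml"))
# Weights pre-trained on PeopleSansPeople synthetic data (placeholder path).
cfg.MODEL.WEIGHTS = "synthetic_pretrain/model_final.pth"
# Fine-tune on a limited COCO-person subset (placeholder dataset name).
cfg.DATASETS.TRAIN = ("coco_person_fewshot_train",)
cfg.SOLVER.BASE_LR = 0.0025  # illustrative, not the paper's schedule
```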
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the context of this project, the samples for µ-FTIR analysis contained up to a few thousand particles. The integrated particle detection tool (Particle Wizard - OMNIC Picta) performed poorly, so an AI segmentation tool was needed. Using this dataset, we trained a Detectron2 neural network that is used within GEPARD, an open-source software package for improving Raman and FTIR target acquisition and data analysis. With Roboflow, it is possible to export this dataset to various formats and use the data to train different architectures of segmentation neural networks.