This dataset was created by tn
This project consists of two datasets, both of aerial images and videos of dolphins captured by drones. The data was collected at several locations along the coastlines of Italy and Israel.
The aim of the project is to examine automated dolphin detection and tracking from aerial surveys.
The project description, details and results are presented in the paper (link to the paper).
Each dataset was organized for a different phase of the project, and each is located in a separate zip file:
1. Detection - Detection.zip
2. Tracking - Tracking.zip
Further information about the datasets' content and annotation format is below.
* To view each file's content, use the preview option; a description also appears later in this section.
Detection Dataset
This dataset contains 1125 aerial images; an image can contain several dolphins.
The detection phase of the project uses RetinaNet, a supervised deep-learning-based algorithm, with the Keras RetinaNet implementation. Therefore, the data was divided into three parts - Train, Validation and Test - at a ratio of 70%, 15% and 15%, respectively.
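As a quick sanity check of the split, the three annotation CSV files can be loaded and the number of images and boxes per subset counted. Below is a minimal sketch (not part of the dataset), assuming the Detection zip was extracted into a Detection folder, that the CSV files are named as in the listing below, and that they have no header row, as required by Keras RetinaNet; pandas is used only for convenience.

import pandas as pd

cols = ["path", "x1", "y1", "x2", "y2", "class_name"]
for name in ["train_set", "validation_set", "test_set"]:
    df = pd.read_csv(f"Detection/{name}.csv", header=None, names=cols)
    print(name, "- images:", df["path"].nunique(), "boxes:", len(df))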
The annotation format follows the format required by that implementation (Keras RetinaNet). Each object, which is a dolphin, is annotated by bounding box coordinates and a class. For this project, the dolphins were not distinguished by species; therefore, each dolphin object is annotated as a bounding box and classified as 'Dolphin'.
* The annotation format is detailed in the Annotations format section.
Detection zip file content:
Detection
|——————train_set (images)
|——————train_set.csv
|——————validation_set (images)
|——————validation_set.csv
|——————test_set (images)
|——————test_set.csv
└——————class_mapping.csv
Tracking
This dataset contains 5 short videos (10-30 seconds), which were trimmed from longer aerial videos captured by a drone.
The tracking phase of the project examined two tracking methods. Both require the video's frame sequence as input; therefore, the videos' frames were extracted. The first frame was annotated manually for initialization, and the algorithms track accordingly (a minimal initialization sketch is shown below). As in the Detection dataset, each frame can include several objects (dolphins).
For annotation consistency, the videos' frame sequences were annotated in the same way as the Detection dataset above (details can be found in the Annotations format section). Each video's frames are annotated separately; therefore, the Tracking zip file contains a folder for each video (5 folders in total), named after the video's file name.
Each video folder contains a frames directory with the extracted frames, an annotations CSV file, a class mapping CSV file, and the original video (see the structure below).
The description and details of the examined videos are provided in the 'Videos Description.xlsx' file. Use the preview option to display its content.
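To illustrate how the manually annotated first frame initializes the trackers, here is a minimal sketch using OpenCV's CSRT tracker. The tracker choice is our assumption (the methods actually evaluated in the project may differ), the folder and file names follow the Tracking structure listed below, and the assumption that the first rows of the annotations CSV belong to the first frame is also ours.

import csv
import glob
import cv2  # requires opencv-contrib-python for the CSRT tracker

video_dir = "Tracking/DJI_0195_trim_0015_0045"
frames = sorted(glob.glob(video_dir + "/frames/*"))

# Collect the bounding boxes of the first annotated frame
# (assumes the CSV rows are ordered by frame).
with open(video_dir + "/annotations_DJI_0195_trim_0015_0045.csv") as f:
    rows = [r for r in csv.reader(f) if r]
first_path = rows[0][0]
first_boxes = [tuple(map(int, r[1:5])) for r in rows if r[0] == first_path]

# Initialize one tracker per dolphin; OpenCV expects (x, y, width, height).
frame0 = cv2.imread(frames[0])
trackers = []
for x1, y1, x2, y2 in first_boxes:
    t = cv2.TrackerCSRT_create()  # cv2.legacy.TrackerCSRT_create() in some versions
    t.init(frame0, (x1, y1, x2 - x1, y2 - y1))
    trackers.append(t)

# Track the dolphins through the remaining frames.
for path in frames[1:]:
    frame = cv2.imread(path)
    for t in trackers:
        ok, box = t.update(frame)
        if ok:
            x, y, w, h = map(int, box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)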
Tracking zip file content:
Tracking
|——————DJI_0195_trim_0015_0045
| └——————frames (images)
| └——————annotations_DJI_0195_trim_0015_0045.csv
| └——————class_mapping_DJI_0195_trim_0015_0045.csv
| └——————DJI_0195_trim_0015_0045.MP4
|——————DJI_0395_trim_0010_0025
| └——————frames (images)
| └——————annotations_DJI_0395_trim_0010_0025.csv
| └——————class_mapping_DJI_0395_trim_0010_0025.csv
| └——————DJI_0395_trim_0010_0025.MP4
|——————DJI_0395_trim_00140_00150
| └——————frames (images)
| └——————annotations_DJI_0395_trim_00140_00150.csv
| └——————class_mapping_DJI_0395_trim_00140_00150.csv
| └——————DJI_0395_trim_00140_00150.MP4
|——————DJI_0395_trim_0055_0085
| └——————frames (images)
| └——————annotations_DJI_0395_trim_0055_0085.csv
| └——————class_mapping_DJI_0395_trim_0055_0085.csv
| └——————DJI_0395_trim_0055_0085.MP4
└——————HighToLow_trim_0045_0070
└—————frames (images)
└—————annotations_HighToLow_trim_0045_0070.csv
└—————class_mapping_HighToLow_trim_0045_0070.csv
└—————HighToLow_trim_0045_0070.MP4
Annotations format
Both datasets share the same annotation format, which is described below. It follows the format required by the Keras RetinaNet implementation, which was used for training in the dolphin detection phase of the project.
Each object (dolphin) is annotated by the top-left and bottom-right coordinates of a bounding box and a class. Each image or frame can include several objects. All data was annotated using the Labelbox application.
For each subset (the Train, Validation and Test sets of the Detection dataset, and each video of the Tracking dataset) there are two corresponding CSV files: an annotations file and a class mapping file.
Each line in the Annotations CSV file contains an annotation (bounding box) in an image or frame.
The format of each line of the CSV annotation is:
path/to/image.jpg,x1,y1,x2,y2,class_name
An example from `train_set.csv`:
.\train_set\1146_20170730101_ce1_sc_GOPR3047 103.jpg,506,644,599,681,Dolphin
.\train_set\1146_20170730101_ce1_sc_GOPR3047 103.jpg,394,754,466,826,Dolphin
.\train_set\1147_20170730101_ce1_sc_GOPR3047 104.jpg,613,699,682,781,Dolphin
.\train_set\1147_20170730101_ce1_sc_GOPR3047 104.jpg,528,354,586,443,Dolphin
.\train_set\1147_20170730101_ce1_sc_GOPR3047 104.jpg,633,250,723,307,Dolphin
This defines a dataset with 2 images:
1146_20170730101_ce1_sc_GOPR3047 103.jpg - contains 2 bounding boxes with dolphins
1147_20170730101_ce1_sc_GOPR3047 104.jpg - contains 3 bounding boxes with dolphins
Each line in the Class Mapping CSV file contains a mapping:
class_name,id
An example:
Dolphin,0
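To illustrate the format, here is a minimal parsing sketch (not part of the dataset) that reads an annotations CSV together with its class mapping file and draws the boxes of one image using Pillow. It assumes the Detection zip was extracted into a Detection folder and that the relative paths in the CSV resolve against that folder.

import csv
from collections import defaultdict
from PIL import Image, ImageDraw

# Class mapping: class_name -> id
with open("Detection/class_mapping.csv") as f:
    class_ids = {row[0]: int(row[1]) for row in csv.reader(f) if row}

# Group bounding boxes by image path.
boxes = defaultdict(list)
with open("Detection/train_set.csv") as f:
    for row in csv.reader(f):
        if not row:
            continue
        path, x1, y1, x2, y2, class_name = row
        boxes[path].append((int(x1), int(y1), int(x2), int(y2), class_name))

# Draw the boxes of the first image in the file.
path = next(iter(boxes))
img = Image.open("Detection/" + path.lstrip(".\\").replace("\\", "/"))
draw = ImageDraw.Draw(img)
for x1, y1, x2, y2, class_name in boxes[path]:
    draw.rectangle([x1, y1, x2, y2], outline="red", width=2)
img.save("example_with_boxes.jpg")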
The dataset includes aerial images and videos of dolphins captured by drones. The data was collected at several locations along the coastlines of Italy and Israel.
The dataset was collected in order to perform automated dolphin detection in aerial images and dolphin tracking in aerial videos.
The project description and results are available via the GitHub link, which describes and visualizes the paper (link to the paper).
The dataset includes two zip files:
Detection.zip
Tracking.zip
For both files, the data annotation format is identical, and described below.
To view each file's content, use the preview option; a description also appears later in this section.
Annotations format
The data annotation format is based on the format required by the Keras RetinaNet implementation, which was used for training in the dolphin detection phase.
Each object is annotated by a bounding box and a class. All data was annotated using the Labelbox application.
For each subset there are two corresponding CSV files:
Annotation file
Class mapping file
Each line in the Annotations CSV file contains an annotation (bounding box) in an image or frame. The format of each line of the CSV annotation is:
path/to/image.jpg,x1,y1,x2,y2,class_name
path/to/image.jpg - a path to the image/frame
x1, y1 - image coordinates of the upper-left corner of the bounding box
x2, y2 - image coordinates of the lower-right corner of the bounding box
class_name - class name of the annotated object
An example from train_set.csv:
.\train_set\1146_20170730101_ce1_sc_GOPR3047 103.jpg,506,644,599,681,Dolphin
.\train_set\1146_20170730101_ce1_sc_GOPR3047 103.jpg,394,754,466,826,Dolphin
.\train_set\1147_20170730101_ce1_sc_GOPR3047 104.jpg,613,699,682,781,Dolphin
.\train_set\1147_20170730101_ce1_sc_GOPR3047 104.jpg,528,354,586,443,Dolphin
.\train_set\1147_20170730101_ce1_sc_GOPR3047 104.jpg,633,250,723,307,Dolphin
This defines a dataset with 2 images:
1146_20170730101_ce1_sc_GOPR3047 103.jpg - contains 2 bounding boxes with dolphins
1147_20170730101_ce1_sc_GOPR3047 104.jpg - contains 3 bounding boxes with dolphins
Each line in the Class Mapping CSV file contains a mapping:
class_name,id
An example:
Dolphin,0
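Since the annotations store corner coordinates, converting them to other common bounding-box formats is straightforward. The helpers below are our own illustration, not part of the dataset: one converts (x1, y1, x2, y2) to the (x, y, width, height) form expected by many trackers, the other to a normalized center form.

def corners_to_xywh(x1, y1, x2, y2):
    # (left, top, right, bottom) -> (left, top, width, height)
    return x1, y1, x2 - x1, y2 - y1

def corners_to_normalized_center(x1, y1, x2, y2, img_w, img_h):
    # (left, top, right, bottom) -> (center_x, center_y, width, height), all in [0, 1]
    return ((x1 + x2) / 2 / img_w, (y1 + y2) / 2 / img_h,
            (x2 - x1) / img_w, (y2 - y1) / img_h)

# Example with the first box from train_set.csv:
print(corners_to_xywh(506, 644, 599, 681))  # (506, 644, 93, 37)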
Detection
The data for dolphin detection is separated into three sub-directories: the train, validation and test sets.
Since all files contain only one class - Dolphin - there is a single class_mapping.csv which can be used for all three subsets.
The Detection dataset folder includes:
A folder for each of the train, validation and test sets, which contains the images
An annotations CSV file for each of the train, validation and test sets
A class mapping CSV file (shared by all the sets)
Tracking
For the tracking phase, trackers were examined and evaluated on 5 videos. Each video has its own annotations and class mapping CSV files. In addition, each video's extracted frames are available in the frames directory (a sketch for re-extracting frames from the original video appears after the list below).
The Tracking dataset folder includes a folder for each video (5 videos), each of which contains:
A frames directory, which includes the extracted frames of the video
An annotations CSV file
A class mapping CSV file
The original video
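The frames directory already contains the extracted frames, but if they need to be re-extracted from the original MP4 (for example with a different naming scheme), a minimal OpenCV sketch could look like the following; the folder and video names follow the Tracking structure shown earlier, and the output directory and file naming are our own choice.

import os
import cv2

video_dir = "Tracking/HighToLow_trim_0045_0070"
cap = cv2.VideoCapture(os.path.join(video_dir, "HighToLow_trim_0045_0070.MP4"))

out_dir = os.path.join(video_dir, "frames_reextracted")
os.makedirs(out_dir, exist_ok=True)

# Read the video frame by frame and save each frame as a JPEG.
i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join(out_dir, f"frame_{i:05d}.jpg"), frame)
    i += 1
cap.release()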
The descriptions and details of the examined videos are provided in the 'Videos Description.xlsx' file.
The Detection and Tracking dataset structures are shown in the directory trees above.