Ship detection plays an important role in port management, ship traffic monitoring, maritime rescue, cargo transportation, and national defense. Satellite imagery provides data with high spatial and temporal resolution, which is useful for ship detection. SAR data has advantages over optical data: microwaves penetrate clouds, so it can be used in all weather conditions, and it remains useful for locating ships during storms in rescue missions.

## Using the model
Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

## Fine-tuning the model
This model can be fine-tuned using the Train Deep Learning Model tool. Follow the guide to fine-tune this model.

## Input
Sentinel-1 C-band SAR VV polarization band raster.

## Output
Feature class containing detected ships as polygons.

## Model architecture
This model uses the Faster R-CNN architecture implemented in the ArcGIS API for Python.

## Accuracy metrics
The model has an average precision score of 0.70 on our validation dataset.

## Training data
The deep learning model was trained on the Large-Scale SAR Ship Detection Dataset-v1.0 (LS-SSDD-v1.0), which was prepared using Sentinel-1 imagery.

## Sample results
Here are a few results from the model. To view more, see this story.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
SAR Ship Dataset is a dataset for object detection tasks - it contains Ship annotations for 39,584 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
This dataset, labeled by SAR experts, was created using 102 Chinese Gaofen-3 images and 108 Sentinel-1 images. It consists of 39,729 ship chips (after removing some repeated clips) of 256 pixels in both range and azimuth. These ships have distinct scales and backgrounds, making the dataset suitable for developing object detectors for multi-scale and small object detection.
Apache License, v2.0https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
SARFish is a Synthetic Aperture Radar (SAR) imagery dataset for the purpose of training, validating and testing supervised machine learning models on the tasks of ship detection, classification, and length regression. The SARFish dataset builds on the excellent work of the xView3-SAR dataset (2021) and consists of two parts:
Data - Extends the xView3-SAR dataset to include Single Look Complex (SLC) as well as Ground Range Detected (GRD) imagery data taken directly from the European Space… See the full description on the dataset page: https://huggingface.co/datasets/ConnorLuckettDSTG/SARFish.
MIT Licensehttps://opensource.org/licenses/MIT
License information was derived automatically
To cite the dataset please reference it as @INPROCEEDINGS{8124934, author={Li, Jianwei and Qu, Changwen and Shao, Jiaqi}, booktitle={2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA)}, title={Ship detection in SAR images based on an improved faster R-CNN}, year={2017}, volume={}, number={}, pages={1-6}, keywords={Marine vehicles;Feature extraction;Synthetic aperture radar;Proposals;Detectors;Image resolution;Deep learning;SAR;ship detection;Faster R-CNN}… See the full description on the dataset page: https://huggingface.co/datasets/agungpambudi/sar-ship-detection.
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
As an outstanding method for ocean monitoring, synthetic aperture radar (SAR) has received much attention from scholars in recent years. With rapid advances in SAR technology and image processing, significant progress has also been made in ship detection in SAR images. When dealing with large-scale ships on a wide sea surface, most existing algorithms achieve good detection results. However, small ships in SAR images contain little feature information; they are difficult to distinguish from background clutter, leading to low detection rates and high false-alarm rates. To improve detection accuracy for small-scale ships, we propose an efficient ship detection model based on YOLOX, called YOLO-SD. First, Multi-Scale Convolution (MSC) is proposed to fuse feature information at different scales, resolving the problem of unbalanced semantic information in the lower layers and improving feature extraction. Further, the Feature Transformer Module (FTM) is designed to capture global features and link them to the context, optimizing high-layer semantic information and ultimately achieving excellent detection performance. Extensive experiments on HRSID and LS-SSDD-v1.0 show that YOLO-SD achieves better detection performance than the baseline YOLOX. Compared with other strong object detection models, YOLO-SD still has an edge in overall performance.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Satellite-based Synthetic Aperture Radar (SAR) sensors offer the opportunity to observe the maritime domain, even during nighttime and under foggy or cloudy weather conditions. Depending on the nature of oceanographic observations, gaining information of position and movement of maritime objects is an essential element. Radar signatures of man-made maritime objects typically have extents of up to a few hundred meters. However, the transit of a moving ship can affect the ocean surface up to hundreds of kilometers creating large scale artefacts in SAR images, the so-called wake signatures. The published data is focused on the observation of moving ships by exploiting those wake signatures imaged by the SAR sensors.
The appearance of ship wakes in SAR imagery has been investigated for decades. Radar signatures of ship wakes are complex structures consisting of multiple wake components. Those wake components appear with different shapes and extents in SAR acquisitions, depending on various influencing parameters describing the present situation during the observation. Those influencing parameters are categorized into three types: ship properties, environmental conditions and image acquisition parameters.
Recently, the characteristic effect of the influencing parameters on the detectability of ship wakes has been modelled and systematically analyzed for the first time on the basis of this dataset, now available to the public. The results are published in the journal publications [1, 2, 3, 4, 5, 6, 7] and comprehensively in the dissertation monograph [8]. The published dataset has also been used to develop the first deep-learning-based detector for individual wake components in SAR imagery [9].
This published dataset offers the following unique features:
The publication of this dataset shall enable users to:
[1] B. Tings and D. Velotto, "Comparison of ship wake detectability on C-band and X-band SAR," International Journal of Remote Sensing, vol. 39, no. 13, pp. 1-18, 2018, doi: 10.1080/01431161.2018.1425568.
[2] B. Tings, C. Bentes, D. Velotto and S. Voinov, "Modelling Ship Detectability Depending On TerraSAR-X-derived Metocean Parameters," CEAS Space Journal, vol. 11, p. 81–94, 2018, doi: 10.1007/s12567-018-0222-8.
[3] B. Tings, A. Pleskachevsky, D. Velotto and S. Jacobsen, "Extension of Ship Wake Detectability Model for Non-Linear Influences of Parameters Using Satellite Based X-Band Synthetic Aperture Radar," Remote Sensing, vol. 11, no. 5, pp. 1-20, 2019, doi: 10.3390/rs11050563.
[4] B. Tings, S. Jacobsen, S. Wiehle, E. Schwarz and H. Daedelow, "X-Band/C-Band-Comparison of Ship Wake Detectability," in EUSAR-Preprints 2020, Leipzig, 2020, doi: 10.20944/preprints202012.0480.v1.
[5] B. Tings, S. Wiehle and S. Jacobsen, "Ship wake component detectability on synthetic aperture radar (SAR)," in IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, 2020, doi: 10.1109/IGARSS39084.2020.9323097.
[6] B. Tings, "Non-Linear Modeling of Detectability of Ship Wake Components in Dependency to Influencing Parameters Using Spaceborne X-Band SAR," Remote Sensing, vol. 13, no. 2, p. 165, 2021, doi: 10.3390/rs13020165.
[7] B. Tings, A. Pleskachevsky and S. Wiehle, "Comparison of detectability of ship wake components between C-Band and X-Band synthetic aperture radar sensors operating under different slant ranges," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 196, pp. 306-324, 2023, doi: 10.1016/j.isprsjprs.2022.12.008 (corrigendum 10.1016/j.isprsjprs.2025.01.026).
[8] B. Tings, "Dissertation: Erkennung der Bug- und Heckwellen von Schiffen durch satellitenbasierte C-Band- und X-Band-Radarsensoren mit synthetischer Apertur" [Detection of ship bow and stern waves by satellite-based C-band and X-band synthetic aperture radar sensors], Helmut-Schmidt-Universität, Hamburg, 2024.
[9] B. Tings, Y.-J. Yang, C. Schnupfhagn and S. Jacobsen, "Tuning Detection of Ship Wakes by Detectability Modelling," 4th European Workshop on Maritime Systems, Resilience and Security 2024 (MARESEC 24), Bremerhaven, 2024, doi: 10.5281/zenodo.14524265.
[10] B. Tings, C. Bentes and S. Lehner, "Dynamically adapted ship parameter estimation using TerraSAR-X images," International Journal of Remote Sensing, pp. 1990-2015, 2016, doi: 10.1080/01431161.2015.1071898.
[11] B. J. Tetreault, "Use of the Automatic Identification System (AIS) for maritime domain awareness (MDA)," Proceedings of OCEANS 2005 MTS/IEEE, vol. 2, pp. 1590-1594, 2005, doi: 10.1109/OCEANS.2005.1639983.
[12] A. Pleskachevsky, B. Tings, S. Jacobsen, S. Wiehle, E. Schwarz and D. Krause, "A System for Near Real Time Monitoring of the Sea State using SAR Satellites," IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1-18, 2024, doi: 10.1109/TGRS.2024.3419582.
[13] X.-M. Li and S. Lehner, "Algorithm for Sea Surface Wind Retrieval From TerraSAR-X and TanDEM-X Data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 5, pp. 2928-2939, 2014, doi: 10.1109/TGRS.2013.2267780.
[14] S. Jacobsen, X. Li, S. Lehner, J. Hieronimus and J. Schneemann, "Joint Offshore Wind Field Monitoring with Spaceborne SAR and Platform-Based Doppler LiDAR Measurements," International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 40, p. 959–966, 2015, doi: 10.5194/isprsarchives-XL-7-W3-959-2015.
[15] F. Monaldo, C. Jackson, X. Li and W. G. Pichel, "Preliminary Evaluation of Sentinel-1A Wind Speed Retrievals," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 6, pp. 2638-2642, 2016, doi: 10.1109/JSTARS.2015.2504324.
[16] W. C. Skamarock, J. B. Klemp, J. Dudhia, D. O. Gill, D. M. Barker, M. G. Duda, X.-Y. Huang, W. Wang and J. G. Powers, "A Description of the Advanced Research WRF Version 3," NCAR Technical Notes, Boulder, 2008, doi: 10.5065/D68S4MVH.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Ship SAR Less is a dataset for object detection tasks - it contains Ship annotations for 1,958 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The current challenges in Synthetic Aperture Radar (SAR) ship detection revolve around handling significant variations in target size and managing high computational cost, which hinder practical deployment on satellite or mobile airborne platforms. In response to these challenges, this research presents YOLOv7-LDS, a lightweight yet highly accurate SAR ship detection model built upon the YOLOv7 framework. At the core of YOLOv7-LDS's architecture, we introduce a streamlined feature extraction network that strikes a delicate balance between detection precision and computational efficiency. This network is founded on ShuffleNetV2 and incorporates Squeeze-and-Excitation (SE) attention mechanisms as its key elements. Additionally, in the neck section, we introduce the Weighted Efficient Aggregation Network (DCW-ELAN), a fundamental feature extraction module that leverages Coordinate Attention (CA) and depthwise convolution (DWConv). This module efficiently aggregates features while preserving the ability to identify small-scale variations, ensuring top-quality feature extraction. Furthermore, we introduce a lightweight Spatial Pyramid Dilated Convolution Cross-Stage Partial Channel (LSPHDCCSPC) module, a condensed version of the Spatial Pyramid Pooling Cross-Stage Partial Channel (SPPCSPC) module that incorporates dilated convolution (DConv) as a central component for extracting multi-scale information. Experimental results show that YOLOv7-LDS achieves a remarkable mean average precision (mAP) of 99.1% on the SAR Ship Detection Dataset (SSDD) and 95.8% on the NWPU VHR-10 dataset, with a parameter count of 3.4 million, a computational cost of 6.1 GFLOPs, and an inference time of 4.8 milliseconds. YOLOv7-LDS strikes a fine balance between computational cost and detection performance, surpassing many current state-of-the-art object detection models and offering a more resilient solution for maritime ship monitoring.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
SAR SSDD Dataset is a dataset for object detection tasks - it contains Objects annotations for 1,146 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Synthetic Aperture Radar (SAR), renowned for its all-weather monitoring capability and high-resolution imaging characteristics, plays a pivotal role in ocean resource exploration, environmental surveillance, and maritime security. It has become a fundamental technological support in marine science research and maritime management. However, existing SAR ship detection algorithms encounter two major challenges: limited detection accuracy and high computational cost, primarily due to the wide range of target scales, indistinct contour features, and complex background interference. To address these challenges, this paper proposes AC-YOLO, a novel lightweight SAR ship detection model based on YOLO11. Specifically, we design a lightweight cross-scale feature fusion module that adaptively fuses multi-scale feature information, enhancing small target detection while reducing model complexity. Additionally, we construct a hybrid attention enhancement module, integrating convolutional operations with a self-attention mechanism to improve feature discrimination without compromising computational efficiency. Furthermore, we propose an optimized bounding box regression loss function, the Minimum Point Distance Intersection over the Union (MPDIoU), which establishes multi-dimensional geometric metrics to accurately characterize discrepancies in overlap area, center distance, and scale variation between predicted and ground truth boxes. Experimental results demonstrate that, compared with the baseline YOLO11 model, AC-YOLO reduces parameter count by 30.0% and computational load by 15.6% on the SSDD dataset, with an average precision (AP) improvement of 1.2%; on the HRSID dataset, the AP increases by 1.5%. This model effectively reconciles the trade-off between complexity and detection accuracy, providing a feasible solution for deployment on edge computing platforms. The source code for the AC-YOLO model is available at: https://github.com/He-ship-sar/ACYOLO.
Apache License, v2.0http://www.apache.org/licenses/LICENSE-2.0
The dataset is derived from Sentinel-2 Level-2A (L2A) satellite images and focuses on the marine domain over Danish fjords. It provides a comprehensive collection of ship wakes and background clutter (referred to as "no_wake_crop") for remote sensing applications. The dataset has undergone post-processing through the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm with a clip limit value of 0.12 and a tile size of 16x16. The dataset comprises four spectral bands: B2, B3, B4, and B8.
Ship wake detection serves as a cornerstone in a multitude of domains that are critical to both human and environmental well-being:
Navigational Safety: Understanding ship wakes can provide insights into water currents and traffic patterns. This is vital for ensuring the safe passage of marine vessels, particularly in narrow straits and busy ports.
Environmental Monitoring: The study of ship wakes can reveal the influence of vessels on aquatic ecosystems. For instance, excessive wake turbulence can lead to coastal erosion and can disrupt marine habitats.
Maritime Surveillance: Wake detection plays a crucial role in maintaining maritime security. Tracking the wakes of vessels can help in identifying illegal activities such as smuggling or unauthorized fishing.
Traditionally, the process of ship wake detection has largely been a manual endeavor or employed simplistic statistical algorithms. Analysts would sift through satellite or aerial images to identify ship wakes, a process that is both time-consuming and prone to human error. Even automated statistical methods often lack the robustness needed to differentiate between true wakes and false positives, such as aquatic plants or natural water disturbances.
The introduction of explainable AI (xAI) techniques brings another layer of sophistication to wake analysis. While traditional machine learning models may offer high performance, they often act as "black boxes," making it difficult to understand how they arrive at a certain conclusion. In a critical domain like navigational safety or maritime surveillance, the ability to interpret and understand model decisions is indispensable. xAI methods can make these machine learning models more transparent, providing insights into their decision-making processes, which in turn can aid in fine-tuning or fully trusting the models.
The inclusion of four key spectral bands—B2, B3, B4, and B8—offers the scope for multi-spectral analysis. Different bands can capture varying features of water and wake textures, thereby offering a richer feature set for machine learning models. We use these spectral bands as referred to in [Liu, Yingfei, Jun Zhao, and Yan Qin. "A novel technique for ship wake detection from optical images." Remote Sensing of Environment 258 (2021): 112375.]
It is important to note the fundamental differences between wakes captured in Synthetic Aperture Radar (SAR) images and those in optical imagery. In SAR images, narrow-V wakes often arise due to Bragg scattering, a phenomenon that does not exist at optical wavelengths. In optical images, bright lines close to turbulent wakes are actually foams generated by the interaction between the surface horizontal flow of turbulent wakes and the surrounding background waves (Ermakov et al., 2014; Milgram et al., 1993; Peltzer et al., 1992). This can make the detection of wakes in optical images more challenging as there are usually no bright lines near turbulent wakes, and Kelvin arms may also show dark contrast. Methods that solely rely on searching for a trough and peak pair, taking the trough as the turbulent wake, would miss many actual wakes and could also result in the identification of false wakes.
The application of the CLAHE (Contrast Limited Adaptive Histogram Equalization) algorithm to this dataset allows for enhanced local contrast, enabling subtle features to become more pronounced. This significantly aids machine learning algorithms in feature extraction, thereby improving their ability to distinguish between complex patterns.
In addition to wakes, the dataset contains samples labeled as "No-Wake," which include environmental clutter and clouds. These samples are crucial for training robust models that can differentiate wakes from similar-looking natural phenomena.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
OpenSAR is a dataset for object detection tasks - it contains Ship annotations for 3,000 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The “bright shadow” is a distinct bright region visible in cross-polarized (HV or VH) SAR images of moving ships; it consistently appears in cross-pol imagery but never in co-polarized (HH, VV) images. When correctly velocity-matched, its position aligns with the ship’s true location. The azimuth distance (∆x) between the shifted ship image and its shadow can be used to estimate the ship’s across-track velocity (V_tr). The phenomenon is observed in both C-band (RADARSAT-2) and X-band (TerraSAR-X) data, unlike ship wake methods, which work best at X-band. It is also independent of the incidence angle, making it effective even at low angles, where sea clutter in co-pol images is significant.
Physical Mechanism Behind the Bright “Shadow”
• The bright “shadow” appears due to strong volume scattering from disturbances created by the ship’s movement on the ocean surface.
• It is hypothesized that wave breaking, whitecap, foam, and spray effects around a moving vessel cause this scattering.
• The bright shadow is azimuthally shifted from the actual ship location due to the Doppler effect induced by the ship’s motion.
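The velocity estimate from the azimuth offset follows the classical SAR azimuth-displacement ("train off the track") relation. The relation itself is standard SAR theory, but the function name and the numeric values below are illustrative, not taken from this dataset:

```python
def across_track_velocity(delta_x_m, slant_range_m, platform_velocity_ms):
    """Estimate a ship's across-track velocity V_tr from the azimuth
    offset delta_x between the displaced ship image and its bright
    shadow, using the classical SAR azimuth-shift relation:
        delta_x = R * V_tr / V_platform  =>  V_tr = delta_x * V_platform / R
    """
    return delta_x_m * platform_velocity_ms / slant_range_m

# Illustrative values only: a 500 m azimuth offset observed at 900 km
# slant range with a 7.5 km/s effective platform velocity.
v_tr = across_track_velocity(500.0, 900e3, 7500.0)
print(f"{v_tr:.2f} m/s")  # ~4.17 m/s across-track
```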
The XShadowBright dataset, containing 1,100 samples (50% train, 20% validation, and 30% test), is generated by applying a series of spatial and noise-based augmentations to the original TerraSAR-X images and their corresponding masks. These transformations enrich the dataset by introducing variations in position, rotation, scale, and noise while maintaining spatial consistency between each input image and its mask.
A shared augmentation pipeline is used to ensure identical transformations for both the input image and its corresponding mask. The following transformations are applied:
• Random Horizontal Flip (50%) – Simulates different viewing perspectives.
• Random Vertical Flip (50%) – Introduces further positional variations.
• Random Rotation (±40°) – Ensures rotational diversity while aligning interpolation.
• Random Resized Crop (Scale: 75% to 120%) – Introduces scaling variations while maintaining spatial structure.
• Random Affine Transform (Rotation ±20°, Translation ±10%, Scaling ±10%, Shear ±10°) – Simulates distortions due to imaging angles and sensor variations.
These augmentations preserve spatial coherence (using same_on_batch=True) and are crucial in scenarios such as polarimetric SAR-GMTI ship detection, where variations in imaging angles and object orientations need to be accounted for.
After spatial transformations, additional image-only augmentations are applied to introduce noise and blur effects, mimicking real-world SAR data distortions:
• Gaussian Blur (Kernel: 5×5, Sigma: 0.1–2.0) – Simulates motion blur or radar-induced defocus.
• Gaussian Noise (Mean=0.0, Std=0.03, Applied 50%) – Emulates SAR noise, enhancing robustness in detection models.
To prevent numerical instability, the transformed image is clamped (clamp(min=1e-6)), ensuring all pixel values remain valid.
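The shared image/mask pipeline described above can be sketched in plain NumPy: each random spatial decision is drawn once and applied to both arrays so they stay aligned, then image-only noise is added, and the result is clamped. This is an illustrative reimplementation under simplifying assumptions (90° rotations stand in for the interpolated ±40° rotations, and the crop/affine steps are omitted); the function name is ours, and the original pipeline's `same_on_batch=True` suggests a batched augmentation library rather than this per-sample sketch.

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Apply identical spatial transforms to image and mask, then
    image-only noise, mirroring the pipeline described above."""
    # Shared spatial transforms: draw each random decision once and
    # apply it to BOTH arrays so image and mask stay aligned.
    if rng.random() < 0.5:                       # random horizontal flip (50%)
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                       # random vertical flip (50%)
        image, mask = image[::-1, :], mask[::-1, :]
    k = int(rng.integers(0, 4))                  # simplified stand-in for rotation
    image, mask = np.rot90(image, k), np.rot90(mask, k)

    # Image-only augmentations: noise must NOT touch the mask.
    image = image + rng.normal(0.0, 0.03, image.shape)  # Gaussian noise, std 0.03
    return np.clip(image, 1e-6, None), mask      # clamp(min=1e-6)

rng = np.random.default_rng(0)
img = rng.random((256, 256))
msk = (img > 0.5).astype(np.float32)
aug_img, aug_msk = augment_pair(img, msk, rng)
print(aug_img.shape, aug_img.min() >= 1e-6)
```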
The Sentinel-1 ship sighting product contains information on ships that have been captured by the Sentinel-1 SAR instrument. With 10 m resolution the Sentinel-1 GRDH data is sufficient to identify ships and even derive approximate information on their size and orientation. Small boats can also be detected through their wake.
This ship sighting product provides:
- Sentinel-1 GRDH VV and VH band data around a ship position
- A pixel list of pixels with a high probability of representing a ship
- Ship position and track data
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The proposed AIS dataset covers a substantial temporal span of 20 months, from April 2021 to December 2022. This extensive coverage empowers analysts to examine long-term trends and variations in vessel activity, and helps researchers understand the influence of external factors, such as weather patterns, seasonal variations, and economic conditions, on vessel traffic and behavior within Finnish waters.
The dataset contains an extensive array of data on vessel movements and activities across seas, rivers, and lakes, and covers a diverse range of ship types, including cargo ships, tankers, fishing vessels, passenger ships, and various other categories.
A prominent attribute of the AIS dataset is its exceptional granularity, with a total of 2 293 129 345 data points. This granularity helps analysts understand vessel dynamics and operations within Finnish waters, enables the identification of patterns and anomalies in vessel behavior, and facilitates assessment of the potential environmental implications of maritime activities.
Please cite the following publication when using the dataset:
TBD
The publication is available at: TBD
A preprint version of the publication is available at TBD
csv file structure
YYYY-MM-DD-location.csv
This file contains the received AIS position reports. The structure of the logged parameters is the following: [timestamp, timestampExternal, mmsi, lon, lat, sog, cog, navStat, rot, posAcc, raim, heading]
timestamp Presumably the UTC second when the report was generated by the electronic position fixing system (EPFS): 0-59; 60 if the time stamp is not available (the default); 61 if the positioning system is in manual input mode; 62 if it operates in estimated (dead reckoning) mode; 63 if it is inoperative.
timestampExternal The timestamp associated with the MQTT message received from www.digitraffic.fi. It is assumed this timestamp is the Epoch time corresponding to when the AIS message was received by digitraffic.fi.
mmsi MMSI number. The Maritime Mobile Service Identity (MMSI) is a unique 9-digit number assigned to a Digital Selective Calling (DSC) radio or an AIS unit. See https://en.wikipedia.org/wiki/Maritime_Mobile_Service_Identity
lon Longitude in 1/10 000 min (±180 deg; East = positive, West = negative, as per 2's complement; 181 deg (6791AC0h) = not available = default)
lat Latitude in 1/10 000 min (±90 deg; North = positive, South = negative, as per 2's complement; 91 deg (3412140h) = not available = default)
sog Speed over ground in 1/10 knot steps (0-102.2 knots); 1 023 = not available; 1 022 = 102.2 knots or higher
cog Course over ground in 1/10 deg (0-3599); 3600 (E10h) = not available = default; 3 601-4 095 should not be used
navStat Navigational status:
0 = under way using engine
1 = at anchor
2 = not under command
3 = restricted maneuverability
4 = constrained by her draught
5 = moored
6 = aground
7 = engaged in fishing
8 = under way sailing
9 = reserved for future amendment of navigational status for ships carrying DG, HS, or MP, or IMO hazard or pollutant category C, high speed craft (HSC)
10 = reserved for future amendment of navigational status for ships carrying dangerous goods (DG), harmful substances (HS) or marine pollutants (MP), or IMO hazard or pollutant category A, wing in ground (WIG)
11 = power-driven vessel towing astern (regional use)
12 = power-driven vessel pushing ahead or towing alongside (regional use)
13 = reserved for future use
14 = AIS-SART (active), MOB-AIS, EPIRB-AIS
15 = undefined = default (also used by AIS-SART, MOB-AIS and EPIRB-AIS under test)
rot ROTAIS Rate of turn
0 to +126 = turning right at up to 708 deg per min or higher
0 to -126 = turning left at up to 708 deg per min or higher
Values between 0 and 708 deg per min are coded by ROTAIS = 4.733 × SQRT(ROTsensor) degrees per min, where ROTsensor is the Rate of Turn as input by an external Rate of Turn Indicator (TI). ROTAIS is rounded to the nearest integer value.
+127 = turning right at more than 5 deg per 30 s (No TI available)
-127 = turning left at more than 5 deg per 30 s (No TI available)
-128 (80 hex) indicates no turn information available (default).
ROT data should not be derived from COG information.
posAcc Position accuracy, The position accuracy (PA) flag should be determined in accordance with the table below:
1 = high (<= 10 m)
0 = low (> 10 m)
0 = default
See https://www.navcen.uscg.gov/?pageName=AISMessagesA#RAIM
raim RAIM-flag Receiver autonomous integrity monitoring (RAIM) flag of electronic position fixing device; 0 = RAIM not in use = default; 1 = RAIM in use. See Table https://www.navcen.uscg.gov/?pageName=AISMessagesA#RAIM
Check https://en.wikipedia.org/wiki/Receiver_autonomous_integrity_monitoring
heading True heading, Degrees (0-359) (511 indicates not available = default)
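A minimal reader for the position file, assuming the columns appear in the order listed above and the file carries no header row (the sample row values are hypothetical):

```python
import csv
import io

# Column order as documented for YYYY-MM-DD-location.csv
FIELDS = ["timestamp", "timestampExternal", "mmsi", "lon", "lat",
          "sog", "cog", "navStat", "rot", "posAcc", "raim", "heading"]

# Hypothetical one-row sample standing in for a real file
sample = io.StringIO(
    "34,1609459200000,230123456,24.945831,60.192059,12.3,87.5,0,0,1,0,88\n"
)

rows = list(csv.DictReader(sample, fieldnames=FIELDS))
for row in rows:
    print(row["mmsi"], row["lat"], row["lon"])
```

For a real file, replace the StringIO object with `open("YYYY-MM-DD-location.csv")`.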
YYYY-MM-DD-metadata.csv
This file contains the received AIS metadata: the ship static and voyage related data. The structure of the logged parameters is the following: [timestamp, destination, mmsi, callSign, imo, shipType, draught, eta, posType, pointA, pointB, pointC, pointD, name]
timestamp The timestamp associated with the MQTT message received from www.digitraffic.fi. It is assumed this timestamp is the Epoch time corresponding to when the AIS message was received by digitraffic.fi.
destination Maximum 20 characters using 6-bit ASCII; @@@@@@@@@@@@@@@@@@@@ = not available. For SAR aircraft, the use of this field may be decided by the responsible administration.
mmsi MMSI number, Maritime Mobile Service Identity (MMSI) is a unique 9 digit number that is assigned to a (Digital Selective Calling) DSC radio or an AIS unit. Check https://en.wikipedia.org/wiki/Maritime_Mobile_Service_Identity
callSign 7 × 6-bit ASCII characters; @@@@@@@ = not available = default. Craft associated with a parent vessel should use “A” followed by the last 6 digits of the MMSI of the parent vessel. Examples of these craft include towed vessels, rescue boats, tenders, lifeboats and liferafts.
imo 0 = not available = default – Not applicable to SAR aircraft
0000000001-0000999999 not used
0001000000-0009999999 = valid IMO number;
0010000000-1073741823 = official flag state number.
Check: https://en.wikipedia.org/wiki/IMO_number
shipType
0 = not available or no ship = default
1-99 = ship types as defined in the references below
100-199 = reserved, for regional use
200-255 = reserved, for future use Not applicable to SAR aircraft
Check https://www.navcen.uscg.gov/pdf/AIS/AISGuide.pdf and https://www.navcen.uscg.gov/?pageName=AISMessagesAStatic
draught In 1/10 m; 255 = draught 25.5 m or greater; 0 = not available = default; in accordance with IMO Resolution A.851. Not applicable to SAR aircraft; should be set to 0.
eta Estimated time of arrival; MMDDHHMM UTC
Bits 19-16: month; 1-12; 0 = not available = default
Bits 15-11: day; 1-31; 0 = not available = default
Bits 10-6: hour; 0-23; 24 = not available = default
Bits 5-0: minute; 0-59; 60 = not available = default
For SAR aircraft, the use of this field may be decided by the responsible administration
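Assuming eta is logged as the packed 20-bit integer described above (rather than a pre-decoded string), the bit fields can be unpacked like this:

```python
def decode_eta(raw: int) -> dict:
    """Unpack ETA bit fields: bits 19-16 month, 15-11 day,
    10-6 hour, 5-0 minute. Per-field 'not available' sentinels
    (month 0, day 0, hour 24, minute 60) become None."""
    minute = raw & 0x3F          # bits 5-0
    hour = (raw >> 6) & 0x1F     # bits 10-6
    day = (raw >> 11) & 0x1F     # bits 15-11
    month = (raw >> 16) & 0xF    # bits 19-16
    return {
        "month": month if month else None,
        "day": day if day else None,
        "hour": None if hour == 24 else hour,
        "minute": None if minute == 60 else minute,
    }
```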
posType Type of electronic position fixing device
0 = undefined (default)
1 = GPS
2 = GLONASS
3 = combined GPS/GLONASS
4 = Loran-C
5 = Chayka
6 = integrated navigation system
7 = surveyed
8 = Galileo
9-14 = not used
15 = internal GNSS
pointA Reference point for reported position.
Also indicates the dimensions of the ship (m). For SAR aircraft, the use of this field may be decided by the responsible administration; if used, it should indicate the maximum dimensions of the craft. By default, A = B = C = D should be set to 0.
Check: https://www.navcen.uscg.gov/?pageName=AISMessagesAStatic#_Reference_point_for
pointB See above
pointC See above
pointD See above
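Per the linked reference, A is the distance from the reported position to the bow, B to the stern, C to port, and D to starboard, so length and beam follow by addition. A small helper, assuming the fields are plain integers in metres:

```python
def ship_dimensions(a: int, b: int, c: int, d: int):
    """Overall length = A + B, beam = C + D, in metres.
    A = B = C = D = 0 is the 'not available' default."""
    if a == b == c == d == 0:
        return None, None
    return a + b, c + d
```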
name Maximum 20 characters, 6-bit ASCII; "@@@@@@@@@@@@@@@@@@@@" = not available = default. The name should be as shown on the station radio license. For SAR aircraft, it should be set to “SAR AIRCRAFT NNNNNNN” where NNNNNNN equals the aircraft registration number.