Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Ship detection plays an important role in port management, ship traffic monitoring, maritime rescue, cargo transportation, and national defense. Satellite imagery provides data with high spatial and temporal resolution, which is useful for ship detection. SAR data has advantages over optical data, as microwaves penetrate clouds and can be used in all weather conditions. SAR data is also useful for locating ships during storms for rescue missions.
Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check the Deep Learning Libraries Installer for ArcGIS.
Fine-tuning the model: This model can be fine-tuned using the Train Deep Learning Model tool. Follow the guide to fine-tune this model.
Input: Sentinel-1 C-band SAR VV polarization band raster.
Output: Feature class containing detected ships as polygons.
Model architecture: This model uses the Faster R-CNN architecture implemented in the ArcGIS API for Python.
Accuracy metrics: The model has an average precision score of 0.70 on our validation dataset.
Training data: The deep learning model was trained on the Large-Scale SAR Ship Detection Dataset-v1.0 (LS-SSDD-v1.0), which was prepared using Sentinel-1 imagery.
Sample results: Here are a few results from the model. To view more, see this story.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
SAR Ship Dataset is a dataset for object detection tasks - it contains Ship annotations for 39,584 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
This dataset, labeled by SAR experts, was created using 102 Chinese Gaofen-3 images and 108 Sentinel-1 images. It consists of 39,729 ship chips (after removing some duplicate chips) of 256 pixels in both range and azimuth. These ships mainly have distinct scales and backgrounds. The dataset can be used to develop object detectors for multi-scale and small object detection.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
To cite the dataset, please reference it as:

@INPROCEEDINGS{8124934,
  author={Li, Jianwei and Qu, Changwen and Shao, Jiaqi},
  booktitle={2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA)},
  title={Ship detection in SAR images based on an improved faster R-CNN},
  year={2017},
  pages={1-6},
  keywords={Marine vehicles;Feature extraction;Synthetic aperture radar;Proposals;Detectors;Image resolution;Deep learning;SAR;ship detection;Faster R-CNN}}

See the full description on the dataset page: https://huggingface.co/datasets/agungpambudi/sar-ship-detection.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
As an outstanding method for ocean monitoring, synthetic aperture radar (SAR) has received much attention from scholars in recent years. With rapid advances in SAR technology and image processing, significant progress has also been made in ship detection in SAR images. When dealing with large-scale ships on a wide sea surface, most existing algorithms achieve great detection results. However, small ships in SAR images carry little feature information, making them difficult to detect against background clutter, which leads to low detection rates and high false-alarm rates. To improve detection accuracy for small-scale ships, we propose an efficient ship detection model based on YOLOX, called YOLO-SD. First, Multi-Scale Convolution (MSC) is proposed to fuse feature information at different scales so as to resolve the problem of unbalanced semantic information in the lower layers and improve the ability of feature extraction. Further, the Feature Transformer Module (FTM) is designed to capture global features and link them to the context, optimizing high-layer semantic information and ultimately achieving excellent detection performance. Extensive experiments on the HRSID and LS-SSDD-v1.0 datasets show that YOLO-SD achieves better detection performance than the baseline YOLOX. Compared with other excellent object detection models, YOLO-SD still has an edge in overall performance.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
SARFish is a Synthetic Aperture Radar (SAR) imagery dataset for the purpose of training, validating and testing supervised machine learning models on the tasks of ship detection, classification, and length regression. The SARFish dataset builds on the excellent work of the xView3-SAR dataset (2021) and consists of two parts:
Data - Extends the xView3-SAR dataset to include Single Look Complex (SLC) as well as Ground Range Detected (GRD) imagery data taken directly from the European Space… See the full description on the dataset page: https://huggingface.co/datasets/ConnorLuckettDSTG/SARFish.
https://dataintelo.com/privacy-and-policy
According to our latest research, the global Ship Detection in SAR with AI market size reached USD 1.32 billion in 2024, backed by rapid advancements in artificial intelligence and synthetic aperture radar (SAR) technologies. The sector is expected to register a robust CAGR of 15.4% from 2025 to 2033, propelling the market to a projected value of USD 4.27 billion by 2033. This growth is primarily driven by the increasing demand for enhanced maritime surveillance, security, and environmental monitoring across both governmental and commercial sectors worldwide.
The primary growth factor fueling the Ship Detection in SAR with AI market is the escalating need for real-time maritime situational awareness. With global trade heavily reliant on maritime transport and the persistent threat of illegal activities such as smuggling, piracy, and unregulated fishing, governments and private entities are investing in advanced SAR systems integrated with AI-based ship detection algorithms. These solutions provide unparalleled capabilities in detecting, classifying, and tracking vessels under all weather and lighting conditions, significantly improving operational efficiency and response times. Furthermore, the rapid proliferation of high-resolution SAR satellites and the integration of machine learning models have made it possible to automate the detection process, reducing human error and operational costs.
Another significant driver is the increasing adoption of AI-powered SAR solutions for environmental monitoring and disaster management. Oil spills, illegal dumping, and marine pollution have become pressing global issues, requiring constant surveillance of vast oceanic expanses. The ability of AI-enhanced SAR systems to detect anomalies, monitor ship routes, and identify environmental hazards in near real-time has positioned them as indispensable tools for environmental agencies and non-governmental organizations. Additionally, the integration of cloud-based analytics and big data processing capabilities enables stakeholders to analyze vast datasets efficiently, facilitating proactive decision-making and timely interventions in critical situations.
Technological advancements in SAR hardware and AI software are further catalyzing market growth. The miniaturization of SAR sensors, increased satellite launch frequency, and improved onboard processing power have collectively lowered the barriers to entry for smaller players and commercial end-users. Meanwhile, the evolution of deep learning and neural network architectures has significantly enhanced the accuracy and reliability of ship detection algorithms, even in challenging environments such as dense shipping lanes or adverse weather conditions. These innovations are not only expanding the application scope of ship detection in SAR with AI but are also fostering collaborations between technology providers, satellite operators, and end-users.
Regionally, Asia Pacific is emerging as the fastest-growing market, driven by the increasing maritime security concerns in the South China Sea, rapid expansion of commercial shipping activities, and substantial investments by governments in satellite infrastructure. North America and Europe continue to maintain a stronghold due to their established defense sectors and early adoption of cutting-edge surveillance technologies. Meanwhile, the Middle East & Africa and Latin America are witnessing growing interest, particularly for applications in port management and search and rescue operations. These regional trends underscore the global relevance and cross-sectoral impact of ship detection in SAR with AI technology.
The Component segment of the Ship Detection in SAR with AI market is categorized into Software, Hardware, and Services, each playing a pivotal role in the overall value chain. The software segment, which encompasses AI-based detection algorithms, data analytics platforms, and visualization tools, is witnessing the fastest growth. This surge is attributed to the increasing sophistication of AI models, which are now capable of processing massive SAR datasets with exceptional accuracy, distinguishing between different vessel types, and minimizing false positives. The integration of cloud-based software solutions further enhances scalability, enabling real-time access to processed imagery and analytics from any location, thereby broadening the user base across…
Simulated SAR images of the sea surface and ship wakes
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The current challenges in Synthetic Aperture Radar (SAR) ship detection revolve around handling significant variations in target sizes and managing high computational expense, which hinder practical deployment on satellite or mobile airborne platforms. In response to these challenges, this research presents YOLOv7-LDS, a lightweight yet highly accurate SAR ship detection model built upon the YOLOv7 framework. At the core of YOLOv7-LDS's architecture, we introduce a streamlined feature extraction network that strikes a delicate balance between detection precision and computational efficiency. This network is founded on ShuffleNetV2 and incorporates Squeeze-and-Excitation (SE) attention mechanisms as its key elements. Additionally, in the Neck section, we introduce the Weighted Efficient Aggregation Network (DCW-ELAN), a fundamental feature extraction module that leverages Coordinate Attention (CA) and Depthwise Convolution (DWConv). This module efficiently aggregates features while preserving the ability to identify small-scale variations, ensuring top-quality feature extraction. Furthermore, we introduce a lightweight Spatial Pyramid Dilated Convolution Cross-Stage Partial Channel (LSPHDCCSPC) module. LSPHDCCSPC is a condensed version of the Spatial Pyramid Pooling Cross-Stage Partial Channel (SPPCSPC) module, incorporating Dilated Convolution (DConv) as a central component for extracting multi-scale information. Experimental results show that YOLOv7-LDS achieves a remarkable mean average precision (mAP) of 99.1% and 95.8% on the SAR Ship Detection Dataset (SSDD) and the NWPU VHR-10 dataset, respectively, with a parameter count of 3.4 million, 6.1 giga floating-point operations (GFLOPs), and an inference time of 4.8 milliseconds. YOLOv7-LDS effectively strikes a fine balance between computational cost and detection performance, surpassing many current state-of-the-art object detection models.
As a result, it offers a more resilient solution for maritime ship monitoring.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Synthetic Aperture Radar (SAR), renowned for its all-weather monitoring capability and high-resolution imaging characteristics, plays a pivotal role in ocean resource exploration, environmental surveillance, and maritime security. It has become a fundamental technological support in marine science research and maritime management. However, existing SAR ship detection algorithms encounter two major challenges: limited detection accuracy and high computational cost, primarily due to the wide range of target scales, indistinct contour features, and complex background interference. To address these challenges, this paper proposes AC-YOLO, a novel lightweight SAR ship detection model based on YOLO11. Specifically, we design a lightweight cross-scale feature fusion module that adaptively fuses multi-scale feature information, enhancing small target detection while reducing model complexity. Additionally, we construct a hybrid attention enhancement module, integrating convolutional operations with a self-attention mechanism to improve feature discrimination without compromising computational efficiency. Furthermore, we propose an optimized bounding box regression loss function, the Minimum Point Distance Intersection over the Union (MPDIoU), which establishes multi-dimensional geometric metrics to accurately characterize discrepancies in overlap area, center distance, and scale variation between predicted and ground truth boxes. Experimental results demonstrate that, compared with the baseline YOLO11 model, AC-YOLO reduces parameter count by 30.0% and computational load by 15.6% on the SSDD dataset, with an average precision (AP) improvement of 1.2%; on the HRSID dataset, the AP increases by 1.5%. This model effectively reconciles the trade-off between complexity and detection accuracy, providing a feasible solution for deployment on edge computing platforms. The source code for the AC-YOLO model is available at: https://github.com/He-ship-sar/ACYOLO.
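The MPDIoU criterion mentioned above can be sketched in a few lines: it penalizes the plain IoU by the squared distances between the matching top-left and bottom-right corners of the predicted and ground-truth boxes, normalized by the image size. This is a sketch of the general MPDIoU formulation, not AC-YOLO's exact loss implementation; the function name and box format are illustrative assumptions.

```python
def mpdiou(box_p, box_g, img_w, img_h):
    """MPDIoU between two boxes given as (x1, y1, x2, y2) corners.
    Sketch of the general formulation; the training loss would be
    1 - mpdiou(...)."""
    # Intersection rectangle and plain IoU
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter)
    # Squared distances between matching top-left and bottom-right corners
    d_tl = (box_p[0] - box_g[0]) ** 2 + (box_p[1] - box_g[1]) ** 2
    d_br = (box_p[2] - box_g[2]) ** 2 + (box_p[3] - box_g[3]) ** 2
    norm = img_w ** 2 + img_h ** 2  # normalize by the squared image diagonal
    return iou - d_tl / norm - d_br / norm
```

Identical boxes give an MPDIoU of 1; any corner offset pushes the score below the plain IoU, which is what lets the loss distinguish overlap, center distance, and scale mismatch at once.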
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Ship SAR Less is a dataset for object detection tasks - it contains Ship annotations for 1,958 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Sar Ship Dtetct is a dataset for object detection tasks - it contains Ship annotations for 7,000 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
http://www.apache.org/licenses/LICENSE-2.0
The dataset is derived from Sentinel-2 Level-2A (L2A) satellite images and focuses on the marine domain over Danish fjords. It provides a comprehensive collection of ship wakes and background clutter (referred to as "no_wake_crop") for remote sensing applications. The dataset has undergone post-processing through the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm with a clip limit value of 0.12 and a tile size of 16x16. The dataset comprises four spectral bands: B2, B3, B4, and B8.
Ship wake detection serves as a cornerstone in a multitude of domains that are critical to both human and environmental well-being:
Navigational Safety: Understanding ship wakes can provide insights into water currents and traffic patterns. This is vital for ensuring the safe passage of marine vessels, particularly in narrow straits and busy ports.
Environmental Monitoring: The study of ship wakes can reveal the influence of vessels on aquatic ecosystems. For instance, excessive wake turbulence can lead to coastal erosion and can disrupt marine habitats.
Maritime Surveillance: Wake detection plays a crucial role in maintaining maritime security. Tracking the wakes of vessels can help in identifying illegal activities such as smuggling or unauthorized fishing.
Traditionally, the process of ship wake detection has largely been a manual endeavor or employed simplistic statistical algorithms. Analysts would sift through satellite or aerial images to identify ship wakes, a process that is both time-consuming and prone to human error. Even automated statistical methods often lack the robustness needed to differentiate between true wakes and false positives, such as aquatic plants or natural water disturbances.
The introduction of explainable AI (xAI) techniques brings another layer of sophistication to wake analysis. While traditional machine learning models may offer high performance, they often act as "black boxes," making it difficult to understand how they arrive at a certain conclusion. In a critical domain like navigational safety or maritime surveillance, the ability to interpret and understand model decisions is indispensable. xAI methods can make these machine learning models more transparent, providing insights into their decision-making processes, which in turn can aid in fine-tuning or fully trusting the models.
The inclusion of four key spectral bands—B2, B3, B4, and B8—offers the scope for multi-spectral analysis. Different bands can capture varying features of water and wake textures, thereby offering a richer feature set for machine learning models. We use these spectral bands as referred to in [Liu, Yingfei, Jun Zhao, and Yan Qin. "A novel technique for ship wake detection from optical images." Remote Sensing of Environment 258 (2021): 112375.]
It is important to note the fundamental differences between wakes captured in Synthetic Aperture Radar (SAR) images and those in optical imagery. In SAR images, narrow-V wakes often arise due to Bragg scattering, a phenomenon that does not exist at optical wavelengths. In optical images, bright lines close to turbulent wakes are actually foams generated by the interaction between the surface horizontal flow of turbulent wakes and the surrounding background waves (Ermakov et al., 2014; Milgram et al., 1993; Peltzer et al., 1992). This can make the detection of wakes in optical images more challenging as there are usually no bright lines near turbulent wakes, and Kelvin arms may also show dark contrast. Methods that solely rely on searching for a trough and peak pair, taking the trough as the turbulent wake, would miss many actual wakes and could also result in the identification of false wakes.
The application of the CLAHE (Contrast Limited Adaptive Histogram Equalization) algorithm to this dataset allows for enhanced local contrast, enabling subtle features to become more pronounced. This significantly aids machine learning algorithms in feature extraction, thereby improving their ability to distinguish between complex patterns.
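As a rough illustration of the contrast-limiting idea behind CLAHE, here is a simplified per-tile sketch in NumPy. Full CLAHE additionally interpolates the equalization mappings bilinearly between neighbouring tiles to avoid blocking artifacts, and clip-limit semantics vary between implementations, so the way `clip_limit=0.12` is interpreted here (a fraction of the tile's pixel count) is an assumption, not necessarily how this dataset was produced.

```python
import numpy as np

def clip_limited_equalize(tile, clip_limit=0.12, nbins=256):
    """Contrast-limited histogram equalization of one tile of values
    in [0, 1). The histogram is clipped at clip_limit * pixel count
    and the excess is redistributed before building the mapping."""
    hist, _ = np.histogram(tile, bins=nbins, range=(0.0, 1.0))
    limit = max(1, int(clip_limit * tile.size))
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess // nbins  # redistribute excess
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]  # normalize the mapping to [0, 1]
    idx = np.clip((tile * nbins).astype(int), 0, nbins - 1)
    return cdf[idx]

def clahe_tiles(img, tile=16, clip_limit=0.12):
    """Apply the clipped equalization independently per tile
    (16x16 here, mirroring the dataset description); no inter-tile
    interpolation, unlike full CLAHE."""
    out = np.empty_like(img, dtype=float)
    for r in range(0, img.shape[0], tile):
        for c in range(0, img.shape[1], tile):
            block = img[r:r + tile, c:c + tile]
            out[r:r + tile, c:c + tile] = clip_limited_equalize(block, clip_limit)
    return out
```

Clipping the histogram bounds how steep the equalization mapping can get, which is what keeps CLAHE from amplifying sensor noise in near-uniform water regions while still stretching subtle wake contrast.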
In addition to wakes, the dataset contains samples labeled as "No-Wake," which include environmental clutter and clouds. These samples are crucial for training robust models that can differentiate wakes from similar-looking natural phenomena.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
SAR SSDD Dataset is a dataset for object detection tasks - it contains Objects annotations for 1,146 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Vessel detections with Sentinel-2 satellite imagery from 2019 to 14 days ago. Multiyear detection of vessels with respective length, speed and orientation estimates based on deep-learning models applied to electro-optical imagery (RGB and near-infrared) at 10-m resolution covering most exclusive economic zones and marine protected areas across the ocean.
Sentinel-2 detections are based on electro-optical imagery at 10-m resolution. Compared with our published Sentinel-1 detections based on SAR imagery at 20-m resolution, Sentinel-2 allows detecting smaller vessels with all types of vessel materials, in contrast with the weakness of SAR imagery on revealing vessels made of wood and fiberglass. Sentinel-2 also images the wakes of moving vessels, which further increase the detectability of small vessels and allow us to infer the vessels’ speed and orientation.The rich information contained in the optical imagery can be used to map not only vessel presence, but also vessel activities such as vessel encounters and bottom trawling. Sentinel-2 also covers more area of the ocean than Sentinel-1, and the novel neural-net detection approach does not require the exclusion of near-shore regions (as opposed to the CFAR approach), allowing detection of vessels in highly-packed areas all the way to the shoreline where most human activity is concentrated. Overall, with Sentinel-2 imagery we are able to detect about 3 times more vessels, “see” a broader range of vessel lengths and types, and infer more information of vessel activities than in our previous mapping using Sentinel-1 imagery.
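The classical CFAR baseline mentioned above can be illustrated with a minimal cell-averaging CFAR (CA-CFAR) sketch: each pixel is flagged when it exceeds a multiple of the mean backscatter in a surrounding training ring, with guard cells excluded. The window sizes and threshold factor below are illustrative assumptions, not parameters from any of the datasets described; operational detectors also add land masking, which is why near-shore regions are often excluded.

```python
import numpy as np

def ca_cfar(img, guard=1, train=2, k=3.0):
    """Cell-averaging CFAR: flag a pixel as a target when it exceeds
    k times the mean of the training ring around it (guard cells and
    the cell under test excluded). Borders are left undetected."""
    h, w = img.shape
    r = guard + train
    n_train = (2 * r + 1) ** 2 - (2 * guard + 1) ** 2
    hits = np.zeros_like(img, dtype=bool)
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = img[i - r:i + r + 1, j - r:j + r + 1].copy()
            # zero out the guard cells and the cell under test
            window[train:-train, train:-train] = 0.0
            hits[i, j] = img[i, j] > k * window.sum() / n_train
    return hits
```

On a flat sea-clutter background a lone bright pixel stands out against its ring mean; the weakness CFAR has near shore is visible in this form too, since land pixels in the training ring inflate the local mean and suppress detections.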
Attribution-NoDerivs 3.0 (CC BY-ND 3.0): https://creativecommons.org/licenses/by-nd/3.0/
License information was derived automatically
This statistic illustrates the import volume of ships, vessels, and ferry boats for the transport of persons in Hong Kong SAR from 2007 to 2024, by trade partner.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The proposed AIS dataset covers a substantial temporal span of 20 months, from April 2021 to December 2022. This extensive coverage period empowers analysts to examine long-term trends and variations in vessel activities. Moreover, it helps researchers understand the potential influence of external factors, including weather patterns, seasonal variations, and economic conditions, on vessel traffic and behavior within Finnish waters.
This dataset encompasses an extensive array of data on vessel movements and activities across seas, rivers, and lakes. It is comprehensive in nature, covering a diverse range of ship types, such as cargo ships, tankers, fishing vessels, passenger ships, and various other categories.
The AIS dataset's most prominent attribute is its exceptional granularity, with a total of 2,293,129,345 data points. Such granular information helps analysts comprehend vessel dynamics and operations within Finnish waters. It enables the identification of patterns and anomalies in vessel behavior and facilitates an assessment of the potential environmental implications associated with maritime activities.
Please cite the following publication when using the dataset:
TBD
The publication is available at: TBD
A preprint version of the publication is available at TBD
CSV file structure
YYYY-MM-DD-location.csv
This file contains the received AIS position reports. The structure of the logged parameters is the following: [timestamp, timestampExternal, mmsi, lon, lat, sog, cog, navStat, rot, posAcc, raim, heading]
timestamp Believed to be the UTC second when the report was generated by the electronic position fixing system (EPFS) (0-59; 60 if the time stamp is not available, which is also the default value; 61 if the positioning system is in manual input mode; 62 if the electronic position fixing system operates in estimated (dead reckoning) mode; 63 if the positioning system is inoperative).
timestampExternal The timestamp associated with the MQTT message received from www.digitraffic.fi. It is assumed this timestamp is the Epoch time corresponding to when the AIS message was received by digitraffic.fi.
mmsi MMSI number, Maritime Mobile Service Identity (MMSI) is a unique 9 digit number that is assigned to a (Digital Selective Calling) DSC radio or an AIS unit. Check https://en.wikipedia.org/wiki/Maritime_Mobile_Service_Identity
lon Longitude in 1/10 000 min (+/-180 deg, East = positive (as per 2's complement), West = negative (as per 2's complement); 181 deg (6791AC0h) = not available = default)
lat Latitude in 1/10 000 min (+/-90 deg, North = positive (as per 2's complement), South = negative (as per 2's complement); 91 deg (3412140h) = not available = default)
sog Speed over ground in 1/10 knot steps (0-102.2 knots); 1023 = not available; 1022 = 102.2 knots or higher
cog Course over ground in 1/10 deg (0-3599). 3600 (E10h) = not available = default. 3601-4095 should not be used
navStat Navigational status, 0 = under way using engine, 1 = at anchor, 2 = not under command, 3 = restricted maneuverability, 4 = constrained by her draught, 5 = moored, 6 = aground, 7 = engaged in fishing, 8 = under way sailing, 9 = reserved for future amendment of navigational status for ships carrying DG, HS, or MP, or IMO hazard or pollutant category C, high speed craft (HSC), 10 = reserved for future amendment of navigational status for ships carrying dangerous goods (DG), harmful substances (HS) or marine pollutants (MP), or IMO hazard or pollutant category A, wing in ground (WIG); 11 = power-driven vessel towing astern (regional use); 12 = power-driven vessel pushing ahead or towing alongside (regional use); 13 = reserved for future use, 14 = AIS-SART (active), MOB-AIS, EPIRB-AIS 15 = undefined = default (also used by AIS-SART, MOB-AIS and EPIRB-AIS under test)
rot ROTAIS Rate of turn
0 to +126 = turning right at up to 708 deg per min or higher
0 to -126 = turning left at up to 708 deg per min or higher
Values between 0 and 708 deg per min coded by ROTAIS = 4.733 SQRT(ROTsensor) degrees per min where ROTsensor is the Rate of Turn as input by an external Rate of Turn Indicator (TI). ROTAIS is rounded to the nearest integer value.
+127 = turning right at more than 5 deg per 30 s (No TI available)
-127 = turning left at more than 5 deg per 30 s (No TI available)
-128 (80 hex) indicates no turn information available (default).
ROT data should not be derived from COG information.
posAcc Position accuracy, The position accuracy (PA) flag should be determined in accordance with the table below:
1 = high (<= 10 m)
0 = low (> 10 m)
0 = default
See https://www.navcen.uscg.gov/?pageName=AISMessagesA#RAIM
raim RAIM-flag Receiver autonomous integrity monitoring (RAIM) flag of electronic position fixing device; 0 = RAIM not in use = default; 1 = RAIM in use. See Table https://www.navcen.uscg.gov/?pageName=AISMessagesA#RAIM
Check https://en.wikipedia.org/wiki/Receiver_autonomous_integrity_monitoring
heading True heading, Degrees (0-359) (511 indicates not available = default)
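Putting the field descriptions above together, a single position-report row can be decoded into physical units as follows. This is a minimal sketch based solely on the scalings and sentinel values listed above; whether the CSV stores raw integer values or pre-scaled ones is an assumption to verify against the actual files, and the function and key names are illustrative.

```python
def decode_position_report(row):
    """Map raw AIS position-report integers to physical units,
    turning the 'not available' sentinels into None."""
    out = {}
    out["sog_knots"] = None if row["sog"] == 1023 else row["sog"] / 10.0
    out["cog_deg"] = None if row["cog"] == 3600 else row["cog"] / 10.0
    out["heading_deg"] = None if row["heading"] == 511 else row["heading"]
    # lon/lat are in 1/10000 min; 600000 units per degree.
    # 181 deg / 91 deg mean 'not available'.
    lon, lat = row["lon"] / 600000.0, row["lat"] / 600000.0
    out["lon_deg"] = None if lon == 181.0 else lon
    out["lat_deg"] = None if lat == 91.0 else lat
    rot = row["rot"]
    if rot == -128:
        out["rot_deg_per_min"] = None  # no turn information available
    elif rot in (127, -127):
        # turning faster than 5 deg per 30 s (no TI); only the sign is known
        out["rot_deg_per_min"] = float("inf") if rot > 0 else float("-inf")
    else:
        # invert ROTAIS = 4.733 * sqrt(ROTsensor), keeping the turn direction
        out["rot_deg_per_min"] = (1 if rot >= 0 else -1) * (rot / 4.733) ** 2
    return out
```

Note that the rot field needs the inverse of the ROTAIS = 4.733 √(ROTsensor) coding above, which is why the decoded rate grows quadratically with the stored value.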
YYYY-MM-DD-metadata.csv
This file contains the received AIS metadata: the ship static and voyage related data. The structure of the logged parameters is the following: [timestamp, destination, mmsi, callSign, imo, shipType, draught, eta, posType, pointA, pointB, pointC, pointD, name]
timestamp The timestamp associated with the MQTT message received from www.digitraffic.fi. It is assumed this timestamp is the Epoch time corresponding to when the AIS message was received by digitraffic.fi.
destination Maximum 20 characters using 6-bit ASCII; @@@@@@@@@@@@@@@@@@@@ = not available. For SAR aircraft, the use of this field may be decided by the responsible administration.
mmsi MMSI number, Maritime Mobile Service Identity (MMSI) is a unique 9 digit number that is assigned to a (Digital Selective Calling) DSC radio or an AIS unit. Check https://en.wikipedia.org/wiki/Maritime_Mobile_Service_Identity
callSign 7 x 6-bit ASCII characters; @@@@@@@ = not available = default. Craft associated with a parent vessel should use "A" followed by the last 6 digits of the MMSI of the parent vessel. Examples of such craft include towed vessels, rescue boats, tenders, lifeboats and liferafts.
imo 0 = not available = default – Not applicable to SAR aircraft
0000000001-0000999999 not used
0001000000-0009999999 = valid IMO number;
0010000000-1073741823 = official flag state number.
Check: https://en.wikipedia.org/wiki/IMO_number
shipType
0 = not available or no ship = default
1-99 = as defined below
100-199 = reserved, for regional use
200-255 = reserved, for future use Not applicable to SAR aircraft
Check https://www.navcen.uscg.gov/pdf/AIS/AISGuide.pdf and https://www.navcen.uscg.gov/?pageName=AISMessagesAStatic
draught In 1/10 m; 255 = draught of 25.5 m or greater; 0 = not available = default; in accordance with IMO Resolution A.851. Not applicable to SAR aircraft; should be set to 0.
eta Estimated time of arrival; MMDDHHMM UTC
Bits 19-16: month; 1-12; 0 = not available = default
Bits 15-11: day; 1-31; 0 = not available = default
Bits 10-6: hour; 0-23; 24 = not available = default
Bits 5-0: minute; 0-59; 60 = not available = default
For SAR aircraft, the use of this field may be decided by the responsible administration
posType Type of electronic position fixing device
0 = undefined (default)
1 = GPS
2 = GLONASS
3 = combined GPS/GLONASS
4 = Loran-C
5 = Chayka
6 = integrated navigation system
7 = surveyed
8 = Galileo
9-14 = not used
15 = internal GNSS
pointA Reference point for reported position.
Also indicates the dimensions of the ship (m). For SAR aircraft, the use of this field may be decided by the responsible administration. If used, it should indicate the maximum dimensions of the craft. By default, A = B = C = D should be set to "0".
Check: https://www.navcen.uscg.gov/?pageName=AISMessagesAStatic#_Reference_point_for
pointB See above
pointC See above
pointD See above
name Maximum 20 characters of 6-bit ASCII; "@@@@@@@@@@@@@@@@@@@@" = not available = default. The name should be as shown on the station radio license. For SAR aircraft, it should be set to "SAR AIRCRAFT NNNNNNN", where NNNNNNN equals the aircraft registration number.
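The packed ETA bit layout and the draught scaling described above can be decoded as follows. This sketch assumes the CSV stores eta as the raw packed integer (bits 19-16 month, 15-11 day, 10-6 hour, 5-0 minute) and draught as the raw 1/10 m integer; both assumptions should be verified against the actual files.

```python
def decode_eta(eta):
    """Unpack the 20-bit ETA field into MM/DD/HH/MM (UTC), mapping
    the 'not available' sentinels (month/day 0, hour 24, minute 60)
    to None."""
    minute = eta & 0x3F          # bits 5-0
    hour = (eta >> 6) & 0x1F     # bits 10-6
    day = (eta >> 11) & 0x1F     # bits 15-11
    month = (eta >> 16) & 0x0F   # bits 19-16
    return {
        "month": month if 1 <= month <= 12 else None,
        "day": day if 1 <= day <= 31 else None,
        "hour": hour if hour <= 23 else None,
        "minute": minute if minute <= 59 else None,
    }

def decode_draught(draught):
    """Draught is reported in 1/10 m; 0 means not available and 255
    caps at 25.5 m or greater."""
    return None if draught == 0 else draught / 10.0
```

Because every sub-field carries its own sentinel, a fully valid-looking eta integer can still contain an unavailable month or day, so each part has to be range-checked independently rather than testing the packed value against a single default.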
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically