Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
General Description
A manually annotated dataset, consisting of video frames and segmentation masks, for segmentation of forest-fire burned area in a video captured by a UAV. A detailed explanation of the dataset generation is available in the open-access article "Burned area semantic segmentation: A novel dataset and evaluation using convolutional networks".
Data Collection
The BurnedAreaUAV dataset derives from a video captured at latitude 41° 23' 37.56" and longitude -7° 37' 0.32", near Torre do Pinhão in northern Portugal, in an area of shrubby to herbaceous vegetation. The video was captured during the evolution of a prescribed fire using a DJI Phantom 4 PRO UAV equipped with an FC6310S RGB camera.
Video Overview
The video captures a prescribed fire in which the burned area grows progressively. At the beginning of the sequence a significant portion of the sensor's field of view is already burned, and the burned area expands as time goes by. The video was collected by an RGB sensor mounted on a drone that was kept in a nearly stationary position for the duration of the data collection.
The video is about 15 minutes long at a frame rate of 25 frames per second, amounting to 22,500 frames. Throughout this period the progression of the burned area is observed. The original video has a resolution of 720×1280 and is stored in H.264 (MPEG-4 Part 10) format. No audio signal was collected.
Manual Annotation
The annotation was done every 100 frames, which corresponds to a sampling period of 4 seconds, over the entire length of the video. Two classes are considered: burned_area and unburned_area. The training set consists of 226 frame-mask pairs and the test set of 23. The training and test annotations are offset by 50 frames.
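The sampling arithmetic can be checked directly: at 25 FPS, one annotation every 100 frames gives a 4-second period, a 22,500-frame video yields 226 training annotations, and the test frames listed in the archive (20250, 20350, ...) sit at a 50-frame offset. A quick sketch:

```python
FPS = 25
STEP = 100          # one annotation every 100 frames
TOTAL_FRAMES = 22500

# Sampling period implied by the annotation step
period_s = STEP / FPS  # 4.0 seconds

# Training annotations at frames 0, 100, ..., 22500 -> 226 frame-mask pairs
train_frames = list(range(0, TOTAL_FRAMES + 1, STEP))

# Test annotations offset by 50 frames; the archive lists
# frame_020250 ... frame_022450 -> 23 pairs
test_frames = list(range(20250, 22451, STEP))
```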
We plan to expand this dataset in the future.
File Organization (BurnedAreaUAV_v1.rar)
The data is available in PNG, JSON (Labelme format), and WKT (segmentation masks only). The raw video data is also made available.
Concomitantly, photos were taken from which metadata about the drone's position can be obtained, including altitude, coordinates, and the orientation of the drone and the camera. The geographic data regarding the location of the controlled fire are represented in a KML file that Google Earth and other geospatial software can read. We also provide two high-resolution orthophotos of the area of interest before and after burning.
The data produced by the segmentation models developed in "Burned area semantic segmentation: A novel dataset and evaluation using convolutional networks", comprising outputs in PNG and WKT formats, is also available upon request.
BurnedAreaUAV_dataset_v1.rar
MP4_video (folder)
-- original_prescribed_burn_video.mp4
PNG (folder)
train (folder)
frames (folder)
-- frame_000000.png (raster image)
-- frame_000100.png
-- frame_000200.png
…
msks (folder)
-- mask_000000.png
-- mask_000100.png
-- mask_000200.png
…
test (folder)
frames (folder)
-- frame_020250.png
-- frame_020350.png
-- frame_020450.png
…
msks (folder)
-- mask_020250.png
-- mask_020350.png
-- mask_020450.png
…
JSON (folder)
-- train_valid_json (folder)
-- frame_000000.json (Labelme format)
-- frame_000100.json
-- frame_000200.json
-- frame_000300.json
…
-- test_json (folder)
-- frame_020250.json
-- frame_020350.json
-- frame_020450.json
…
WKT_files (folder)
-- train_valid.wkt (list of masks polygons)
-- test.wkt
UAV photos (metadata)
-- uav_photo1_metadata.JPG
-- uav_photo2_metadata.JPG
High resolution orthophoto files
-- odm_orthophoto_afterBurning.png
-- odm_orthophoto_beforeBurning.png
Keyhole Markup Language file (area under study polygon)
-- pinhao_cell_precribed_area.kml
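Given the folder layout above, frame/mask pairs can be matched by the shared six-digit index in their filenames. A minimal sketch (the `split_dir` path is hypothetical; point it at the extracted PNG/train or PNG/test folder):

```python
from pathlib import Path

def pair_frames_and_masks(split_dir):
    """Pair frame_XXXXXX.png with mask_XXXXXX.png by their shared frame index.

    split_dir is expected to contain the "frames" and "msks" subfolders
    shown in the archive layout above.
    """
    split_dir = Path(split_dir)
    frames = {p.stem.split("_")[1]: p
              for p in (split_dir / "frames").glob("frame_*.png")}
    masks = {p.stem.split("_")[1]: p
             for p in (split_dir / "msks").glob("mask_*.png")}
    # Keep only indices present in both folders, in frame order
    shared = sorted(frames.keys() & masks.keys())
    return [(frames[k], masks[k]) for k in shared]
```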
Acknowledgements
This dataset results from activities developed in the context of projects partially funded by FCT - Fundação para a Ciência e a Tecnologia, I.P., through projects MIT-EXPL/ACC/0057/2021 and UIDB/04524/2020, and under the Scientific Employment Stimulus - Institutional Call - CEECINST/00051/2018.
The source code is available here.
http://dronedataset.icg.tugraz.at/
The primary goal of the Semantic Drone Dataset is to enhance the safety of autonomous drone flight and landing procedures through improved semantic comprehension of urban environments. This dataset comprises imagery captured from a bird's-eye (nadir) perspective, showcasing over 20 houses, taken at altitudes ranging from 5 to 30 meters above the ground. The images are acquired using a high-resolution camera with a size of 6000x4000 pixels (24 megapixels). The training set includes 400 publicly accessible images, while the test set consists of 200 private images. Additionally, the dataset provides bounding box annotations for person detection within both the training and test sets.
https://creativecommons.org/publicdomain/zero/1.0/
The Semantic Drone Dataset focuses on semantic understanding of urban scenes for increasing the safety of autonomous drone flight and landing procedures. The imagery depicts more than 20 houses from nadir (bird's eye) view acquired at an altitude of 5 to 30 meters above the ground. A high-resolution camera was used to acquire images at a size of 6000x4000px (24Mpx). The training set contains 400 publicly available images and the test set is made up of 200 private images.
This dataset is taken from https://www.kaggle.com/awsaf49/semantic-drone-dataset. We removed and added files and information needed for our research purposes. We created TIFF files with a resolution of 1200x800 pixels and 24 channels, each channel representing a class preprocessed from the PNG label files. We reduced the resolution and compressed the TIFF files with the tifffile Python library.
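The channel expansion the maintainers describe (one binary channel per class, produced from the PNG index labels) can be sketched as follows. This is an illustrative reconstruction, not the maintainers' code; writing the result to disk would then use the third-party tifffile library mentioned above.

```python
import numpy as np

def label_to_channels(label, num_classes=24):
    """Expand an H x W class-index label map into a (num_classes, H, W)
    binary stack, one channel per class, matching the 24-channel TIFF
    layout described above."""
    channels = np.zeros((num_classes,) + label.shape, dtype=np.uint8)
    for c in range(num_classes):
        channels[c] = (label == c)  # 1 where the pixel belongs to class c
    return channels
```

The resulting stack could then be written with something like `tifffile.imwrite(path, channels)`, per the compression step the maintainers mention.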
If you have any problems with the modified TIFF dataset, you can contact nunenuh@gmail.com and gaungalif@gmail.com.
This dataset is a copy of the original dataset (link below), with some improvements to the semantic data and classes. The semantic data is available in PNG and TIFF formats at a smaller size as needed.
The images are labelled densely using polygons and contain the following 24 classes:
unlabeled, paved-area, dirt, grass, gravel, water, rocks, pool, vegetation, roof, wall, window, door, fence, fence-pole, person, dog, car, bicycle, tree, bald-tree, ar-marker, obstacle, conflicting
> images
> labels/png
> labels/tiff
- class_to_idx.json
- classes.csv
- classes.json
- idx_to_class.json
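The mapping files (class_to_idx.json, idx_to_class.json) presumably encode a lookup between the 24 class names and integer indices. A sketch of such a mapping, assuming classes are indexed in the order listed above; the shipped JSON files are the authoritative source:

```python
import json

# Class order assumed from the listing above; load class_to_idx.json
# from the dataset for the real mapping.
CLASSES = ("unlabeled paved-area dirt grass gravel water rocks pool vegetation "
           "roof wall window door fence fence-pole person dog car bicycle tree "
           "bald-tree ar-marker obstacle conflicting").split()

class_to_idx = {name: i for i, name in enumerate(CLASSES)}
idx_to_class = {i: name for name, i in class_to_idx.items()}

# Round-trips through JSON the way the repository stores it
encoded = json.dumps(class_to_idx)
assert json.loads(encoded)["paved-area"] == 1
```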
aerial@icg.tugraz.at
If you use this dataset in your research, please cite the following URL: www.dronedataset.icg.tugraz.at
The Drone Dataset is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree:
That the dataset comes "AS IS", without express or implied warranty. Although every effort has been made to ensure accuracy, we (Graz University of Technology) do not accept any responsibility for errors or omissions. That you include a reference to the Semantic Drone Dataset in any work that makes use of the dataset. For research papers or other media link to the Semantic Drone Dataset webpage.
That you do not distribute this dataset or modified versions. It is permissible to distribute derivative works in as far as they are abstract representations of this dataset (such as models trained on it or additional annotations that do not directly include any of our data) and do not allow to recover the dataset or something similar in character. That you may not use the dataset or any derivative work for commercial purposes as, for example, licensing or selling the data, or using the data with a purpose to procure a commercial gain. That all rights not expressly granted to you are reserved by us (Graz University of Technology).
Raw-Microscopy:
- 940 raw bright-field microscopy images of human blood smear slides for leukocyte classification (microscopy/images/raw_scale100), with corresponding labels (microscopy/labels).
- 5,640 variations measured at six additional intensities (microscopy/images/raw_scale001-raw_scale0075).
- 11,280 images of the raw sensor data processed through twelve different pipelines (microscopy/images/processed_views).
Raw-Drone:
- 548 raw drone camera images for car segmentation (drone/images_tiles_256/raw_scale100), with corresponding binary segmentation masks (drone/masks_tiles_256). The images and masks are cropped from 12 raw drone camera images (drone/images_full/raw_scale100) and 12 masks (drone/masks_full) of size 3648 by 5472.
- 3,288 variations measured at six additional intensities (drone/images_tiles_256/raw_scale001-raw_scale075).
- 6,576 images of the raw sensor data processed through twelve different pipelines (drone/images_tiles_256/processed_views).
Detailed datasheets for the two datasets can be found in the appendices of https://arxiv.org/abs/2211.02578.
https://www.datainsightsmarket.com/privacy-policy
The market for oblique cameras for UAVs is experiencing robust growth, driven by increasing demand across diverse sectors. Applications such as close-range photogrammetry, digital China construction projects, cultural relic preservation, and bridge inspections are significantly fueling market expansion. The preference for higher-resolution imagery and the need for detailed 3D models are key factors contributing to this growth. While the precise market size for 2025 is unavailable, considering a conservative CAGR of 15% (a common growth rate for specialized tech markets) and an estimated 2024 market size of $250 million, the 2025 market size could be around $287.5 million. This growth is further bolstered by advancements in camera technology, particularly in full-frame and medium-frame sensors offering superior image quality and data capture capabilities. Segmentation by camera type (half-frame, medium-frame, full-frame) highlights a clear trend towards higher-end systems, reflecting the increasing need for detailed and accurate data in professional applications. Geographic distribution shows a strong presence in North America and Europe, driven by early adoption and well-established infrastructure in these regions. However, rapid development in Asia Pacific, particularly in China and India, suggests significant future growth potential. The market faces restraints including the high initial investment cost of UAV systems and specialized cameras, as well as regulatory hurdles in certain regions concerning UAV operations. Nevertheless, the ongoing technological advancements, coupled with the increasing need for high-quality aerial imagery across various industries, are expected to overcome these challenges, ensuring sustained market expansion throughout the forecast period (2025-2033). 
The leading companies are actively engaged in R&D, aiming for improved image processing, integration with advanced software, and enhanced user-friendliness to cater to diverse applications and user expertise levels.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Dataset is available under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
=======================Summary=======================
This dataset contains data showing a group of one or more drones (a drone swarm) roaming around a closed 3D space. Its purpose is to train and validate position tracking of drones in images, either using the 3D coordinates of each drone or using segmentation masks for machine-learning approaches.
The first part of the dataset is split across ten sequences (AirSim 1-10), generated using Unreal Engine with the Microsoft AirSim plugin. Each sequence is slightly modified to introduce more diversity into the dataset.
The second part (HML 1-3) is based on recordings performed in the Human Motion Laboratory located inside the Research and Development Center of the Polish-Japanese Academy of Information Technology.
The simulation sequences used the drone model DJI Mavic 2 Pro by MattMaksymowicz, CC Attribution, https://sketchfab.com/3d-models/dji-mavic-2-pro-3e5b8566dbe24f4ba65473179650abd1.
All documents and papers that use the dataset must acknowledge the use of the dataset by including a citation of the following paper: Lindenheim-Locher W, Świtoński A, Krzeszowski T, Paleta G, Hasiec P, Josiński H, Paszkuta M, Wojciechowski K, Rosner J. YOLOv5 Drone Detection Using Multimodal Data Registered by the Vicon System. Sensors. 2023; 23(14):6396. https://doi.org/10.3390/s23146396
=======================Dataset sequences=======================
AirSim 1: 4 drones based on DJI Mavic 2 Pro. Length: 20 seconds. Framerate: 25 FPS. Number of cameras: 8.
AirSim 2: 10 drones based on a custom model. All drones follow the same path, but each has a fixed offset applied in all axes. Length: 10 seconds. Framerate: 25 FPS. Number of cameras: 8.
AirSim 3: 10 drones based on a custom model. All drones follow the same path, but each has a fixed offset applied in all axes. Length: 10 seconds. Framerate: 25 FPS. Number of cameras: 8.
AirSim 4: 8 drones based on a custom model. All drones follow the same path, but each has a fixed offset applied in all axes. Length: 20 seconds. Framerate: 25 FPS. Number of cameras: 8.
AirSim 5: 6 drones based on a custom model. All drones follow the same path, but each has a fixed offset applied in all axes. Length: 20 seconds. Framerate: 25 FPS. Number of cameras: 8.
AirSim 6: 8 drones based on a custom model. All drones follow the same path, but each has a fixed offset applied in all axes. Length: 20 seconds. Framerate: 25 FPS. Number of cameras: 8.
AirSim 7: 8 drones based on three types of custom models. Length: 20 seconds. Framerate: 25 FPS. Number of cameras: 8.
AirSim 8: 8 drones based on three types of custom models. Length: 20 seconds. Framerate: 25 FPS. Number of cameras: 8.
AirSim 9: 8 drones based on three types of custom models. Length: 20 seconds. Framerate: 25 FPS. Number of cameras: 8.
AirSim 10: 2 drones based on a custom model. Length: 20 seconds. Framerate: 25 FPS. Number of cameras: 8.
HML 1: 1 real drone (DJI Mavic 2), controlled in real time. Length: 78 seconds. Framerate: 25 FPS. Number of cameras: 4.
HML 2: 1 real drone, controlled in real time. Length: 60 seconds. Framerate: 25 FPS. Number of cameras: 4.
HML 3: 1 real drone, controlled in real time. Length: 90 seconds. Framerate: 25 FPS. Number of cameras: 4.
=======================Additional information=======================
AirSim 1-10 sets consist of two types of image data: RGB images and masks.
RGB images are compressed to AVI videos to save space; the AVI filename contains the camera name. Each mask directory is named after the drone whose masks it contains. Mask files are named after the capturing camera and the frame ID; for example, cam_1_230.jpeg was taken by camera 1 at frame 230 of the sequence.
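Under the naming pattern above, the camera and frame IDs can be recovered from a mask filename with a small parser (illustrative sketch, not part of the dataset tooling):

```python
import re

# Matches mask filenames such as cam_1_230.jpeg
MASK_NAME = re.compile(r"cam_(?P<cam>\d+)_(?P<frame>\d+)\.jpeg$")

def parse_mask_name(filename):
    """Recover (camera id, frame id) from a mask filename, e.g. cam_1_230.jpeg."""
    m = MASK_NAME.search(filename)
    if m is None:
        raise ValueError(f"unexpected mask filename: {filename!r}")
    return int(m.group("cam")), int(m.group("frame"))
```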
HML 1-3 sets include two additional files: a c3d motion data file and an xcp calibration file.
=======================Further information=======================
For any questions, comments or other issues, please contact Tomasz Krzeszowski.
https://www.archivemarketresearch.com/privacy-policy
The HD drone camera market is experiencing robust growth, driven by increasing demand across diverse sectors. While precise figures for market size and CAGR are unavailable in the provided data, industry analysis suggests a significant expansion. Considering the presence of major players like Sony, Canon, and DJI, along with numerous specialized companies, the market size in 2025 is estimated to be around $2.5 billion. This is based on observed growth in related sectors like commercial drone usage and the continuous improvement in camera technology. The Compound Annual Growth Rate (CAGR) is projected to be approximately 15% from 2025 to 2033, indicating a substantial increase in market value by the end of the forecast period. This growth is fueled by advancements in image sensor technology, leading to higher resolution and improved low-light performance, making HD drone cameras indispensable for various applications. Key drivers include the rising adoption of drones for aerial photography, videography, surveillance, and mapping in industries like agriculture, construction, and infrastructure inspection. Trends toward miniaturization, enhanced image stabilization, and the integration of AI-powered features, such as object recognition and autonomous flight, are further bolstering market expansion. Despite these positive factors, challenges like stringent regulations on drone usage in certain regions, high initial investment costs for advanced models, and concerns about data privacy and security act as potential restraints on overall market growth. The segmentation of the market likely includes factors such as camera resolution, drone type (e.g., multirotor, fixed-wing), and application area, providing specific opportunities within the larger landscape.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Here are a few use cases for this project:
Wildfire Monitoring and Response: The Fire and Smoke Segmentation model can be used to detect and monitor wildfires through aerial or satellite imagery. This data can provide real-time insights into the progress of fires, help responders allocate resources more effectively, and identify high-risk areas for evacuation.
Emergency Response in Urban Areas: The model can assist in analyzing images from surveillance cameras, drone footage, or social media uploads and pinpoint exact locations of fires and smoke in cities. This information can help emergency services assess the severity of the situation, prioritize response, and coordinate efforts more effectively.
Industrial Accident Detection and Prevention: By monitoring facilities such as power plants, refineries, or factories, the Fire and Smoke Segmentation model can detect potential fire hazards or ongoing incidents. Automated alerts can be used to trigger emergency protocols and mitigate damages.
Fire and Smoke Damage Assessment: Post-incident analysis using this model can help insurance companies, government agencies, and property owners assess damage to structures and estimate losses. This data can be useful for claims processing, allocating financial aid, and planning reconstruction efforts.
Smoke Inhalation Risk Mapping: By identifying areas with high levels of smoke during fire incidents, the model can contribute to the creation of risk maps that inform people about areas to avoid for safety reasons. These smoke risk maps can be especially critical for individuals with respiratory conditions or compromised immune systems.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
New in Version 2: to our knowledge (01/2023), this is the largest high-quality (minimum size 400x400) dataset of its kind.
The dataset, called RIWA, provides pixel-wise binary river water segmentation. It consists of manually labelled smartphone, drone and DSLR images of rivers, as well as suitable images from the Water Segmentation Dataset and high-quality ADE20K images. The COCO dataset was withdrawn since its segmentation quality is extremely poor.
Version 2 (declared as Version 4 by Kaggle): contains 1142 training, 167 validation and 323 test images. Min size: 400 x 400 (h x w). High-quality segmentations. If you find an error, please message us.
Version 1: contains 789 training, 228 validation and 111 test images. Min size: 174 x 200 (h x w). Some segmentations are not perfect.
If you use this dataset, please cite as:
@misc{RIWA_Dataset,
title={River Water Segmentation Dataset (RIWA)},
url={https://www.kaggle.com/dsv/4901781},
DOI={10.34740/KAGGLE/DSV/4901781},
publisher={Kaggle},
author={Xabier Blanch and Franz Wagner and Anette Eltner},
year={2023}
}
Contact: Xabier Blanch, TU Dresden; Franz Wagner, TU Dresden; Anette Eltner, TU Dresden
In 2023, we carried out a comparison to find the best CNN on this domain. If you are interested, please see our paper: River water segmentation in surveillance camera images: A comparative study of offline and online augmentation using 32 CNNs.
We conducted the tests using the AiSeg GitLab repository, which can interactively train 2D and 3D CNNs, augment data offline and online, analyze single networks, compare multiple networks, and apply trained CNNs to new data. The RIWA dataset can be used directly.
The handling of natural disasters, especially heavy rainfall and corresponding floods, requires special demands on emergency services. The need to obtain a quick, efficient and real-time estimation of the water level is critical for monitoring a flood event. This is a challenging task and usually requires specially prepared river sections. In addition, in heavy flood events, some classical observation methods may be compromised.
With the technological advances derived from image-based observation methods and segmentation algorithms based on neural networks (NN), it is possible to generate real-time, low-cost monitoring systems. This new approach makes it possible to densify the observation network, improving flood warning and management. In addition, images can be obtained by remotely positioned cameras, preventing data loss during a major event.
The workflow we have developed for real-time monitoring consists of the integration of 3 different techniques. The first step consists of a topographic survey using Structure from Motion (SfM) strategies. In this stage, images of the area of interest are obtained using both terrestrial cameras and UAV images. The survey is completed by obtaining ground control point coordinates with multi-band GNSS equipment. The result is a 3D SfM model georeferenced to centimetre accuracy that allows us to reconstruct not only the river environment but also the riverbed.
The second step consists of segmenting the images obtained with a surveillance camera installed ad hoc to monitor the river. This segmentation is achieved with the use of convolutional neural networks (CNN). The aim is to automatically segment the time-lapse images obtained every 15 minutes. We have carried out this research by testing different CNN to choose the most suitable structure for river segmentation, adapted to each study area and at each time of the day (day and night).
The third step is based on the integration between the automatically segmented images and the 3D model acquired. The CNN-segmented river boundary is projected into the 3D SfM model to obtain a metric result of the water level based on the point of the 3D model closest to the image ray.
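The projection step can be approximated as a nearest-point-to-ray query: for the camera ray through a segmented boundary pixel, pick the 3D model point with the smallest perpendicular distance to the ray. A minimal numpy sketch; this is not the authors' implementation, and the function and variable names are hypothetical:

```python
import numpy as np

def closest_model_point(origin, direction, points):
    """Return (index, distance) of the model point nearest to the camera
    ray origin + t * direction (t >= 0)."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)                      # unit ray direction
    v = np.asarray(points, dtype=float) - np.asarray(origin, dtype=float)
    t = v @ d                                   # projection onto the ray
    t = np.clip(t, 0.0, None)                   # keep points in front of the camera
    foot = np.outer(t, d)                       # closest point on the ray, per model point
    dist = np.linalg.norm(v - foot, axis=1)     # perpendicular distance to the ray
    return int(np.argmin(dist)), float(dist.min())
```

In the described workflow, the selected 3D point's elevation would then give the metric water level estimate.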
The possibility of automating the segmentation and reprojection in the 3D model will allow the generation of a robust centimetre-accurate workflow, capable of estimating the water level in near real time both day and night. This strategy represents the basis for a better understanding of river flo...
https://www.archivemarketresearch.com/privacy-policy
The global selfie drone market is experiencing robust growth, driven by increasing smartphone penetration, the desire for unique content creation, and advancements in drone technology making them more accessible and user-friendly. While precise market sizing data is unavailable, considering the presence of major players like DJI, Xiaomi, and others, and the rapid adoption of consumer drone technology, a reasonable estimation places the 2025 market size at approximately $500 million. This reflects a significant increase from previous years and projects a Compound Annual Growth Rate (CAGR) of 15% from 2025 to 2033. This growth is fueled by several key trends, including the integration of advanced features like 4K video recording, improved image stabilization, and intelligent flight modes specifically tailored for capturing selfies and group shots. The miniaturization of drone technology also contributes to this rise in popularity, making selfie drones highly portable and convenient for consumers. However, certain restraints hinder market expansion. These include concerns regarding safety regulations, privacy issues surrounding aerial photography, and the relatively high initial cost of purchase for some models. Despite these challenges, the market’s positive trajectory suggests continued strong growth. This is further supported by the emergence of innovative features like obstacle avoidance systems and improved battery life, along with the increasing affordability of entry-level selfie drones. The market segmentation involves a range of drone sizes, features (such as camera quality and GPS capabilities), and price points catering to varying consumer needs and budgets. The competitive landscape, with established players like DJI and emerging competitors, is intensifying, further driving innovation and price competitiveness within the sector.
https://www.archivemarketresearch.com/privacy-policy
The pocket drone market is experiencing robust growth, driven by increasing consumer demand for compact, portable aerial photography and videography devices. Technological advancements, such as improved camera quality, longer flight times, and enhanced stabilization features in smaller form factors, are key catalysts. The market's ease of use and affordability are also contributing to its expansion, attracting both amateur and professional users. While precise market size figures for 2025 are unavailable, based on general market trends and considering the growth trajectory of similar consumer electronics, a reasonable estimation for the 2025 market size would be $500 million. Assuming a conservative Compound Annual Growth Rate (CAGR) of 15% from 2025 to 2033, the market is projected to reach approximately $2 billion by 2033. This growth is anticipated despite potential restraints such as regulatory hurdles related to drone usage and concerns surrounding privacy and data security. The competitive landscape is dynamic, with established players like DJI and GoPro competing alongside smaller, innovative companies such as Zerotech and Yuneec. Companies are focusing on strategic partnerships, product diversification, and technological innovation to maintain market share and capture emerging opportunities. Market segmentation will likely continue to evolve, with a focus on features like advanced camera capabilities, flight range, and integration with mobile devices. Future growth will be largely influenced by the adoption rate in emerging markets, the development of advanced functionalities such as autonomous flight modes, and the introduction of new, more sophisticated pocket drone models. Continued improvements in battery technology will also play a crucial role in extending flight times and enhancing the overall user experience.
https://www.archivemarketresearch.com/privacy-policy
The global auto-follow drone market, valued at $658 million in 2025, is projected to experience robust growth, exhibiting a compound annual growth rate (CAGR) of 3.9% from 2025 to 2033. This growth is fueled by several key factors. The increasing popularity of aerial photography and videography, particularly among amateur and professional content creators, is a significant driver. Consumers and businesses alike are drawn to the ease of use and creative possibilities offered by drones capable of autonomous subject tracking. Technological advancements, such as improved image stabilization, longer battery life, and more sophisticated obstacle avoidance systems, are further enhancing the appeal and functionality of these devices. The rising adoption of drones in various sectors, including real estate, agriculture, and surveillance, is also contributing to market expansion. While initial high prices might present a barrier to entry for some consumers, the ongoing trend of decreasing production costs and the emergence of more affordable models are expected to broaden market accessibility. Competition within the auto-follow drone market is intense, with established players like DJI, Yuneec, and Parrot competing alongside newer entrants like Autel and Skydio. The market is witnessing innovation in features such as enhanced object recognition capabilities, improved GPS accuracy for precise tracking, and the integration of smart features. The market segmentation is likely to expand, with variations appearing based on factors such as drone size, camera quality, battery life, and target user group (e.g., professionals versus hobbyists). Future growth will hinge on the continuous development of user-friendly interfaces, improved safety features, and regulatory clarity surrounding drone usage in various geographical areas. The market's success depends on addressing concerns around data privacy and ensuring responsible drone operation.
https://www.marketreportanalytics.com/privacy-policy
The FPV (First-Person View) drone market is experiencing robust growth, projected to reach a market size of $558 million in 2025, expanding at a compound annual growth rate (CAGR) of 13.7%. This surge is driven by several key factors. Firstly, the increasing affordability and accessibility of high-quality FPV drones are attracting a broader range of users, from professional filmmakers and photographers leveraging their maneuverability for unique shots, to amateur enthusiasts enjoying the thrill of immersive flight experiences. Technological advancements, such as improved camera stabilization, longer flight times, and enhanced video transmission systems, further fuel market expansion. The rise of FPV drone racing as a popular sport and the growth of online communities sharing content and tutorials also contribute significantly to market demand. Furthermore, the segment encompassing foldable drones is witnessing particularly strong growth due to portability and ease of transportation. The market is geographically diverse, with North America and Asia Pacific expected to be leading regions, driven by strong consumer demand and established technological infrastructure. However, regulatory hurdles related to drone operation and safety concerns remain potential restraints, requiring proactive solutions from both manufacturers and governing bodies. The market segmentation reveals a dynamic landscape. The professional use segment, encompassing applications in cinematography, inspection, and surveying, fuels high-value sales. The amateur segment shows rapid growth owing to its affordability and ease of use, supported by numerous consumer-friendly models. The distinction between folded and unfolded drones is crucial, with foldable drones experiencing higher demand due to their convenient portability and storage. Key players such as DJI, Hubsan, and iFlight are shaping the competitive landscape through innovation, expanding product lines, and aggressive marketing strategies. 
The forecast period of 2025-2033 suggests continued expansion, with anticipated growth driven by new technological advancements and sustained interest from both professional and amateur users. The robust growth in the FPV drone market positions it for significant expansion over the next decade.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is Part 2/2 of the ActiveHuman dataset! Part 1 can be found here.
Dataset Description
ActiveHuman was generated using Unity's Perception package.
It consists of 175,428 RGB images and their semantic segmentation counterparts, taken in different environments and lighting conditions and at different camera distances and angles. In total, the dataset covers 8 environments, 33 humans, 4 lighting conditions, 7 camera distances (1 m to 4 m), and 36 camera angles (0° to 360° at 10-degree intervals).
The dataset does not include an image for every combination of camera distance and angle: for some values the camera would collide with another object or leave the confines of the environment, so those combinations are absent.
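As a sanity check (the factor counts are from the dataset description above; treating them as a full parameter grid is an illustrative assumption), the gap between the theoretical grid and the actual image count can be computed:

```python
import math

# Capture-parameter counts as stated in the dataset description
factors = {
    "environments": 8,
    "humans": 33,
    "lighting_conditions": 4,
    "camera_distances": 7,   # 1 m to 4 m
    "camera_angles": 36,     # 0 to 360 degrees at 10-degree intervals
}

full_grid = math.prod(factors.values())  # every combination, if all were valid
actual_images = 175_428                  # RGB images actually present

missing = full_grid - actual_images
print(full_grid, actual_images, missing)
```

Under these assumptions the full grid would hold 266,112 combinations, so roughly a third are absent, consistent with the camera-collision exclusions described above.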
Alongside each image, 2D Bounding Box, 3D Bounding Box and Keypoint ground-truth annotations are also generated via Labelers and stored as a JSON-based dataset. Labelers are scripts responsible for capturing ground-truth annotations for each captured image or frame. Keypoint annotations follow the COCO format, as defined by the COCO keypoint annotation template offered in the Perception package.
Folder configuration
The dataset consists of 3 folders:
Essential Terminology
Dataset Data
The dataset includes 4 types of JSON annotation files:
Most Labelers generate different annotation specifications in the spec key-value pair:
Each Labeler generates different annotation specifications in the values key-value pair:
https://www.promarketreports.com/privacy-policy
The aerial mapping camera system market is experiencing robust growth, driven by increasing demand across various sectors. Advancements in sensor technology, particularly in high-resolution imagery and improved processing capabilities, are fueling market expansion. The integration of these systems with UAVs (Unmanned Aerial Vehicles) for cost-effective and efficient data acquisition is a significant trend. Furthermore, the rising adoption of GIS (Geographic Information Systems) and 3D modeling applications across construction, agriculture, and environmental monitoring further boosts demand. While challenges like high initial investment costs and regulatory hurdles exist, the overall market outlook remains positive. Assuming, for illustrative purposes, a 2025 market size of $500 million and a CAGR of 12% over the forecast period (2025-2033), the market would reach approximately $1.2 billion by 2033. The market is segmented by scanner type (Linear Array, Area Array) and application (manned and unmanned aircraft). Linear array scanners currently hold the larger market share due to their established technology and widespread adoption, though area array systems are expected to gain traction thanks to their higher speed and efficiency, particularly over larger areas. The unmanned aircraft segment shows the fastest growth rate, driven by the cost efficiency and accessibility of drone technology. Key players like Vexcel Imaging, Leica Geosystems, and Teledyne Optech are strategically investing in R&D and acquisitions to strengthen their market positions. North America and Europe currently dominate the market, with Asia-Pacific projected to grow fastest due to increasing infrastructure development and urbanization.
This report provides an in-depth analysis of the global aerial mapping camera system market, valued at approximately $2.5 billion in 2023, projecting a Compound Annual Growth Rate (CAGR) of 7% to reach $3.8 billion by 2028. It covers market segmentation, key trends, leading players, and future growth opportunities. This report is essential for businesses involved in surveying, mapping, agriculture, construction, and infrastructure development, as well as investors seeking opportunities in this rapidly evolving technology sector.
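The compound-growth arithmetic behind all of these projections is the same; a minimal sketch (the function name is ours, and the $500 million / 12% inputs are the illustrative assumption stated above):

```python
def project_cagr(base_value: float, cagr: float, years: int) -> float:
    """Project a value forward under a constant compound annual growth rate."""
    return base_value * (1.0 + cagr) ** years

# Illustrative figures from the text: $500M in 2025 growing at 12% CAGR to 2033
projected_2033 = project_cagr(500.0, 0.12, 2033 - 2025)
print(f"${projected_2033:,.0f}M")  # roughly $1,238M
```

The same helper reproduces any of the report figures once a base year, rate, and horizon are fixed; discrepancies between quoted endpoints and quoted CAGRs usually come down to a different base year being assumed.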
https://www.promarketreports.com/privacy-policy
The global renewable drone market is projected to grow from $674.52 million in 2025 to $10,124.06 million by 2033, at a CAGR of 38.94% over the forecast period. The rising adoption of drones for commercial applications such as aerial surveying and mapping, delivery and logistics, inspection and monitoring, agriculture, and security and surveillance is driving market growth, as is the increasing need for efficient, environmentally friendly solutions for these applications. The market is segmented by type, payload capacity, range, application, propulsion system, and region. The fixed-wing segment accounted for the largest share in 2025, driven primarily by the adoption of fixed-wing drones for long-range surveillance, mapping, and delivery. The multi-rotor segment is expected to see significant growth over the forecast period, owing to demand for close-range aerial inspections, photography, and videography, while the hybrid segment, which combines the advantages of fixed-wing and multi-rotor designs, is expected to grow fastest. A separate projection puts the market at $107.9 billion by 2032, growing at a CAGR of 38.94% from 2024 to 2032, with growth attributed to the increasing adoption of renewable energy sources such as solar and wind power and the need for efficient, cost-effective inspection and maintenance of renewable energy infrastructure. Key industry participants include DJI, Autel Robotics, Parrot, and Skydio. Recent developments include the launch of new drone models with improved flight time, range, and camera capabilities, as well as the development of software and AI technologies for autonomous drone operation.
Key drivers for this market are: clean energy initiatives, inspection and monitoring needs, precision agriculture, aerial data capture, and search and rescue operations. Potential restraints include: technological advancements, government regulations, growing demand for aerial inspection, advancements in battery technology, and increasing use in disaster response.
https://dataintelo.com/privacy-and-policy
According to our latest research, the global drone-based orchard canopy mapping market size reached USD 1.34 billion in 2024, reflecting robust adoption across key agricultural regions. The market is projected to expand at a CAGR of 15.2% from 2025 to 2033, reaching an estimated USD 4.23 billion by the end of the forecast period. This impressive growth trajectory is primarily driven by the increasing need for precision agriculture solutions, which optimize orchard management and crop yield while minimizing resource use and environmental impact.
The surge in demand for drone-based orchard canopy mapping is largely fueled by the agricultural sector’s transition toward digital and data-driven farming practices. As orchards face mounting pressures related to climate variability, labor shortages, and the need for sustainable resource management, drone technology has emerged as a vital tool for real-time data collection and analysis. The integration of advanced imaging sensors, such as multispectral and LiDAR, enables growers to assess canopy health, density, and coverage with unprecedented accuracy. These insights support timely interventions, improved crop planning, and ultimately, enhanced productivity and profitability for orchard operators.
Another significant growth factor is the rapid advancement in drone hardware and software capabilities. Modern drones are now equipped with high-resolution cameras, sophisticated navigation systems, and artificial intelligence-driven analytics platforms that automate complex image processing tasks. These technological improvements have made drone-based canopy mapping more accessible and cost-effective, even for small and medium-sized orchards. Furthermore, the proliferation of cloud-based data platforms allows for seamless sharing and collaborative analysis of mapping results, fostering wider adoption across the agricultural value chain.
Government initiatives and supportive regulatory frameworks are also playing a pivotal role in accelerating market growth. Many countries are investing in smart agriculture programs, offering subsidies and technical assistance to promote the use of drones for crop monitoring and resource management. Additionally, collaborations between research institutes, drone manufacturers, and agricultural agencies are driving innovation in mapping methodologies and expanding the range of actionable insights that can be derived from canopy data. These collective efforts are expected to further stimulate demand for drone-based orchard canopy mapping solutions in the coming years.
From a regional perspective, North America currently dominates the market, accounting for over 37% of global revenue in 2024, followed closely by Europe and Asia Pacific. While established agricultural economies continue to lead in technology adoption, emerging markets in Latin America and the Middle East & Africa are showing rapid growth due to rising investments in precision agriculture infrastructure. Each region presents unique opportunities and challenges, shaped by local crop types, regulatory environments, and technological readiness.
The solution segment of the drone-based orchard canopy mapping market is broadly categorized into hardware, software, and services. The hardware segment comprises drones, cameras, sensors, and related components essential for aerial data capture. Recent years have witnessed significant enhancements in drone endurance, payload capacity, and sensor resolution, enabling more detailed and frequent canopy mapping. Hardware sales continue to account for a substantial portion of market revenue, particularly as orchard operators upgrade to newer models that offer improved efficiency and reliability. The proliferation of affordable, user-friendly drones has also democratized access to canopy mapping, encouraging adoption among smaller growers who previously found such technology cost-prohibitive.
Software solutions are increasingly becoming the linchpin of value creation in this market. Advanced mapping software leverages machine learning and artificial intelligence to process and interpret vast volumes of aerial imagery, transforming raw data into actionable insights. These platforms enable functionalities such as automated canopy segmentation, health assessment, and change detection over time. Cloud-based software offerings, in particular, are gaining traction.
https://www.marketresearchforecast.com/privacy-policy
The global mapping oblique camera market is experiencing robust growth, driven by increasing demand across diverse sectors. The market's expansion is fueled by several key factors: the rising need for high-resolution imagery in applications such as precision agriculture, urban planning, and infrastructure development; advancements in camera technology leading to improved image quality and efficiency; and the increasing adoption of UAVs (Unmanned Aerial Vehicles) for aerial photography, lowering costs and increasing accessibility. The forestry and mining industries are significant contributors, utilizing oblique imagery for detailed terrain mapping and resource management. Furthermore, the growth of smart cities initiatives is stimulating demand for detailed 3D city models, further propelling market expansion. We estimate the market size to be approximately $1.5 billion in 2025, with a Compound Annual Growth Rate (CAGR) of 12% projected through 2033. This growth is anticipated to be relatively consistent across regions, although North America and Europe will likely maintain a larger market share due to early adoption and advanced technological infrastructure. However, the market faces certain restraints. High initial investment costs for high-end oblique cameras can be a barrier to entry for smaller businesses. Data processing and analysis require specialized software and expertise, potentially limiting wider adoption. Furthermore, regulatory hurdles surrounding UAV usage and data privacy in some regions may impede market growth. Despite these challenges, the long-term outlook for the mapping oblique camera market remains positive. The ongoing development of more affordable and user-friendly systems, coupled with the increasing availability of cloud-based processing solutions, is expected to drive wider market penetration across diverse applications and geographical locations. 
The segmentation by camera type (half-frame, full-frame) reflects varying needs for image resolution and project scale. The competitive landscape is characterized by a mix of established players and emerging technology companies, fostering innovation and competition within the market.
https://www.datainsightsmarket.com/privacy-policy
The global drone visible camera market is experiencing robust growth, driven by increasing demand across diverse sectors. The market's expansion is fueled by advancements in camera technology, offering higher resolutions, improved image stabilization, and enhanced functionalities such as zoom and thermal imaging. Applications in agriculture (precision farming, crop monitoring), construction (site surveying, progress tracking), and environmental monitoring (wildlife observation, pollution detection) are key contributors to this growth. The integration of AI and machine learning capabilities further enhances the analytical potential of drone imagery, boosting market adoption. While a precise 2025 market size is unavailable, a conservative estimate, assuming a moderate CAGR of 15% based on industry trends and a 2025 base of roughly $2 billion, would place the market at approximately $2.3 billion in 2026, with further growth fueled by new applications in sectors like social media and military surveillance. The market is segmented by camera type (built-in vs. external) and application, with built-in cameras currently dominating due to their seamless integration; the external camera segment, however, is anticipated to grow significantly owing to its flexibility and compatibility with various drone models. The competitive landscape mixes established players like Sony, Canon, and DJI with specialized drone camera manufacturers, resulting in a dynamic market with continuous innovation. Geographically, North America and Europe show a strong market presence, fueled by early adoption and technological advancements, while rapid growth is expected in the Asia-Pacific region, driven by infrastructure development and rising demand across applications, particularly in China and India.
While challenges remain, such as regulatory hurdles surrounding drone usage and concerns about data privacy and security, the overall outlook for the drone visible camera market remains highly positive, with significant growth potential over the forecast period. Further market penetration and technological improvements will continue to drive market expansion in the coming years.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data collection accompanies the AMT publication https://amt.copernicus.org/preprints/amt-2023-89/, titled "Drone-based photogrammetry combined with deep-learning to estimate hail size distributions and melting of hail on the ground".
Abstract:
Hail is a major threat associated with severe thunderstorms and estimating the hail size is important for issuing warnings to the public. For the validation of existing, operational, radar-derived hail estimates, ground-based observations are necessary. Automatic hail sensors, as for example within the Swiss hail network, record the kinetic energy of hailstones to estimate the hail sizes. Due to the small size of the observational area of these sensors (0.2 m2), the full hail size distribution (HSD) cannot be retrieved. To address this issue, we apply a state-of-the-art custom trained deep-learning object detection model to drone-based aerial photogrammetric data to identify hailstones and estimate the HSD. Photogrammetric data of hail on the ground was collected for one supercell thunderstorm crossing central Switzerland from southwest to northeast in the afternoon of 20 June 2021. The hail swath of this intense right-moving supercell was intercepted a few minutes after the passage at a soccer field near Entlebuch (Canton Lucerne, Switzerland) and aerial images were taken by a commercial DJI drone, equipped with a 45 megapixel full frame camera system. The resulting images have a ground sampling distance (GSD) of 1.5 mm per pixel, defined by the focal length of 35 mm of the camera and a flight altitude of 12 m above ground. A 2D orthomosaic model of the survey area (750.4 m2) is created based on 116 captured images during the first drone mapping flight. Hail is then detected by using a region-based Convolutional Neural Network (Mask R-CNN). We first characterize the hail sizes based on the individual hail segmentation masks resulting from the model detections and investigate the performance by using manual hail annotations by experts to generate validation and test data sets. 
The final HSD, composed of 18207 hailstones, is compared with nearby automatic hail sensor observations, the operational weather radar based hail product MESHS (Maximum Expected Severe Hail Size) and crowdsourced hail reports. Based on the retrieved data set, a statistical assessment of sampling errors of hail sensors is carried out. Furthermore, five repetitions of the drone-based photogrammetry mission within 18.65 min facilitate investigations into the hail melting process on the ground.
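The 1.5 mm per pixel GSD quoted in the abstract can be reproduced from the camera geometry; a minimal sketch, assuming a 36 mm-wide full-frame sensor with 8192 horizontal pixels (the abstract gives only "45 megapixel full frame", so the pixel pitch is an assumption):

```python
def ground_sampling_distance(pixel_pitch_m: float, altitude_m: float,
                             focal_length_m: float) -> float:
    """GSD: ground distance covered by one pixel for a nadir-pointing camera."""
    return pixel_pitch_m * altitude_m / focal_length_m

# Assumed sensor geometry: 36 mm full-frame width over 8192 horizontal pixels (~45 MP)
pixel_pitch = 36e-3 / 8192  # ~4.4 micrometres per pixel
gsd = ground_sampling_distance(pixel_pitch, altitude_m=12.0, focal_length_m=35e-3)
print(f"{gsd * 1000:.2f} mm/px")  # ~1.51 mm, matching the reported 1.5 mm
```

Under these assumptions the result lands on the stated value, which supports the 8192-pixel-wide sensor guess; a different sensor width or resolution would shift the pixel pitch proportionally.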
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
General Description
A manually annotated dataset, consisting of the video frames and segmentation masks, for segmentation of forest fire burned area based on a video captured by a UAV. A detailed explanation of the dataset generation is available in the open-access article "Burned area semantic segmentation: A novel dataset and evaluation using convolutional networks".
Data Collection
The BurnedAreaUAV dataset derives from a video captured at latitude 41° 23' 37.56" and longitude -7° 37' 0.32", at Torre do Pinhão in northern Portugal, in an area characterized by shrubby to herbaceous vegetation. The video was captured during the evolution of a prescribed fire using a DJI Phantom 4 PRO UAV equipped with an FC6310S RGB camera.
Video Overview
The video captures a prescribed fire in which the burned area increases progressively. At the beginning of the sequence, a significant portion of the UAV sensor's field of view is already burned, and the burned area expands as time goes by. The video was collected by an RGB sensor mounted on the drone, which was kept in a nearly stationary position for the duration of data collection.
The video is about 15 minutes long with a frame rate of 25 frames per second, amounting to 22,500 frames, over which the progression of the burned area is observed. The original video has a resolution of 720×1280 and is stored in H.264 (MPEG-4 Part 10) format. No audio signal was collected.
Manual Annotation
Annotation was done every 100 frames, corresponding to a sampling period of 4 seconds, over the entire length of the video. Two classes are considered: burned_area and unburned_area. The training set consists of 226 frame-mask pairs and the test set of 23. The training and test annotations are offset by 50 frames.
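The sampling scheme above pins down the annotated frame indices; a minimal sketch, assuming training annotations fall on multiples of 100 and the test split starts at frame 20250 (the first test frame in the file listing below, consistent with the 50-frame offset):

```python
FPS = 25
STEP = 100  # one annotated frame every 100 frames

# Training annotations: frames 0, 100, ..., 22500 (226 frame-mask pairs)
train_frames = list(range(0, 22_500 + 1, STEP))

# Test annotations: offset 50 frames from the training grid; assuming the split
# starts at frame 20250, the 23 pairs follow at the same 100-frame step
test_frames = list(range(20_250, 20_250 + 23 * STEP, STEP))

sampling_period_s = STEP / FPS  # 4.0 seconds between annotated frames
print(len(train_frames), len(test_frames), sampling_period_s)
```

Under these assumptions the counts match the stated split sizes (226 and 23), and every test index ends in 50, i.e. sits halfway between two training indices.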
We plan to expand this dataset in the future.
File Organization (BurnedAreaUAV_v1.rar)
The data is available in PNG, JSON (Labelme format), and WKT (segmentation masks only). The raw video data is also made available.
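The WKT files store the mask polygons as plain text; a minimal stdlib sketch for pulling the vertex coordinates out of one WKT POLYGON string (the example geometry is made up for illustration and simply uses the 1280×720 frame coordinate space):

```python
import re

def wkt_polygon_coords(wkt: str) -> list[tuple[float, float]]:
    """Extract (x, y) vertex pairs from a WKT POLYGON string."""
    pairs = re.findall(r"(-?\d+(?:\.\d+)?)\s+(-?\d+(?:\.\d+)?)", wkt)
    return [(float(x), float(y)) for x, y in pairs]

# Illustrative polygon; in WKT the first and last vertices close the ring
wkt = "POLYGON ((100 100, 600 120, 500 500, 100 100))"
ring = wkt_polygon_coords(wkt)
print(ring)  # [(100.0, 100.0), (600.0, 120.0), (500.0, 500.0), (100.0, 100.0)]
```

For production use a geometry library such as shapely would handle multi-ring polygons and validation, but for simple single-ring masks like these a regular expression suffices.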
Concomitantly, photos were taken that provide metadata on the position of the drone (including height and coordinates), the orientation of the drone and the camera, and other parameters. The geographic data on the location of the controlled fire are represented in a KML file that Google Earth and other geospatial software can read. We also provide two high-resolution orthophotos of the area of interest, before and after burning.
The data produced by the segmentation models developed in "Burned area semantic segmentation: A novel dataset and evaluation using convolutional networks", comprising outputs in PNG and WKT formats, is also available upon request.
BurnedAreaUAV_dataset_v1.rar
MP4_video (folder)
-- original_prescribed_burn_video.mp4
PNG (folder)
train (folder)
frames (folder)
-- frame_000000.png (raster image)
-- frame_000100.png
-- frame_000200.png
…
msks (folder)
-- mask_000000.png
-- mask_000100.png
-- mask_000200.png
…
test (folder)
frames (folder)
-- frame_020250.png
-- frame_020350.png
-- frame_020450.png
…
msks (folder)
-- mask_020250.png
-- mask_020350.png
-- mask_020450.png
…
JSON (folder)
-- train_valid_json (folder)
-- frame_000000.json (Labelme format)
-- frame_000100.json
-- frame_000200.json
-- frame_000300.json
…
-- test_json (folder)
-- frame_020250.json
-- frame_020350.json
-- frame_020450.json
…
WKT_files (folder)
-- train_valid.wkt (list of masks polygons)
-- test.wkt
UAV photos (metadata)
-- uav_photo1_metadata.JPG
-- uav_photo2_metadata.JPG
High-resolution orthophoto files
-- odm_orthophoto_afterBurning.png
-- odm_orthophoto_beforeBurning.png
Keyhole Markup Language file (area under study polygon)
-- pinhao_cell_precribed_area.kml
Acknowledgements
This dataset results from activities developed in the context of projects partially funded by FCT - Fundação para a Ciência e a Tecnologia, I.P., through projects MIT-EXPL/ACC/0057/2021 and UIDB/04524/2020, and under the Scientific Employment Stimulus - Institutional Call - CEECINST/00051/2018.
The source code is available here.