Lesson: Collect photos with ArcGIS QuickCapture and explore them using a web app.

When performing any kind of property or infrastructure inspection, photos are necessary for data capture and decision making, while GIS is integral for mapping and analysis. However, it can be difficult to incorporate photos taken from many different angles into GIS. Oriented Imagery, a widget compatible with multiple ArcGIS products, solves this problem by allowing you to manage and visualize imagery taken from any angle or vantage point. Oriented Imagery can also be used in damage assessments, environmental studies, asset inventories, or any GIS workflow that uses photo imagery.

In this lesson, you'll perform a photographic site inspection using GIS. First, you'll create a feature layer to contain images collected in the field. Then, you'll make an ArcGIS QuickCapture project and enable Oriented Imagery so mobile workers can take photos of your site using their mobile devices. After capturing a few photos, you'll display them in an ArcGIS Experience Builder web app so you can explore the results from your computer.

This lesson was last tested on February 22, 2022.

Requirements:
- Publisher or Administrator role in an ArcGIS organization (get a free trial)
- ArcGIS QuickCapture
- ArcGIS Experience Builder
- A smartphone or tablet device with Android 6.0 Marshmallow or later, or iOS 12 or later

Lesson plan:
1. Capture imagery in the field: create an ArcGIS QuickCapture project and enable Oriented Imagery to collect photos of your site (45 minutes).
2. Inspect imagery in a web app: build a web app with ArcGIS Experience Builder to display the captured photos (30 minutes).
Oriented imagery catalog of data collected near IJzerlo, The Netherlands
Oriented Imagery Catalog for QuickCapture Project
This portion of the data release presents the raw aerial imagery collected during the unmanned aerial system (UAS) survey of the intertidal zone at West Whidbey Island, WA, on 2019-06-04. The imagery was acquired using a Department of Interior-owned 3DR Solo quadcopter fitted with a Ricoh GR II digital camera featuring a global shutter. Flights using both a nadir camera orientation and an oblique camera orientation were conducted. For the nadir flights (F04, F05, F06, F07, and F08), the camera was mounted using a fixed mount on the bottom of the UAS and oriented in an approximately nadir orientation. The UAS was flown on pre-programmed autonomous flight lines at an approximate altitude of 70 meters above ground level (AGL), resulting in a nominal ground-sample-distance (GSD) of 1.8 centimeters per pixel. The flight lines were oriented roughly shore-parallel and were spaced to provide approximately 70 percent overlap between images from adjacent lines. For the oblique orientation flights (F03, F09, F10, and F11), the camera was mounted using a fixed mount on the bottom of the UAS and oriented facing forward with a downward tilt. The UAS was flown manually in a sideways-facing orientation with the camera pointed toward the bluff. The camera was triggered at 1 Hz using a built-in intervalometer. After acquisition, the images were renamed to include flight number and acquisition time in the file name. The coordinates of the approximate image acquisition location were added ('geotagged') to the image metadata (EXIF) using the telemetry log from the UAS onboard single-frequency autonomous GPS. The image EXIF were also updated to include additional information related to the acquisition. Although the images were recorded in both JPG and camera raw (Adobe DNG) formats, only the JPG images are provided in this data release. The data release includes a total of 3,336 JPG images. 
Images from takeoff and landing sequences were not used for processing and have been omitted from the data release. The images from each flight are provided in a zip file named with the flight number.
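The reported altitude-to-GSD figures follow the standard pinhole-camera relationship (GSD = altitude × pixel pitch / focal length). As a rough cross-check, here is a minimal sketch assuming approximate Ricoh GR II specifications (18.3 mm focal length, ~4.8 µm pixel pitch); these camera values are assumptions, not stated in the metadata:

```python
def ground_sample_distance(altitude_m, pixel_pitch_m=4.8e-6, focal_length_m=0.0183):
    """Approximate GSD in meters per pixel for a nadir-pointing camera.

    Defaults are approximate Ricoh GR II values (assumed here, not taken
    from the source metadata): ~4.8 micron pixel pitch, 18.3 mm focal length.
    """
    return altitude_m * pixel_pitch_m / focal_length_m

# 70 m AGL matches the ~1.8 cm/pixel reported for the nadir flights
print(round(ground_sample_distance(70) * 100, 1))  # 1.8 (cm/pixel)
print(round(ground_sample_distance(35) * 100, 1))  # 0.9 (cm/pixel)
```

The agreement with the reported 1.8 cm (70 m AGL) and 0.9 cm (35 m AGL) values suggests the stated GSDs are nominal values derived this way.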
Coast Train is a library of images of coastal environments, annotations, and corresponding thematic label masks (or 'label images') collated for the purposes of training and evaluating machine learning (ML), deep learning, and other models for image segmentation. It includes image sets from geospatial satellite, aerial, and UAV imagery and orthomosaics, as well as from non-geospatial oblique and nadir imagery. Images cover a diverse range of coastal environments from the U.S. Pacific, Gulf of Mexico, Atlantic, and Great Lakes coastlines, consisting of time series of high-resolution (≤1 m) orthomosaics and satellite image tiles (10–30 m). Each image, image annotation, and labelled image is available as a single zipped NPZ file. NPZ files follow the naming convention {datasource}{numberofclasses}{threedigitdatasetversion}.zip, where {datasource} is the source of the original images (for example, NAIP, Landsat 8, Sentinel 2), {numberofclasses} is the number of classes used to annotate the images, and {threedigitdatasetversion} is the three-digit code corresponding to the dataset version (in other words, 001 is version 1). Each zipped folder contains a collection of NPZ-format files, each of which corresponds to an individual image. An individual NPZ file is named after the image that it represents and contains (1) a CSV file with detail information for every image in the zip folder and (2) a collection of the following NPY files: orig_image.npy (original input image, unedited), image.npy (original input image after color balancing and normalization), classes.npy (list of classes annotated and present in the labelled image), doodles.npy (integer image of all image annotations), color_doodles.npy (color image of doodles.npy), label.npy (labelled image created from the classes present in the annotations), and settings.npy (annotation and machine learning settings used to generate the labelled image from annotations).
All NPZ files can be extracted using the utilities available in Doodler (Buscombe, 2022). A merged CSV file containing detail information on the complete imagery collection is available at the top level of this data release, details of which are available in the Entity and Attribute section of this metadata file.
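An NPZ archive of this kind can also be inspected directly with NumPy. The sketch below builds a small in-memory stand-in (with hypothetical array names and contents, not a real Coast Train file) and lists its members; an actual file on disk would be opened the same way with `np.load(path)`:

```python
import io
import numpy as np

# Build a tiny stand-in archive in memory; the array names here echo the
# NPY files listed in the description but are illustrative only.
buf = io.BytesIO()
np.savez(buf,
         image=np.zeros((8, 8, 3), dtype=np.uint8),   # normalized input image
         label=np.zeros((8, 8), dtype=np.uint8),      # per-pixel class labels
         classes=np.array([0, 1]))                    # class ids present
buf.seek(0)

npz = np.load(buf)
for key in npz.files:
    print(key, npz[key].shape, npz[key].dtype)
```

`np.load` on an NPZ returns a lazy archive object, so even large label archives can be inspected without reading every array into memory.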
This web map presents a vector basemap of OpenStreetMap (OSM) data hosted by Esri. Esri created this vector tile basemap from the Daylight map distribution of OSM data, which is supported by Facebook and supplemented with additional data from Microsoft. It provides a reference layer featuring map labels, boundary lines, and roads and includes imagery. The OSM Daylight map will be updated every month with the latest version of OSM Daylight data. OpenStreetMap is an open collaborative project to create a free editable map of the world. Volunteers gather location data using GPS, local knowledge, and other free sources of information and upload it. The resulting free map can be viewed and downloaded from the OpenStreetMap site: www.OpenStreetMap.org. Esri is a supporter of the OSM project and is excited to make this enhanced vector basemap available to the ArcGIS user and developer communities.
This portion of the data release presents the raw aerial imagery collected during the uncrewed aerial system (UAS) survey conducted on the ocean beaches adjacent to the Columbia River Mouth at the Oregon-Washington border in July 2021. The imagery was acquired using a Department of Interior-owned 3DR Solo quadcopter fitted with a Ricoh GR II digital camera featuring a global shutter. The camera was mounted using a fixed mount on the bottom of the UAS and oriented in an approximately nadir orientation. The Fort Stevens State Park survey was conducted under Oregon Parks and Recreation Department Scientific Research Permit #235. Ten flights were conducted at Fort Stevens State Park on 22 July 2021, between 14:00 and 16:45 UTC (7:00 and 9:45 PDT). A total of 3,002 aerial images from these flights are presented in this data release; the images from the third flight (F03) were not utilized for data processing and are not included in the data release. The Benson Beach survey at Cape Disappointment State Park was conducted under Washington State Parks and Recreation Commission Scientific Research Permit #170603. Thirteen flights were conducted at Benson Beach on 23 July 2021, between 14:30 and 16:40 UTC (7:30 and 9:40 PDT). A total of 3,648 aerial images from these flights are presented in this data release; the images from the second to the fifth flight (F02 through F05) and the seventh flight (F07) were not utilized for data processing and are not included in the data release. All flights were conducted at an approximate altitude of 120 meters above ground level resulting in a nominal ground-sample-distance (GSD) of 3.2 centimeters per pixel. Before each flight, the camera’s digital ISO, aperture, and shutter speed were adjusted for ambient light conditions. For all flights the camera was triggered at 1 Hz using a built-in intervalometer. After acquisition, the images were renamed to include flight number and acquisition time in the file name. 
The coordinates of the approximate image acquisition locations were added ('geotagged') to the image metadata (EXIF) using the telemetry log from the UAS onboard single-frequency autonomous GPS. The image EXIF were also updated to include additional information related to the acquisition. The data release includes a total of 6,650 JPG images. Images from takeoff and landing sequences were not used for processing and have been omitted from the data release. To facilitate bulk download, the images from each flight are provided in a zip file named with the flight number. In addition to the provided zip files, the images are also available for browsing and individual download on the USGS Coastal and Marine Hazards and Resources Program Imagery Data System at the following URL: https://cmgds.marine.usgs.gov/idsviewer/data_release/10.5066-P9BVTVAW.
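Geotags like these are stored in EXIF as degree/minute/second rationals plus a hemisphere reference character. A minimal sketch of converting that representation to the signed decimal degrees most GIS tools expect (the coordinate values shown are illustrative, not taken from the imagery):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS degrees/minutes/seconds to signed decimal degrees.

    `ref` is the EXIF GPSLatitudeRef/GPSLongitudeRef character:
    'N', 'S', 'E', or 'W'. Southern and western hemispheres are negative.
    """
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    return -decimal if ref in ("S", "W") else decimal

# Illustrative point near the Columbia River mouth (hypothetical values)
print(round(dms_to_decimal(46, 14, 24.0, "N"), 4))   # 46.24
print(round(dms_to_decimal(124, 3, 36.0, "W"), 4))   # -124.06
```

Libraries such as exiftool or piexif can extract the raw GPS rationals from the image files before applying a conversion like this.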
This map features the World Imagery map, focused on the Caribbean region. World Imagery provides one meter or better satellite and aerial imagery in many parts of the world and lower resolution satellite imagery worldwide. The map includes 15m TerraColor imagery at small and mid-scales (~1:591M down to ~1:72k) and 2.5m SPOT Imagery (~1:288k to ~1:72k) for the world. DigitalGlobe sub-meter imagery is featured in many parts of the world, including Africa. Sub-meter Pléiades imagery is available in select urban areas. Additionally, imagery at different resolutions has been contributed by the GIS User Community.
For more information on this map, view the World Imagery item description.
Metadata: This service is metadata-enabled. With the Identify tool in ArcMap or the World Imagery with Metadata web map, you can see the resolution, collection date, and source of the imagery at the location you click. Values of "99999" mean that metadata is not available for that field. The metadata applies only to the best available imagery at that location. You may need to zoom in to view the best available imagery.
Feedback: Have you ever seen a problem in the Esri World Imagery Map that you wanted to see fixed? You can use the Imagery Map Feedback web map to provide feedback on issues or errors that you see. The feedback will be reviewed by the ArcGIS Online team and considered for one of our updates.
This portion of the data release presents the raw aerial imagery collected during an Unmanned Aerial System (UAS) survey of the intertidal zone at Puget Creek and Dickman Mill Park, Tacoma, WA, on 2019-06-03. The imagery was acquired using a Department of Interior-owned 3DR Solo quadcopter fitted with a Ricoh GR II digital camera featuring a global shutter. The camera was mounted using a fixed mount on the bottom of the UAS and oriented in an approximately nadir orientation. The UAS was flown on pre-programmed autonomous flight lines at an approximate altitude of 50 meters above ground level (AGL), resulting in a nominal ground-sample-distance (GSD) of 1.3 centimeters per pixel. The flight lines were oriented roughly shore-parallel and were spaced to provide approximately 70 percent overlap between images from adjacent lines. The camera was triggered at 1 Hz using a built-in intervalometer. Flight F01 covered the Puget Creek area; flight F02 covered the Dickman Mill Park area. After acquisition, the images were renamed to include the flight number and acquisition time in the file name. The coordinates of the approximate image acquisition locations were added ('geotagged') to the image metadata (EXIF) using the telemetry log from the UAS onboard single-frequency autonomous GPS. The image EXIF were also updated to include additional information related to the acquisition. Although the images were recorded in both JPG and camera raw (Adobe DNG) formats, only the JPG images are provided in this data release. The data release includes a total of 1,171 JPG images. Images from takeoff and landing sequences were not used for processing and have been omitted from the data release. The images from each flight are provided in a zip file named with the flight number.
Oriented aerial images are digital aerial images supplied with all parameters necessary for stereoscopic evaluation, i.e. interior and exterior orientation data (the horizontal position and height coordinates of the image, as well as the rotation angles of the camera at the moment of exposure). The spatial location of the digital aerial images can be determined on the basis of image-center overviews.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Band correlation matrix of study area.
This portion of the data release presents the raw aerial imagery collected during the Unmanned Aerial System (UAS) survey of the Liberty Island Conservation Bank Wildlands restoration site in the Sacramento-San Joaquin Delta on 2018-10-23. The imagery was acquired using two Department of Interior owned 3DR Solo quadcopters fitted with Ricoh GR II digital cameras featuring global shutters. The cameras were mounted using a fixed mount on the bottom of the UAS and oriented in a roughly nadir orientation. The UAS were flown on pre-programmed autonomous flight lines at an approximate altitude of 120 meters above-ground-level, resulting in a nominal ground-sample-distance (GSD) of 3.2 centimeters per-pixel. The flight lines were oriented roughly east-west and were spaced to provide approximately 66 percent overlap between images from adjacent lines. The cameras were triggered at 1 Hz using a built in intervalometer. After acquisition, the images were renamed to include flight number and acquisition time in the file name. The coordinates of the approximate image acquisition location were added ('geotagged') to the image metadata (EXIF) using the telemetry log from the UAS onboard single-frequency autonomous GPS. The image EXIF were also updated to include additional information related to the acquisition. Although the images were recorded in both JPG and camera raw (Adobe DNG) formats, only the JPG images are provided in this data release. The data release includes a total of 3,567 JPG images. Images from takeoff and landing sequences were not used for processing, and have been omitted from the data release. The images from each flight are provided in a zip file named with the flight number.
The OIF of study area image.
Quantitative index of enhancement results.
This portion of the data release presents the raw aerial imagery collected during an Unmanned Aerial System (UAS) survey of the intertidal zone at Post Point, Bellingham Bay, WA, on 2019-06-06. The imagery was acquired using a Department of Interior-owned 3DR Solo quadcopter fitted with a Ricoh GR II digital camera featuring a global shutter. The camera was mounted using a fixed mount on the bottom of the UAS and oriented in an approximately nadir orientation. The UAS was flown on pre-programmed autonomous flight lines which were oriented roughly shore-parallel and were spaced to provide approximately 70 percent overlap between images from adjacent lines. Three flights (F01, F02, F03) covering the survey area were conducted at an approximate altitude of 70 meters above ground level (AGL), resulting in a nominal ground-sample-distance (GSD) of 1.8 centimeters per pixel. Two additional flights (F04, which was aborted early and not included in this data release, and F05) were conducted over a smaller area within the main survey area at an approximate altitude of 35 meters AGL, resulting in a nominal GSD of 0.9 centimeters per pixel. The camera was triggered at 1 Hz using a built-in intervalometer. After acquisition, the images were renamed to include the flight number and acquisition time in the file name. The coordinates of the approximate image acquisition locations were added ('geotagged') to the image metadata (EXIF) using the telemetry log from the UAS onboard single-frequency autonomous GPS. The image EXIF were also updated to include additional information related to the acquisition. Although the images were recorded in both JPG and camera raw (Adobe DNG) formats, only the JPG images are provided in this data release. The data release includes a total of 1,662 JPG images. Images from takeoff and landing sequences were not used for processing and have been omitted from the data release. 
The images from each flight are provided in a zip file named with the flight number.
This data release contains six zipped raster files of thermal infrared (TIR) images of the South Loup River, North Loup River, and Dismal River named as LowerSouthLoup_AerialTIRImage_1m_2015.zip, MiddleSouthLoup_AerialTIRImage_50cm_2015.zip, UpperSouthLoup_AerialTIRImage_30cm_2015.zip, LowerDismal_AerialTIRImage_1m_2016.zip, UpperDismal_AerialTIRImage_50cm_2015.zip, and NorthLoup_AerialTIRImage_1m_2016.zip. This data release also includes a Reconn_Temperature_Gradient_X_sections.zip file which contains three ASCII comma separated values files with stream reconnaissance data which include stream temperature, streambed temperature, and vertical hydraulic gradient. This dataset also includes a Focused_discharge_points.zip file which contains five point shapefiles of interpreted focused groundwater discharge (groundwater discharge as springs) locations along the South Loup, North Fork of the South Loup, North Loup, and Dismal Rivers in Nebraska. The last file included in this data release is Centerline.zip which contains one line shapefile of the stream reach centerlines. Further information about each of these datasets can be found in their associated metadata files.
Description:
This dataset consists of meticulously annotated images of tire side profiles, specifically designed for image segmentation tasks. Each tire has been manually labeled to ensure high accuracy, making this dataset ideal for training machine learning models focused on tire detection, classification, or related automotive applications.
The annotations are provided in the YOLO v5 format, leveraging the PyTorch framework for deep learning applications. The dataset offers a robust foundation for researchers and developers working on object detection, autonomous vehicles, quality control, or any project requiring precise tire identification from images.
Data Collection and Labeling Process:
Manual Labeling: Every tire in the dataset has been individually labeled to guarantee that the annotations are highly precise, significantly reducing the margin of error in model training.
Annotation Format: YOLO v5 PyTorch format, a highly efficient and widely used format for real-time object detection systems.
Pre-processing Applied:
Auto-orientation: Pixel data has been automatically oriented, and EXIF orientation metadata has been stripped to ensure uniformity across all images, eliminating issues related to image orientation during processing.
Resizing: All images have been resized to 416×416 pixels by stretching, to maintain compatibility with common object detection frameworks like YOLO. This standardizes the input size, though stretching does not preserve the original aspect ratio.
Applications:
Automotive Industry: This dataset is suitable for automotive-focused AI models, including tire quality assessment, tread pattern recognition, and autonomous vehicle systems.
Surveillance and Security: Use cases in monitoring systems where identifying tires is crucial for vehicle recognition in parking lots or traffic management systems.
Manufacturing and Quality Control: Can be used in tire manufacturing processes to automate defect detection and classification.
Dataset Composition:
Number of Images: [Add specific number]
File Format: JPEG/PNG
Annotation Format: YOLO v5 PyTorch
Image Size: 416×416 (standardized across all images)
This dataset is sourced from Kaggle.
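For readers wiring these annotations into a pipeline: a YOLO-format label line stores a class id plus box center and size, all normalized to [0, 1] relative to the image. A minimal sketch of converting one such line to pixel coordinates for the 416×416 images (the function name and sample line are illustrative, not part of the dataset):

```python
def yolo_to_pixels(line, img_w=416, img_h=416):
    """Parse one YOLO-format label line into (class_id, x_min, y_min, x_max, y_max).

    YOLO lines hold: class id, center x, center y, width, height,
    with the last four normalized to [0, 1] relative to the image size.
    """
    cls, xc, yc, w, h = line.split()
    cls = int(cls)
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    x_min = (xc - w / 2) * img_w
    y_min = (yc - h / 2) * img_h
    x_max = (xc + w / 2) * img_w
    y_max = (yc + h / 2) * img_h
    return cls, x_min, y_min, x_max, y_max

# A tire centered in a 416x416 image, covering half its width and height
print(yolo_to_pixels("0 0.5 0.5 0.5 0.5"))  # (0, 104.0, 104.0, 312.0, 312.0)
```

Because the images were resized by uniform stretching, the normalized coordinates remain valid without any letterbox-offset correction.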
Map Template for Oriented Imagery.
The automated recognition of different vehicle classes and their orientation on aerial images is an important task in the field of traffic research and also finds applications in disaster management, among other things. For the further development of corresponding algorithms that deliver reliable results not only under laboratory conditions but also in real scenarios, training data sets that are as extensive and versatile as possible play a decisive role. For this purpose, we present our dataset EAGLE (oriEnted vehicle detection using Aerial imaGery in real-worLd scEnarios).
The EAGLE dataset supports the detection of vehicles of different classes, including vehicle orientation, in aerial images. It contains high-resolution aerial images covering different real-world situations with different acquisition sensors, angles and times, flight altitudes, resolutions (5–45 cm ground pixel size), weather and lighting conditions, as well as urban and rural acquisition regions, acquired between 2006 and 2019. EAGLE contains 215,986 annotated vehicles on 318 aerial images, covering small vehicles (cars, vans, transporters, SUVs, ambulances, police vehicles) and large vehicles (trucks, large trucks, minibuses, buses, fire engines, construction vehicles, trailers), each annotated with an oriented bounding box defined by four points. The annotation contains the coordinates of all four vehicle corners, as well as an orientation angle between 0° and 360° indicating the direction of the vehicle tip. In addition, for each example, the visibility (fully/partially/weakly visible) and the detectability of the vehicle's orientation (clear/unclear) are indicated.
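One way a four-corner oriented box can be related to a heading angle is to take the direction from the box center to the midpoint of the front edge. The sketch below assumes a corner ordering in which the first two points are the vehicle's front corners; EAGLE's actual corner convention should be checked against the dataset documentation before reuse:

```python
import math

def vehicle_heading(corners):
    """Estimate a 0-360 degree heading from an oriented bounding box.

    `corners` are four (x, y) points in image coordinates (y axis down).
    ASSUMPTION: the first two points are the vehicle's front corners;
    the real EAGLE ordering may differ.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = corners
    cx, cy = (x1 + x2 + x3 + x4) / 4.0, (y1 + y2 + y3 + y4) / 4.0
    fx, fy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    # Angle of the center-to-front vector; image y points down, so negate
    # dy to obtain a conventional counterclockwise angle, then wrap to 0-360.
    angle = math.degrees(math.atan2(-(fy - cy), fx - cx))
    return angle % 360.0

# Axis-aligned box whose front edge is its right side -> heading 0 degrees
print(vehicle_heading([(10, 0), (10, 4), (0, 4), (0, 0)]))  # 0.0
```

Keeping the angle explicit alongside the four corners, as EAGLE does, avoids the ambiguity of recovering orientation from box geometry alone (a rectangle by itself cannot distinguish front from rear).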
The extraction accuracy corresponding to its optimal segmentation parameters.