63 datasets found
  1. Damage Classification Deep Learning Model for Vexcel Imagery- Maui Fires

    • hub.arcgis.com
    Updated Aug 18, 2023
    Cite
    Esri Imagery Virtual Team (2023). Damage Classification Deep Learning Model for Vexcel Imagery- Maui Fires [Dataset]. https://hub.arcgis.com/content/30e3f11be84b418fa4dcb109a1eac6d6
    Dataset updated
    Aug 18, 2023
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    Esri Imagery Virtual Team
    Area covered
    Maui
    Description

    Licensing requirements
    ArcGIS Desktop – ArcGIS Image Analyst extension for ArcGIS Pro
    ArcGIS Enterprise – ArcGIS Image Server with raster analytics configured
    ArcGIS Online – ArcGIS Image for ArcGIS Online

    Using the model
    Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.

    Input
    1. 8-bit, 3-band high-resolution (10 cm) imagery. The model was trained on 10 cm Vexcel imagery.
    2. Building footprints feature class.

    Output
    Feature class containing classified building footprints. A Classname field value of 1 indicates a damaged building; a value of 2 corresponds to an undamaged structure.

    Applicable geographies
    The model was specifically trained and tested over Maui, Hawaii, in response to the Maui fires in August 2023.

    Accuracy metrics
    The model has an average accuracy of 0.96.

    Sample results
    Results of the model can be seen in this dashboard.
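    The Classname convention above (1 = damaged, 2 = undamaged) lends itself to a quick post-processing summary. Below is a minimal plain-Python sketch, assuming the Classname values have already been read out of the output feature class; the records and the `summarize_footprints` helper are illustrative, not part of the dataset.

```python
# Hypothetical post-processing sketch (not part of the dataset): tally the
# model's classified building footprints using the Classname convention
# described above (1 = damaged, 2 = undamaged). The input list stands in for
# Classname values read from the output feature class.
from collections import Counter

CLASS_LABELS = {1: "damaged", 2: "undamaged"}

def summarize_footprints(classnames):
    """Return a {label: count} summary for a sequence of Classname values."""
    counts = Counter(classnames)
    return {label: counts.get(code, 0) for code, label in CLASS_LABELS.items()}

# Six illustrative footprints: two damaged, four undamaged.
print(summarize_footprints([1, 2, 2, 1, 2, 2]))  # {'damaged': 2, 'undamaged': 4}
```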

  2. Introduction to Planetary Image Analysis and Geologic Mapping in ArcGIS Pro

    • catalog.data.gov
    • data.usgs.gov
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Introduction to Planetary Image Analysis and Geologic Mapping in ArcGIS Pro [Dataset]. https://catalog.data.gov/dataset/introduction-to-planetary-image-analysis-and-geologic-mapping-in-arcgis-pro
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    GIS project files and imagery data required to complete the Introduction to Planetary Image Analysis and Geologic Mapping in ArcGIS Pro tutorial. These data cover the area in and around Jezero crater, Mars.

  3. Imagery Viewer (Mature)

    • data-salemva.opendata.arcgis.com
    • noveladata.com
    Updated Jun 26, 2018
    Cite
    esri_en (2018). Imagery Viewer (Mature) [Dataset]. https://data-salemva.opendata.arcgis.com/items/995733183b754cf68a57c020211700cf
    Dataset updated
    Jun 26, 2018
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    esri_en
    Description

    Imagery Viewer is a configurable app template for visualizing and exploring imagery through time and space, and includes tools for navigating through time, recording locations, measurement, and more. A one-image configuration lets users focus on a single imagery layer, while a two-image configuration lets users compare two imagery layers using a swipe tool.

    Imagery Viewer users can do the following:
    • Visualize imagery layers (and non-imagery layers) from the app's web map
    • Explore an imagery layer through time for an area of interest
    • Zoom to bookmarked areas of interest (or bookmark their own)
    • Select specific images from a layer to visualize
    • Annotate imagery using editable feature layers
    • Perform image measurement on imagery layers that have mensuration capabilities
    • Export an imagery layer to the user's local machine, or as a layer in the user's ArcGIS account

    Use Cases
    • A student investigating urban expansion over time
    • A farmer using NAIP imagery to visualize their land and record crop types
    • An image analyst recording the location of an aircraft identified from high-resolution satellite imagery
    • A property appraiser recording notes about newly constructed houses, including calculating building heights in-app

    Supported Devices
    This application is responsively designed to support use in browsers on desktops, mobile phones, and tablets.

    Data Requirements
    Creating an app with this template requires a web map with at least one imagery layer.

    Get Started
    This application can be created in the following ways:
    • Click the Create a Web App button on this page
    • Share a map and choose to Create a Web App
    • On the Content page, click Create - App - From Template
    Click the Download button to access the source code if you want to host the app on your own server and optionally customize it to add features or change styling.

  4. Introduction to Planetary Image Analysis and Geologic Mapping in ArcGIS Pro...

    • gimi9.com
    Cite
    Introduction to Planetary Image Analysis and Geologic Mapping in ArcGIS Pro | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_introduction-to-planetary-image-analysis-and-geologic-mapping-in-arcgis-pro/
    Description

    The metadata in its original format.

  5. Fort McMurray Landsat project package

    • edu.hub.arcgis.com
    Updated Nov 7, 2022
    Cite
    Education and Research (2022). Fort McMurray Landsat project package [Dataset]. https://edu.hub.arcgis.com/content/e93aea60e75b4571b048aba2f5606904
    Dataset updated
    Nov 7, 2022
    Dataset authored and provided by
    Education and Research
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Area covered
    Fort McMurray
    Description

    The ArcGIS system provides access to both imagery and tools for visualizing and analyzing imagery. Imagery collections from the ArcGIS Living Atlas of the World can be viewed through apps such as the Landsat Explorer app, ArcGIS Online Map Viewer, and ArcGIS Pro, while the Spatial Analyst extension and ArcGIS Image Analyst for ArcGIS Pro (more commonly known as the Image Analyst extension) provide raster functions, classification and change detection tools, and other advanced image interpretation and analysis tools. The tutorials in the Working with Imagery in ArcGIS learning path will introduce you to exploring and selecting imagery in ArcGIS web applications, applying indices and raster functions to imagery in ArcGIS Pro, and performing image classification and change detection in ArcGIS Pro.

    This ArcGIS Pro project package contains data for Tutorial 3, Performing Image Classification in ArcGIS Pro, and Tutorial 4, Performing Change Detection in ArcGIS Pro, of the learning path. Click Download to download the .ppkx file, or click Open in ArcGIS Pro and open the .pitemx file to download and open the package.

    Software Used: ArcGIS Pro 2.8. The project package may be opened in 3.x versions.
    File Size: 170 MB
    Date Created: November 7, 2022
    Last Tested: December 5, 2024

  6. Image Mask (Deprecated)

    • noveladata.com
    Updated Jun 27, 2018
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    esri_en (2018). Image Mask (Deprecated) [Dataset]. https://www.noveladata.com/items/59486ebf228f4661aeaecb770dd73de8
    Dataset updated
    Jun 27, 2018
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    esri_en
    Description

    Image Mask is a configurable app template for identifying areas of an image that have changed over time or that meet user-set thresholds for calculated spectral indexes. The template also includes tools for measurement, recording locations, and more. App users can zoom to bookmarked areas of interest (or search for their own), select any of the imagery layers from the associated web map to analyze, use a time slider or dropdown menu to select images, then choose between the Change Detection or Mask tools to produce results.

    Image Mask users can do the following:
    • Zoom to bookmarked areas of interest (or bookmark their own)
    • Select specific images from a layer to visualize (search by date or another attribute)
    • Use the Change Detection tool to compare two images in a layer (see options below)
    • Use the Mask tool to highlight areas that meet a user-set threshold for common spectral indexes (NDVI, SAVI, a burn index, and a water index). For example, highlight all the areas in an image with NDVI values above 0.25 to find vegetation.
    • Annotate imagery using editable feature layers
    • Perform image measurement on imagery layers that have mensuration capabilities
    • Export an imagery layer to the user's local machine, or as a layer in the user's ArcGIS account

    Use Cases
    • A student investigating urban expansion over time using Esri's Multispectral Landsat image service
    • A farmer using NAIP imagery to examine changes in crop health
    • An image analyst recording burn scar extents using satellite imagery
    • An aid worker identifying regions with extreme drought to focus assistance

    Change detection methods
    For each imagery layer, give app users one or more of the following change detection options:
    • Image Brightness (calculates the change in overall brightness)
    • Vegetation Index (NDVI) (requires red and infrared bands)
    • Soil-Adjusted Vegetation Index (SAVI) (requires red and infrared bands)
    • Water Index (requires green and short-wave infrared bands)
    • Burn Index (requires infrared and short-wave infrared bands)
    For each of the indexes, users also have a choice between three modes:
    • Difference Image: calculates increases and decreases for the full extent
    • Difference Mask: users can focus on significant change by setting the minimum increase or decrease to be masked. For example, a user could mask only areas where NDVI increased by at least 0.2.
    • Threshold Mask: the user sets a threshold and magnitude for what is masked as change. The app will only identify change that is above the user-set lower threshold and bigger than the user-set minimum magnitude.

    Supported Devices
    This application is responsively designed to support use in browsers on desktops, mobile phones, and tablets.

    Data Requirements
    Creating an app with this template requires a web map with at least one imagery layer.

    Get Started
    This application can be created in the following ways:
    • Click the Create a Web App button on this page
    • Share a map and choose to Create a Web App
    • On the Content page, click Create - App - From Template
    Click the Download button to access the source code if you want to host the app on your own server and optionally customize it to add features or change styling.
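    The Mask tool's index thresholding can be sketched in plain Python. This is an illustrative re-implementation of the arithmetic only (the app computes indices from imagery layers itself); NDVI here uses the standard (NIR - Red)/(NIR + Red) definition, with nested lists standing in for raster bands.

```python
# Sketch of an NDVI threshold mask as described above: compute NDVI per pixel
# and mask pixels whose value exceeds a user-set threshold (e.g. 0.25 to find
# vegetation). Band values and array shapes are illustrative.

def ndvi(nir, red):
    """Per-pixel NDVI for two equally sized 2D band arrays."""
    return [
        [(n - r) / (n + r) if (n + r) != 0 else 0.0
         for n, r in zip(nir_row, red_row)]
        for nir_row, red_row in zip(nir, red)
    ]

def threshold_mask(index, threshold):
    """True where the index value is above the threshold."""
    return [[v > threshold for v in row] for row in index]

nir = [[0.8, 0.3], [0.6, 0.1]]
red = [[0.2, 0.25], [0.1, 0.09]]
mask = threshold_mask(ndvi(nir, red), 0.25)
print(mask)  # [[True, False], [True, False]]
```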

  7. Damage Classification Deep Learning Model for Airbus Imagery- Maui Fires

    • esri-disasterresponse.hub.arcgis.com
    Updated Aug 17, 2023
    Cite
    Esri Imagery Virtual Team (2023). Damage Classification Deep Learning Model for Airbus Imagery- Maui Fires [Dataset]. https://esri-disasterresponse.hub.arcgis.com/content/98b5f2ac57104432a2bd9f278022c503
    Dataset updated
    Aug 17, 2023
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    Esri Imagery Virtual Team
    Area covered
    Maui
    Description

    Licensing requirements
    ArcGIS Desktop – ArcGIS Image Analyst extension for ArcGIS Pro
    ArcGIS Enterprise – ArcGIS Image Server with raster analytics configured
    ArcGIS Online – ArcGIS Image for ArcGIS Online

    Using the model
    Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.

    Input
    1. 8-bit, 3-band high-resolution (50 cm) imagery. The model was trained on 50 cm Airbus imagery.
    2. Building footprints feature class.

    Output
    Feature class containing classified building footprints. A Classname field value of 1 indicates a damaged building; a value of 2 corresponds to an undamaged structure.

    Applicable geographies
    The model was specifically trained and tested over Maui, Hawaii, in response to the Maui fires in August 2023.

    Accuracy metrics
    The model has an average accuracy of 0.96.

    Sample results
    Results of the model can be seen in this dashboard.

  8. Forest Inventory and Analysis Standing Dead Forest Carbon (Image Service)

    • catalog.data.gov
    • agdatacommons.nal.usda.gov
    • +5 more
    Updated Apr 21, 2025
    Cite
    U.S. Forest Service (2025). Forest Inventory and Analysis Standing Dead Forest Carbon (Image Service) [Dataset]. https://catalog.data.gov/dataset/forest-inventory-and-analysis-standing-dead-forest-carbon-image-service-ade64
    Dataset updated
    Apr 21, 2025
    Dataset provided by
    U.S. Forest Service
    Description

    Through application of a nearest-neighbor imputation approach, mapped estimates of forest carbon density were developed for the contiguous United States using the annual forest inventory conducted by the USDA Forest Service Forest Inventory and Analysis (FIA) program, MODIS satellite imagery, and ancillary geospatial datasets. This data product contains the following 8 raster maps: total forest carbon in all stocks, live tree aboveground forest carbon, live tree belowground forest carbon, forest down dead carbon, forest litter carbon, forest standing dead carbon, forest soil organic carbon, and forest understory carbon. The paper on which these maps are based, along with full metadata and other information, may be found here: https://dx.doi.org/10.2737/RDS-2013-0004

  9. Low-head Dam ArcGIS Deep Learning Image Analysis

    • search.dataone.org
    Updated Apr 15, 2022
    Cite
    Kristina Roller (2022). Low-head Dam ArcGIS Deep Learning Image Analysis [Dataset]. https://search.dataone.org/view/sha256%3Af60d09ceb6984908feab039c6f17e84cb371e849c4f37dff92c1d0662a423d6e
    Dataset updated
    Apr 15, 2022
    Dataset provided by
    Hydroshare
    Authors
    Kristina Roller
    Description
  10. Data from: Comparative Analysis of different Machine Learning Algorithms for...

    • figshare.com
    pptx
    Updated Jul 31, 2024
    Cite
    Baoling Gui (2024). Comparative Analysis of different Machine Learning Algorithms for Urban Footprint Extraction in Diverse Urban Contexts Using High-Resolution Remote Sensing Imagery [Dataset]. http://doi.org/10.6084/m9.figshare.26379301.v2
    Available download formats: pptx
    Dataset updated
    Jul 31, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Baoling Gui
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The data involved in this paper is from https://www.planet.com/explorer/. The resolution is 3 m, and there are 3 main bands (RGB). Since the platform only allows a certain amount of data to be downloaded under an education account, and the data is retained for only one month, we chose 8 major cities for the study, with 2 images per city. We also provide detailed information on the data visualization and classification results of our tests in a PPT file called paper-result, which can be easily reviewed by reviewers. Reviewers can also download the data to verify the applicability of the results based on the coordinates of the data sources provided in this paper.

    The algorithms consist of three main types. The first is based on traditional algorithms, including object-based and pixel-based approaches, in which we tested the generalization ability of four classifiers: Random Forest, Support Vector Machine, Maximum Likelihood, and K-means. In addition, we tested two of the more mainstream deep learning classification algorithms, U-Net and DeepLabV3, both of which can be found and applied in the ArcGIS Pro software. The workflow for the traditional algorithms is described at https://pro.arcgis.com/en/pro-app/latest/help/analysis/image-analyst/the-image-classification-wizard.htm, and the related parameter settings and sample selection rules can be found in detail in the article. The deep learning algorithms are described at https://pro.arcgis.com/en/pro-app/latest/help/analysis/deep-learning/deep-learning-in-arcgis-pro.htm, and the related parameter settings and sample selection rules can likewise be found in the article. Finally, the big model is based on the SAM model; the running process of SAM is from https://github.com/facebookresearch/segment-anything, and Meta also provides an official web-based segmentation platform for testing at https://segment-anything.com/. However, the official website has restrictions on the data format and the scope of processing.

  11. Image Visit (Deprecated)

    • data-salemva.opendata.arcgis.com
    • noveladata.com
    Updated Jun 26, 2018
    Cite
    esri_en (2018). Image Visit (Deprecated) [Dataset]. https://data-salemva.opendata.arcgis.com/items/eacb69e729ee40d5b71c0c6ef0d8980d
    Dataset updated
    Jun 26, 2018
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    esri_en
    Description

    Image Visit is a configurable app template that allows users to quickly review the attributes of a predetermined sequence of locations in imagery. The app optimizes workflows by loading the next image while the user is still viewing the current image, reducing the delay caused by waiting for the next image to be returned from the server.

    Image Visit users can do the following:
    • Navigate through a predetermined sequence of locations in two ways: use features in a 'Visit' layer (an editable hosted feature layer), or use a web map's bookmarks
    • Use an optional 'Notes' layer (a second editable hosted feature layer) to add or edit features associated with the Visit locations
    • If the app uses a Visit layer for navigation, edit an optional 'Status' field to set the status of each Visit location as it's processed ('Complete' or 'Incomplete', for example)
    • View metadata about the Imagery, Visit, and Notes layers in a dialog window (which displays information based on each layer's web map popup settings)
    • Annotate imagery using editable feature layers
    • Perform image measurement on imagery layers that have mensuration capabilities
    • Export an imagery layer to the user's local machine, or as a layer in the user's ArcGIS account

    Use Cases
    • An insurance company checking properties. An insurance company has a set of properties to review after an event such as a hurricane. The app would drive the user to each property and allow the operator to record attributes (the extent of damage, for example).
    • Image analysts checking control points. Organizations that collect aerial photography often have a collection of marked or identifiable control points that they use to check their photographs. The app would drive the user to each of the known points, at a suitable scale, then allow the user to validate the location of the control point in the image.
    • Checking automatically labeled features. In cases where AI is used for object identification, the app would drive the user to identified features to review and correct the classification.

    Supported Devices
    This application is responsively designed to support use in browsers on desktops, mobile phones, and tablets.

    Data Requirements
    Creating an app with this template requires a web map with at least one imagery layer.

    Get Started
    This application can be created in the following ways:
    • Click the Create a Web App button on this page
    • Click the Download button to access the source code. Do this if you want to host the app on your own server and optionally customize it to add features or change styling.
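    The Visit-layer status workflow can be sketched in a few lines of Python. This is a hypothetical in-memory stand-in for the hosted feature layer: the 'Status' field and the 'Complete'/'Incomplete' values follow the description above, while the `review_visits` helper and the record ids are invented for illustration.

```python
# Sketch of stepping through a predetermined sequence of Visit locations and
# recording a Status value for each, as the app template describes. The list
# of dicts stands in for rows of an editable hosted feature layer.

def review_visits(visits, reviewed_ids):
    """Mark each visit Complete if its id was reviewed, else Incomplete."""
    for visit in visits:
        visit["Status"] = "Complete" if visit["id"] in reviewed_ids else "Incomplete"
    return visits

visits = [{"id": 1}, {"id": 2}, {"id": 3}]
print(review_visits(visits, reviewed_ids={1, 3}))
# [{'id': 1, 'Status': 'Complete'}, {'id': 2, 'Status': 'Incomplete'}, {'id': 3, 'Status': 'Complete'}]
```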

  12. Use Deep Learning to Assess Palm Tree Health

    • hub.arcgis.com
    Updated Mar 14, 2019
    Cite
    Esri Tutorials (2019). Use Deep Learning to Assess Palm Tree Health [Dataset]. https://hub.arcgis.com/documents/d50cea3d161542b681333f1bc265029a
    Dataset updated
    Mar 14, 2019
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    Esri Tutorials
    Description

    Coconuts and coconut products are an important commodity in the Tongan economy. Plantations, such as the one in the town of Kolovai, have thousands of trees. Inventorying each of these trees by hand would require lots of time and manpower. Alternatively, tree health and location can be surveyed using remote sensing and deep learning. In this lesson, you'll use the Deep Learning tools in ArcGIS Pro to create training samples and run a deep learning model to identify the trees on the plantation. Then, you'll estimate tree health using a Visible Atmospherically Resistant Index (VARI) calculation to determine which trees may need inspection or maintenance.

    To detect palm trees and calculate vegetation health, you only need ArcGIS Pro with the Image Analyst extension. To publish the palm tree health data as a feature service, you need ArcGIS Online and the Spatial Analyst extension.

    In this lesson you will build skills in these areas:

    • Creating training schema
    • Digitizing training samples
    • Using deep learning tools in ArcGIS Pro
    • Calculating VARI
    • Extracting data to points

    Learn ArcGIS is a hands-on, problem-based learning website using real-world scenarios. Our mission is to encourage critical thinking, and to develop resources that support STEM education.
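    The VARI step in this lesson uses only visible bands, so the arithmetic can be sketched without any GIS software. This assumes the commonly cited formula VARI = (Green - Red) / (Green + Red - Blue); the pixel values below are illustrative, not from the lesson data.

```python
# Sketch of the Visible Atmospherically Resistant Index (VARI) calculation
# used above to estimate tree health. Band values are illustrative 0-1
# reflectances for a single pixel.

def vari(green, red, blue):
    """VARI for one pixel: (Green - Red) / (Green + Red - Blue)."""
    denom = green + red - blue
    if denom == 0:
        return 0.0  # avoid division by zero on degenerate pixels
    return (green - red) / denom

# A healthy (green-dominant) pixel vs. a stressed (red-dominant) pixel:
print(round(vari(0.6, 0.2, 0.1), 3))  # 0.571
print(round(vari(0.3, 0.5, 0.1), 3))  # -0.286
```

Higher VARI values indicate greener, likely healthier vegetation, which is why the lesson uses it to flag trees that may need inspection.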

  13. Solar Panel Detection NZ Model

    • opendata.rcmrd.org
    Updated Feb 9, 2022
    Cite
    National Institute of Water and Atmospheric Research (2022). Solar Panel Detection NZ Model [Dataset]. https://opendata.rcmrd.org/content/75b27dd904d34659bf6021689fa975e4
    Dataset updated
    Feb 9, 2022
    Dataset authored and provided by
    National Institute of Water and Atmospheric Research
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    New Zealand
    Description

    This is a fine-tuned model for New Zealand, derived from a pre-trained model from Esri. It has been trained using LINZ aerial imagery (0.075 m spatial resolution) for Wellington. You can see its output in this app: https://niwa.maps.arcgis.com/home/item.html?id=1ca4ee42a7f44f02a2adcf198bc4b539

    Solar power is environmentally friendly and is being promoted by government agencies and power distribution companies. Government agencies can use solar panel detection to offer incentives such as tax exemptions and credits to residents who have installed solar panels. Policymakers can use it to gauge adoption and frame schemes to spread awareness and promote solar power utilization in areas that lack its use. This information can also serve as an input to solar panel installation and utility companies and help redirect their marketing efforts. Traditional ways of obtaining information on solar panel installation, such as surveys and on-site visits, are time consuming and error-prone. Deep learning models are highly capable of learning complex semantics and can produce superior results. Use this deep learning model to automate the task of solar panel detection, significantly reducing the time and effort required.

    Licensing requirements
    ArcGIS Desktop – ArcGIS Image Analyst extension for ArcGIS Pro, or
    ArcGIS Enterprise – ArcGIS Image Server with Raster Analytics configured, or
    ArcGIS Online – ArcGIS Image for ArcGIS Online

    Using the model
    Follow the Esri guide to using their USA Solar Panel detection model (https://www.arcgis.com/home/item.html?id=c2508d72f2614104bfcfd5ccf1429284). Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.

    Input
    High-resolution (5-15 cm) RGB imagery

    Output
    Feature class containing detected solar panels

    Applicable geographies
    The model is expected to work well in New Zealand.

    Model architecture
    This model uses the MaskRCNN model architecture implemented in the ArcGIS API for Python.

    Accuracy metrics
    This model has an average precision score of 0.924.

    NOTE: Use at your own risk.

    Item Page Created: 2022-02-09 02:24
    Item Page Last Modified: 2025-04-05 16:30
    Owner: NIWA_OpenData

  14. 11.1 Image Processing with ArcGIS

    • hub.arcgis.com
    Updated Mar 4, 2017
    Cite
    Iowa Department of Transportation (2017). 11.1 Image Processing with ArcGIS [Dataset]. https://hub.arcgis.com/documents/94eb7b83c4d2486e9cca3985f5a7987b
    Dataset updated
    Mar 4, 2017
    Dataset authored and provided by
    Iowa Department of Transportation
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Imagery is processed and used for a wide variety of geospatial applications, including geographic context, visualization, and analysis. You may want to apply processing techniques to image data, visually interpret the data, use it as a background to aid interpretation of other data, or use it for analysis. In this course, you will use tools in ArcGIS to perform basic image processing. You will learn how to dynamically modify properties that enhance image display, visualize surface features, and create multiple products.

    After completing this course, you will be able to:
    • Describe common types of image processing used for analysis.
    • Relate the access of imagery to decisions in processing.
    • Apply on-the-fly display techniques to enhance imagery.
    • Use image-processing functions to modify images for analysis.

  15. Multispectral Landsat

    • cacgeoportal.com
    • uneca.africageoportal.com
    • +8 more
    Updated Mar 19, 2015
    Cite
    Esri (2015). Multispectral Landsat [Dataset]. https://www.cacgeoportal.com/datasets/d9b466d6a9e647ce8d1dd5fe12eb434b
    Dataset updated
    Mar 19, 2015
    Dataset authored and provided by
    Esri (http://esri.com/)
    Area covered
    Description

    This layer includes Landsat GLS, Landsat 8, and Landsat 9 imagery for use in visualization and analysis. The layer is time enabled and includes a number of band combinations and indices rendered on demand. The Landsat 8 and 9 imagery includes nine multispectral bands from the Operational Land Imager (OLI) and two bands from the Thermal Infrared Sensor (TIRS). It is updated daily with new imagery sourced directly from the USGS Landsat collection on AWS.

    Geographic Coverage
    Global land surface. Polar regions are available in polar-projected imagery layers: Landsat Arctic Views and Landsat Antarctic Views.

    Temporal Coverage
    This layer is updated daily with new imagery. Working in tandem, Landsat 8 and 9 revisit each point on Earth's land surface every 8 days. Most images collected from January 2015 to present are included, and approximately 5 images for each path/row from 2013 and 2014 are also included. The layer also includes imagery from the Global Land Survey* (circa 2010, 2005, 2000, 1990, 1975).

    Product Level
    The Landsat 8 and 9 imagery in this layer is comprised of Collection 2 Level-1 data. The imagery has Top of Atmosphere (TOA) correction applied, using the radiometric rescaling coefficients provided by the USGS. The TOA reflectance values (ranging 0 – 1 by default) are scaled to a range of 0 – 10,000.

    Image Selection/Filtering
    A number of fields are available for filtering, including Acquisition Date, Estimated Cloud Cover, and Product ID. To isolate and work with specific images, either use the Image Filter to create custom layers or add a Layer Filter to restrict the default layer display to a specified image or group of images. To isolate a specific mission, use the Layer Filter with the dataset_id or SensorName fields.

    Visual Rendering
    The default rendering in this layer is Agriculture (bands 6,5,2) with Dynamic Range Adjustment (DRA); brighter green indicates more vigorous vegetation. The DRA version of each layer enables visualization of the full dynamic range of the images. Rendering (or display) of band combinations and calculated indices is done on the fly from the source images via raster functions. Various pre-defined raster functions can be selected, or custom functions can be created. Pre-defined functions: Natural Color with DRA, Agriculture with DRA, Geology with DRA, Color Infrared with DRA, Bathymetric with DRA, Short-wave Infrared with DRA, Normalized Difference Moisture Index Colorized, NDVI Raw, NDVI Colorized, NBR Raw. 15 meter Landsat imagery layers are also available: Panchromatic and Pansharpened.

    Multispectral Bands

    Band  Description                             Wavelength (µm)  Spatial Resolution (m)
    1     Coastal aerosol                         0.43 - 0.45      30
    2     Blue                                    0.45 - 0.51      30
    3     Green                                   0.53 - 0.59      30
    4     Red                                     0.64 - 0.67      30
    5     Near Infrared (NIR)                     0.85 - 0.88      30
    6     SWIR 1                                  1.57 - 1.65      30
    7     SWIR 2                                  2.11 - 2.29      30
    8     Cirrus (in OLI this is band 9)          1.36 - 1.38      30
    9     QA Band (available with Collection 1)*  NA               30

    *More about the Quality Assessment Band

    TIRS Bands

    Band  Description  Wavelength (µm)  Spatial Resolution (m)
    10    TIRS1        10.60 - 11.19    100* (30)
    11    TIRS2        11.50 - 12.51    100* (30)

    *TIRS bands are acquired at 100 meter resolution but are resampled to 30 meters in the delivered data product.

    Additional Usage Notes
    • Image exports are limited to 4,000 columns x 4,000 rows per request.
    • This dynamic imagery layer can be used in Web Maps and ArcGIS Pro, as well as in web and mobile applications, using the ArcGIS REST APIs.
    • WCS and WMS compatibility means this imagery layer can be consumed as WCS or WMS services.
    • The Landsat Explorer app is another way to access and explore the imagery.

    Data Source
    Landsat imagery is sourced from the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA). Data is hosted in Amazon Web Services as part of their Public Data Sets program. For information, see Landsat 8 and Landsat 9.

    *The Global Land Survey includes images from Landsat 1 through Landsat 7. Band numbers and band combinations differ from those of Landsat 8, but have been mapped to the most appropriate band as in the table above. For more information about the Global Land Survey, visit GLS.
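    A small sketch of working with the scaled TOA values described under Product Level: reflectance in the 0 – 1 range is recovered by dividing the stored pixel value by the 10,000 scale factor. The scale factor comes from the layer description; the sample pixel values are illustrative.

```python
# Sketch of converting the layer's scaled TOA pixel values (0 - 10,000, per
# the Product Level notes above) back to 0 - 1 reflectance. Values are
# illustrative, not from the layer.

SCALE = 10_000  # scale factor stated in the layer description

def to_reflectance(scaled_values):
    """Convert scaled TOA pixel values back to 0-1 reflectance."""
    return [v / SCALE for v in scaled_values]

print(to_reflectance([0, 2500, 10000]))  # [0.0, 0.25, 1.0]
```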

  16. Data from: An ArcGIS Pro workflow to extract vegetation indices from aerial...

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    tiff, txt
    Updated Jul 12, 2024
    Amy Wilber; Joby M.P. Czarnecki; James D. McCurdy (2024). An ArcGIS Pro workflow to extract vegetation indices from aerial imagery of small‐plot turfgrass research [Dataset]. http://doi.org/10.5061/dryad.r4xgxd2dk
    Explore at:
    txt, tiff
    Available download formats
    Dataset updated
    Jul 12, 2024
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Amy Wilber; Joby M.P. Czarnecki; James D. McCurdy
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Collection of multispectral imagery from an aerial sensor is a means to obtain plot-level vegetation index (VI) values; however, post-capture image processing and analysis remain a challenge for small-plot researchers. An ArcGIS Pro workflow of two task items was developed with established routines and commands to extract plot-level VI values (Normalized Difference VI, Ratio VI, and Chlorophyll Index-Red Edge) from multispectral aerial imagery of small-plot turfgrass experiments. Users can access and download task item(s) from the ArcGIS Online platform for use in ArcGIS Pro. The workflow standardizes the processing of aerial imagery to ensure repeatability between sampling dates and across site locations. A guided workflow saves time with assigned commands, ultimately allowing users to obtain a table with plot descriptions and index values within a .csv file for statistical analysis. The workflow was used to analyze aerial imagery from a small-plot turfgrass research study evaluating herbicide effects on St. Augustinegrass [Stenotaphrum secundatum (Walt.) Kuntze] grow-in. To compare methods, index values were extracted from the same aerial imagery by TurfScout, LLC and were obtained by handheld sensor. Index values from the three methods were correlated with visual percentage cover to determine the sensitivity (i.e., the ability to detect differences) of the different methodologies.
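The plot-level extraction this workflow performs amounts to a zonal mean of an index raster over each plot. A minimal NumPy sketch of that step, assuming the plot layout has been rasterized to an integer label grid (the actual workflow operates on plot polygons in ArcGIS Pro and writes a .csv):

```python
import numpy as np

def plot_means(index_raster, plot_labels):
    """Mean index value per plot.

    index_raster: 2-D array of per-pixel index values (e.g. NDVI).
    plot_labels:  2-D integer array of the same shape; 0 = background,
                  1..n = plot IDs (a rasterized plot layout).
    Returns {plot_id: mean_value}.
    """
    means = {}
    for pid in np.unique(plot_labels):
        if pid == 0:
            continue  # skip background / alleys between plots
        means[int(pid)] = float(index_raster[plot_labels == pid].mean())
    return means

# Toy 3x3 NDVI grid covering two plots and a background strip.
ndvi_grid = np.array([[0.8, 0.8, 0.2],
                      [0.8, 0.8, 0.2],
                      [0.1, 0.1, 0.1]])
plots = np.array([[1, 1, 2],
                  [1, 1, 2],
                  [0, 0, 0]])
print(plot_means(ndvi_grid, plots))  # plot 1 ~ 0.8, plot 2 ~ 0.2
```

The resulting per-plot means are what would land in the .csv table for statistical analysis.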

  17. Car Detection - New Zealand

    • sdiinnovation-geoplatform.hub.arcgis.com
    • pacificgeoportal.com
    Updated Oct 6, 2022
    + more versions
    Eagle Technology Group Ltd (2022). Car Detection - New Zealand [Dataset]. https://sdiinnovation-geoplatform.hub.arcgis.com/datasets/eaglegis::car-detection-new-zealand
    Explore at:
    Dataset updated
    Oct 6, 2022
    Dataset authored and provided by
    Eagle Technology Group Ltd
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    This New Zealand car detection Deep Learning Package will detect cars from high resolution imagery. This model is re-trained from the Esri Car Detection - USA Deep Learning Package and is tuned to work better within the New Zealand geography. Model precision also improved, from 0.81 to 0.89. The package is trained to be more aggressive in detecting cars and is able to detect most cars that are fully covered in shade or partially blocked by tree canopy. This deep learning model is used to detect cars in high resolution drone or aerial imagery. Car detection can be used for applications such as traffic management and analysis, parking lot utilization, urban planning, etc. It can also be used as a proxy for deriving economic indicators and estimating retail sales. High resolution aerial and drone imagery can be used for car detection due to its high spatio-temporal coverage.
    Licensing requirements: ArcGIS Desktop – ArcGIS Image Analyst and ArcGIS 3D Analyst extensions for ArcGIS Pro; ArcGIS Online – ArcGIS Image for ArcGIS Online.
    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: Deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.
    Input: High resolution RGB imagery (7.5 centimetre spatial resolution).
    Output: Feature class containing detected cars.
    Applicable geographies: The model is expected to work well with New Zealand localised data.
    Model architecture: This model uses the MaskRCNN model architecture implemented in ArcGIS Pro Arcpy.
    Accuracy metrics: This model has an average precision score of 0.89.
    Sample results: Here are a few results from the model. Post-processing is recommended to filter out false positives, e.g. keeping only detections where confidence >= 0.95, shape_area/shape_length >= 0.5, and class == Car, then regularizing the features. On average, about 3% of detected objects need to be filtered out. To learn how to use this model, see this story.
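The suggested post-processing filter can be sketched as plain Python over exported detection records. The field names ('confidence', 'shape_area', 'shape_length', 'class') mirror the description; the dict-record layout here is an assumption for illustration, not the feature class schema:

```python
def keep_detection(det, min_conf=0.95, min_compactness=0.5):
    """Apply the suggested false-positive filter to one detection record.

    det is a dict with 'confidence', 'shape_area', 'shape_length', and
    'class' keys (as might be exported from the detections feature class).
    """
    if det["class"] != "Car":
        return False
    if det["confidence"] < min_conf:
        return False
    # The area/perimeter ratio penalises long, thin artefacts.
    if det["shape_area"] / det["shape_length"] < min_compactness:
        return False
    return True

detections = [
    {"class": "Car", "confidence": 0.97, "shape_area": 9.0, "shape_length": 12.0},
    {"class": "Car", "confidence": 0.60, "shape_area": 9.0, "shape_length": 12.0},  # low confidence
    {"class": "Car", "confidence": 0.99, "shape_area": 2.0, "shape_length": 20.0},  # thin sliver
]
kept = [d for d in detections if keep_detection(d)]
print(len(kept))  # only the first record passes all three rules
```

The regularization step mentioned in the description would then be applied to the surviving geometries in ArcGIS.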

  18. Data from: Object-Based Image Analysis for Detection of Japanese Knotweed...

    • figshare.com
    pdf
    Updated Dec 20, 2016
    Daniel Jones; Stephen Pike; Malcolm Thomas; Denis Murphy (2016). Object-Based Image Analysis for Detection of Japanese Knotweed s.l. taxa (Polygonaceae) in Wales (UK) [Dataset]. http://doi.org/10.6084/m9.figshare.4483463.v1
    Explore at:
    pdf
    Available download formats
    Dataset updated
    Dec 20, 2016
    Dataset provided by
    figshare
    Authors
    Daniel Jones; Stephen Pike; Malcolm Thomas; Denis Murphy
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    United Kingdom, Wales
    Description

    Japanese Knotweed s.l. taxa are amongst the most aggressive vascular plant Invasive Alien Species (IAS) in the world. These taxa form dense, suppressive monocultures and are persistent, pervasive invaders throughout the more economically developed countries (MEDCs) of the world. The current paper utilises the Object-Based Image Analysis (OBIA) approach of Definiens Imaging Developer software, in combination with very high spatial resolution (VHSR) colour infra-red (CIR) and visible-band (RGB) aerial photography in order to detect Japanese Knotweed s.l. taxa in Wales (UK). An algorithm was created using Definiens in order to detect these taxa, using variables found to effectively distinguish them from landscape and vegetation features. The results of the detection algorithm were accurate, as confirmed by field validation and desk-based studies. Further, these results may be incorporated into Geographical Information Systems (GIS) research as they are readily transferable as vector polygons (shapefiles). The successful detection results developed within the Definiens software should enable greater management and control efficacy. Further to this, the basic principles of the detection process could enable detection of these taxa worldwide, given the (relatively) limited technical requirements necessary to conduct further analyses.

  19. GSMNP

    • hub.arcgis.com
    Updated Jan 12, 2024
    + more versions
    U.S. Fish & Wildlife Service (2024). GSMNP [Dataset]. https://hub.arcgis.com/maps/fws::gsmnp-1
    Explore at:
    Dataset updated
    Jan 12, 2024
    Dataset authored and provided by
    U.S. Fish & Wildlife Service
    Area covered
    Description

    Source Data: The National Agriculture Imagery Program (NAIP) Color Infrared Imagery, captured in 2018.
    Processing Methods:
    1. Downloaded NAIP imagery tiles for all Southern Appalachian sky islands with the spruce forest type present.
    2. Mosaiced the individual imagery tiles by sky island, producing a single, seamless imagery raster dataset for each sky island.
    3. Changed the raster band combination of the mosaiced sky island imagery to visually enhance the spruce forest type relative to the other forest types. Typically, the band combination was Band 2 for Red, Band 3 for Green, and Band 1 for Blue.
    4. Using the ArcGIS Pro Image Analyst extension, performed an image segmentation of the mosaiced sky island imagery. Segmentation groups adjacent pixels with similar multispectral or spatial characteristics into objects that represent partial or complete features on the landscape. Here, it made the imagery more uniform by forest type, especially for the spruce forest type.
    5. Digitized training samples from the segmented mosaiced imagery. Training samples are areas in the imagery containing representative sites of a classification type, used to train the image classification. Adequate training samples were digitized for every classification type, with the spruce forest type included for every sky island.
    6. Classified the segmented mosaiced sky island imagery using a Support Vector Machine (SVM) classifier. SVM provides a powerful, supervised classification method that is less susceptible to noise, correlated bands, and an unbalanced number or size of training sites within each class, and is widely used among researchers. This step produced a classified raster dataset based on the training sample classification scheme.
    7. Reclassified the classified dataset, retaining only the spruce forest type and the shadows class.
    8. Converted the spruce and shadows raster dataset to polygon.
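The train-then-classify SVM step can be illustrated outside ArcGIS with scikit-learn (an assumed stand-in; the workflow itself uses the ArcGIS Pro SVM classifier): per-segment band-mean features are fit against training-sample labels, new segments are predicted, and only the spruce and shadow classes are retained, mirroring the reclassification step:

```python
import numpy as np
from sklearn.svm import SVC

# Toy "segment mean" feature table: rows are segments, columns are the
# three NAIP CIR band means. Class centroids are illustrative values.
rng = np.random.default_rng(0)
spruce = rng.normal([0.20, 0.60, 0.30], 0.02, size=(20, 3))
hardwood = rng.normal([0.50, 0.40, 0.20], 0.02, size=(20, 3))
shadow = rng.normal([0.05, 0.05, 0.05], 0.02, size=(20, 3))

X = np.vstack([spruce, hardwood, shadow])
y = np.array([1] * 20 + [2] * 20 + [3] * 20)  # 1=spruce, 2=hardwood, 3=shadow

# Supervised SVM classification, analogous to the workflow's step 6.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Classify two unseen "segments", then retain only spruce (1) and
# shadow (3), analogous to the reclassification in step 7.
unseen = np.array([[0.21, 0.59, 0.31], [0.49, 0.41, 0.19]])
pred = clf.predict(unseen)
kept = pred[np.isin(pred, [1, 3])]
print(pred, kept)
```

In the actual workflow the predictions form a classified raster, which is then reclassified and converted to polygons.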

  20. Solar Panel Detection - New Zealand

    • digital-earth-pacificcore.hub.arcgis.com
    • pacificgeoportal.com
    • +1more
    Updated Jan 13, 2023
    Eagle Technology Group Ltd (2023). Solar Panel Detection - New Zealand [Dataset]. https://digital-earth-pacificcore.hub.arcgis.com/items/23d46b7e7f7d41abae01885b64834af8
    Explore at:
    Dataset updated
    Jan 13, 2023
    Dataset authored and provided by
    Eagle Technology Group Ltd
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    This New Zealand solar panel detection Deep Learning Package can detect solar panels from high resolution imagery. This model is trained on high resolution imagery from New Zealand. Solar power is environmentally friendly and is being promoted by government agencies and power distribution companies. Government agencies can use solar panel detection to offer incentives such as tax exemptions and credits to residents who have installed solar panels. Policymakers can use it to gauge adoption and frame schemes to spread awareness and promote solar power utilization in areas that lack its use. This information can also serve as an input to solar panel installation and utility companies and help redirect their marketing efforts.
    Traditional ways of obtaining information on solar panel installation, such as surveys and on-site visits, are time consuming and error-prone. Deep learning models are highly capable of learning complex semantics and can produce superior results. Use this deep learning model to automate the task of solar panel detection, significantly reducing the time and effort required.
    Licensing requirements: ArcGIS Desktop – ArcGIS Image Analyst extension for ArcGIS Pro; ArcGIS Online – ArcGIS Image for ArcGIS Online.
    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. When using the Detect Objects Using Deep Learning geoprocessing tool, ticking the Non Maximum Suppression box is recommended; for reference, a Max Overlap Ratio of 0.3 was used for the example images below. Note: Deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.
    Input: High resolution (7.5 cm) RGB imagery.
    Output: Feature class containing detected solar panels.
    Applicable geographies: The model is expected to work well in New Zealand.
    Model architecture: This model uses the MaskRCNN model architecture implemented in ArcGIS API for Python.
    Accuracy metrics: This model has an average precision score of 0.83.
    Sample results: Some results from the model are displayed below. To learn how to use this model, see this story.
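The Non Maximum Suppression option recommended above keeps the highest-confidence box among overlapping detections and drops any box whose overlap with an already-kept box exceeds the max overlap ratio (0.3 in the example). A minimal, framework-free sketch of that greedy rule, using intersection-over-union as the overlap measure:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, max_overlap=0.3):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        # Keep the box only if it does not overlap any kept box too much.
        if all(iou(boxes[i], boxes[j]) <= max_overlap for j in kept):
            kept.append(i)
    return kept

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.95]
print(nms(boxes, scores))  # the 0.8 box overlaps the 0.9 box too much and is dropped
```

The geoprocessing tool applies the same idea internally; this sketch only illustrates what the Max Overlap Ratio parameter controls.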
