Land cover describes the surface of the earth. Land-cover maps are useful in urban planning, resource management, change detection, agriculture, and a variety of other applications that require information about the earth's surface. Land-cover classification is a complex exercise and is difficult to capture using traditional means. Deep learning models are highly capable of learning these complex semantics and can produce superior results. There are a few public land-cover datasets, but their spatial and temporal coverage may not always meet the user's requirements. It is also difficult to create datasets for a specific time, as doing so requires expertise and time. Use this deep learning model to automate the manual process and reduce the required time and effort significantly.

Using the model
Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Fine-tuning the model
This model can be fine-tuned using the Train Deep Learning Model tool. Follow the guide to fine-tune this model.

Input
8-bit, 3-band very high-resolution (10 cm) imagery.

Output
Classified raster with the same 8 classes as the LA County land-cover dataset.

Applicable geographies
The model is expected to work well in the United States and will produce the best results in the urban areas of California.

Model architecture
This model uses the UNet model architecture implemented in ArcGIS API for Python.

Accuracy metrics
This model has an overall accuracy of 84.8%.
The table below summarizes the precision, recall, and F1-score of the model on the validation dataset:

Class            Precision  Recall    F1 Score
Tree Canopy      0.804389   0.846152  0.824742
Grass/Shrubs     0.719993   0.627278  0.670445
Bare Soil        0.892700   0.909958  0.901246
Water            0.980885   0.987499  0.984181
Buildings        0.922202   0.945032  0.933478
Roads/Railroads  0.869637   0.862921  0.866266
Other Paved      0.811465   0.811961  0.811713
Tall Shrubs      0.707674   0.638274  0.671185

Training data
This model has been trained on a very high-resolution land-cover dataset produced by LA County.

Limitations
Since the model is trained on imagery of urban areas of LA County, it will work best in urban areas of California or similar geographies. The model is trained on a limited set of classes and may misclassify other types of LULC classes.

Sample results
Here are a few results from the model.
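As a sanity check on the metrics above, the F1 score is the harmonic mean of precision and recall. A minimal sketch reproducing the Tree Canopy row:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Tree Canopy row from the validation table above
p, r = 0.804389, 0.846152
print(round(f1_score(p, r), 6))  # agrees with the tabulated 0.824742 to ~1e-5
```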
CC0 1.0 Universal Public Domain Dedication
https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Metadata: NOAA GOES-R Series Advanced Baseline Imager (ABI) Level 1b Radiances. More information about this imagery can be found here.

This satellite imagery combines data from the NOAA GOES East and West satellites and the JMA Himawari satellite, providing full coverage of weather events for most of the world, from the west coast of Africa west to the east coast of India. The tile service updates to the most recent image every 10 minutes at 1.5 km per pixel resolution.

The infrared (IR) band detects radiation emitted by the Earth's surface, atmosphere, and clouds in the "infrared window" portion of the spectrum. The radiation has a wavelength near 10.3 micrometers, and the term "window" means that it passes through the atmosphere with relatively little absorption by gases such as water vapor. It is useful for estimating the emitting temperature of the Earth's surface and cloud tops. A major advantage of the IR band is that it can sense energy at night, so this imagery is available 24 hours a day.

The Advanced Baseline Imager (ABI) instrument samples the radiance of the Earth in sixteen spectral bands using several arrays of detectors in the instrument's focal plane. Single reflective band ABI Level 1b Radiance Products (channels 1-6, with approximate center wavelengths of 0.47, 0.64, 0.865, 1.378, 1.61, and 2.25 microns, respectively) are digital maps of outgoing radiance values at the top of the atmosphere for visible and near-infrared bands. Single emissive band ABI L1b Radiance Products (channels 7-16, with approximate center wavelengths of 3.9, 6.185, 6.95, 7.34, 8.5, 9.61, 10.35, 11.2, 12.3, and 13.3 microns, respectively) are digital maps of outgoing radiance values at the top of the atmosphere for IR bands. Detector samples are compressed, packetized, and downlinked to the ground station as Level 0 data for conversion to calibrated, geolocated pixels (Level 1b Radiance data).
The detector samples are decompressed, radiometrically corrected, navigated, and resampled onto an invariant output grid, referred to as the ABI fixed grid.

The McIDAS merge technique and color mapping are provided by the Cooperative Institute for Meteorological Satellite Studies (Space Science and Engineering Center, University of Wisconsin-Madison), using satellite data from SSEC Satellite Data Services and the McIDAS visualization software.
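The channel/wavelength listing above lends itself to a simple lookup. A minimal sketch (the table values are taken from the text; the dictionary and function names are illustrative, not part of any NOAA API):

```python
# ABI channel center wavelengths in microns, as listed in the text above.
# Channels 1-6 are reflective (visible/near-IR); channels 7-16 are emissive (IR).
ABI_CENTER_WAVELENGTHS = {
    1: 0.47, 2: 0.64, 3: 0.865, 4: 1.378, 5: 1.61, 6: 2.25,
    7: 3.9, 8: 6.185, 9: 6.95, 10: 7.34, 11: 8.5,
    12: 9.61, 13: 10.35, 14: 11.2, 15: 12.3, 16: 13.3,
}

def band_kind(channel: int) -> str:
    """Classify an ABI channel as reflective or emissive."""
    if channel not in ABI_CENTER_WAVELENGTHS:
        raise ValueError(f"ABI has 16 channels; got {channel}")
    return "reflective" if channel <= 6 else "emissive"

# Channel 13 (10.35 microns) sits near the ~10.3 micron "infrared window"
print(band_kind(13), ABI_CENTER_WAVELENGTHS[13])
```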
https://www.datainsightsmarket.com/privacy-policy
The professional map services market is experiencing robust growth, driven by increasing demand across diverse sectors. The market, estimated at $50 billion in 2025, is projected to maintain a healthy Compound Annual Growth Rate (CAGR) of 10% through 2033, reaching approximately $107 billion by the end of the forecast period. This expansion is fueled by several key factors, including the proliferation of location-based services (LBS), the rise of autonomous vehicles, and the growing need for precise mapping solutions in urban planning, logistics, and infrastructure development. Furthermore, advancements in technologies like GIS (Geographic Information Systems), satellite imagery, and AI-powered mapping tools are significantly enhancing map accuracy, detail, and functionality, further stimulating market demand. Major players like Google, TomTom, Esri, and Mapbox are continuously innovating, pushing the boundaries of map creation and application, and fostering competition that ultimately benefits consumers and businesses.

The market segmentation reveals significant opportunities within specialized applications. High-resolution imagery and 3D mapping solutions are witnessing particularly strong growth, driven by increasing investments in infrastructure projects and the adoption of smart city initiatives. While data security and privacy concerns pose potential restraints, the industry is actively addressing these challenges through the development of secure mapping platforms and data encryption techniques. Regional growth is expected to be uneven, with North America and Europe maintaining significant market shares due to their advanced technological infrastructure and high adoption rates. However, emerging economies in Asia-Pacific are showing promising growth potential, fuelled by rapid urbanization and expanding digital infrastructure.
Attribution 4.0 (CC BY 4.0)
https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
1939 Digital Aerial Photography
Contributor: Rhode Island Department of Administration, Statewide Planning Program

This map service features scanned, georeferenced historical aerial photography collected in May of 1939. The scanned images are panchromatic (black and white) and have a spatial resolution of approximately 4 feet. While the original prints are archived by the Rhode Island Statewide Planning Program, the scanned images are available from the Rhode Island Geographic Information System (RIGIS) consortium (https://rigis.org).

Users can download these images in two ways: visit this page and download them all, or use this imagery Download App to choose the ones you need. The images download in zipped MrSID format.

Web services available:
ArcGIS Online hosted tile layer
ArcGIS map service (REST endpoint)
Metadata
This series of products from MODIS represents the only daily global composites available and is suitable for use at global and regional levels. This True Color band composition (Bands 1, 4, 3 | Red, Green, Blue) most accurately shows how we see the earth's surface with our own eyes. It is a natural-looking image that is useful for land surface, oceanic, and atmospheric analysis. This map shows the 250-meter corrected reflectance product from both satellites that carry a MODIS, Aqua and Terra. Although the resolution is coarser than that of other satellites, it allows for a global collection of imagery on a daily basis, which is made available in near real-time. In contrast, Landsat needs 16 days to collect a global composite. Besides the maximum resolution difference, the surface and corrected reflectance products also differ in the algorithm used for atmospheric correction.

NASA Global Imagery Browse Services (GIBS)
This image layer provides access to a subset of the NASA Global Imagery Browse Services (GIBS), which are a set of standard services to deliver global, full-resolution satellite imagery. The GIBS goal is to enable interactive exploration of NASA's Earth imagery for a broad range of users. The purpose of this image layer, and the other GIBS image services hosted by Esri, is to enable convenient access to this beautiful and useful satellite imagery for users of ArcGIS. The source data used by this image layer is a finished image; it is not recommended for quantitative analysis.

Several full-resolution, global imagery products are built and served by GIBS in near real-time (usually within 3.5 hours of observation). These products are built from NASA Earth Observing System satellite data, courtesy of LANCE data providers and other sources. The MODIS instrument aboard the Terra and Aqua satellites, the AIRS instrument aboard Aqua, and the OMI instrument aboard Aura are used as sources.
Several of the MODIS global products are made available through this Esri-hosted service. This image layer provides direct access to one of the GIBS image products. The Esri servers do not store any of this data. Instead, for each received data request, multiple image tiles are retrieved from GIBS, then processed and assembled into the proper image for the response. This processing takes place on the fly, for each and every request, which ensures that any update to the GIBS data is immediately available in the Esri mosaic service.

Note on time: The image service supporting this map is time enabled, but time has been disabled on this image layer so that the most recent imagery displays by default. If you would like to view imagery over time, you can update the layer properties to enable time animation and configure time settings. The results can be saved in a web map to use later or share with others.
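Tiled services like this one address imagery by zoom/column/row. As an illustration of the mechanics (this is the standard Web Mercator "slippy map" convention, not the GIBS API itself), a minimal sketch of converting a longitude/latitude to the tile indices a client would request:

```python
import math

def lonlat_to_tile(lon_deg: float, lat_deg: float, zoom: int) -> tuple[int, int]:
    """Convert a WGS84 lon/lat to Web Mercator tile (x, y) at a zoom level."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Tile covering Greenwich (lon 0, lat ~51.5) at zoom level 3
print(lonlat_to_tile(0.0, 51.48, 3))
```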
This deep learning model is used to detect palm trees in high-resolution drone or aerial imagery. Palm tree detection can be used for creating an inventory of palm trees, monitoring their health and location, predicting the yield of palm oil, and more. High-resolution aerial and drone imagery is well suited to palm tree detection due to its high spatio-temporal coverage.

Using the model
Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Fine-tuning the model
This model can be fine-tuned using the Train Deep Learning Model tool. Follow the guide to fine-tune this model.

Input
High-resolution RGB imagery (5-15 centimeter spatial resolution).

Output
Feature class containing detected palm trees.

Applicable geographies
The model is expected to work well globally.

Model architecture
This model uses the FasterRCNN model architecture implemented in ArcGIS API for Python.

Accuracy metrics
This model has an average precision score of 75 percent.

Training data
This model has been trained on an Esri proprietary palm tree detection dataset.

Sample results
Here are a few results from the model. To view more, see this story.
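Average precision scores for detectors such as FasterRCNN are conventionally computed by matching predicted boxes to ground truth using an intersection-over-union (IoU) threshold. A minimal sketch of IoU for axis-aligned boxes (the box format `(x1, y1, x2, y2)` is illustrative, not the model's output schema):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes offset by 5 pixels: intersection 25, union 175
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

A prediction is typically counted as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold such as 0.5.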
The Vegetative Difference Image gives an easy-to-interpret visual representation of vegetative increase or decrease across two time periods. This raster function template is used to generate a visual product; the results cannot be used for analysis. The template generates an NDVI in the backend, so it requires your imagery to have the red and near-infrared bands. In the resulting image, greens indicate an increase in vegetation, while magenta indicates a decrease in vegetation.

References: Raster functions

When to use this raster function template
This template is particularly useful when trying to intuitively visualize the increase or decrease in vegetation over two time periods.

How to use this raster function template
In ArcGIS Pro, search ArcGIS Living Atlas for raster function templates to apply them to your imagery layer. You can also download the raster function template, attach it to a mosaic dataset, and publish it as an image service. This index supports many satellite sensors, such as Landsat-8, Sentinel-2, Quickbird, IKONOS, Geoeye-1, and Pleiades-1.

Applicable geographies
The template uses a standard vegetation index which is designed to work globally.
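The computation behind the template can be sketched per pixel in plain Python. The actual template operates on whole rasters inside ArcGIS; the reflectance values below are illustrative:

```python
def ndvi(red: float, nir: float) -> float:
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def ndvi_difference(red_t1, nir_t1, red_t2, nir_t2):
    """Per-pixel NDVI change between two dates: positive means greener."""
    return ndvi(red_t2, nir_t2) - ndvi(red_t1, nir_t1)

# A pixel that gained vegetation: NIR reflectance rose relative to red,
# so the difference is positive (rendered green by the template)
print(ndvi_difference(0.2, 0.3, 0.1, 0.5))
```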
An aerial imagery basemap of New Zealand in NZTM using the latest quality data from Land Information New Zealand.

LINZ Aerial Imagery Basemap - NZTM WMTS: https://ecan.maps.arcgis.com/home/item.html?id=39cf07ebf8a2413696d8fd4d80570b84
This basemap is also available in Web Mercator (WGS 84) from: https://basedatanz.maps.arcgis.com/home/item.html?id=a4ac021a9f6d4976bfb3cc6d34739b8b

The LINZ Aerial Imagery Basemap details New Zealand in high resolution, from a nationwide view all the way down to individual buildings. This basemap combines the latest high-resolution aerial imagery (down to 5 cm in urban areas) with 10 m satellite imagery to provide full coverage of mainland New Zealand, the Chathams, and other offshore islands. LINZ Basemaps are powered by data from the LINZ Data Service and other authoritative open data sources, providing you with a basemap that is free to use under an open licence. An XYZ tile API (Web Mercator only) is also available for use in web and mobile applications.

See more information or provide your feedback at https://basemaps.linz.govt.nz/. For attribution requirements and data sources, see https://www.linz.govt.nz/data/linz-data/linz-basemaps/data-attribution.
Elephants are the largest living terrestrial species. They are herbivorous animals that require 100 to 200 kilograms of food and about 230 liters of water each day. Their home range can expand up to 11,000 square kilometers. Their ability to find food and water sources comes from traditional knowledge learned over generations. This knowledge, which is important for survival, is lost if the elder elephants of the herd perish.

Elephants are endangered for many reasons. They can be killed by poachers for their tusks, or captured and tamed for social status and for the circus. Changes in the environment, such as global warming, shifting rain patterns, deforestation, and mining, can degrade their habitat, forcing these animals to move to different areas in search of food and water. This can cause conflicts with humans as elephants move into human settlements and farmlands. They can also run into electrical fences and traps.

To avoid life-threatening incidents, and for their conservation, monitoring elephants and their movements is of high importance. It is easier to monitor elephants using aerial imagery, as it does not require human intervention or disturbance in the elephants' natural habitat. Elephant detection using aerial imagery is more efficient when performed over vast areas. This deep learning model helps automate the task of detecting elephants from high-resolution aerial imagery.

Using the model
Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Fine-tuning the model
This model can be fine-tuned using the Train Deep Learning Model tool.
Follow the guide to fine-tune this model.

Input
High-resolution RGB imagery (3-13 centimeter spatial resolution).

Output
Feature class containing detected elephants.

Applicable geographies
The model is expected to work well with aerial imagery of southern African forests (South Africa, Botswana, and Namibia) or similar geographies.

Model architecture
This model uses the FasterRCNN model architecture implemented in ArcGIS API for Python.

Accuracy metrics
This model has an average precision score of 0.857 for the elephant class.

Training data
The model has been trained on The Aerial Elephant Dataset.

Limitations
This model works well only with high-resolution aerial imagery. It is trained on imagery of African bush elephants; however, it detects all kinds of elephants and is species agnostic.

Sample results
Here are a few results from the model.

Citations
Naudé, Johannes J., & Joubert, Deon. (2019). The Aerial Elephant Dataset [Dataset]. Zenodo.
https://www.marketreportanalytics.com/privacy-policy
The digital map market, currently valued at $25.55 billion in 2025, is experiencing robust growth, projected to expand at a compound annual growth rate (CAGR) of 13.39% from 2025 to 2033. This expansion is fueled by several key factors. The increasing adoption of location-based services (LBS) across various sectors, including transportation, logistics, and e-commerce, is a primary driver. Furthermore, the proliferation of smartphones and connected devices, coupled with advancements in GPS technology and mapping software, continues to fuel market growth. The rising demand for high-resolution, real-time mapping data for autonomous vehicles and smart city initiatives also significantly contributes to market expansion. Competition among established players like Google, TomTom, and ESRI, alongside emerging innovative companies, is fostering continuous improvement in map accuracy, functionality, and data accessibility. This competitive landscape drives innovation and lowers costs, making digital maps increasingly accessible to a broader range of users and applications.

However, market growth is not without its challenges. Data security and privacy concerns surrounding the collection and use of location data represent a significant restraint. Ensuring data accuracy and maintaining up-to-date map information in rapidly changing environments also pose operational hurdles. Regulatory compliance with differing data privacy laws across various jurisdictions adds another layer of complexity. Despite these challenges, the long-term outlook for the digital map market remains positive, driven by the relentless integration of location intelligence into nearly every facet of modern life, from personal navigation to complex enterprise logistics solutions. The market's segmentation (although not explicitly provided) likely includes various map types (e.g., road maps, satellite imagery, 3D maps), pricing models (subscriptions, one-time purchases), and industry verticals served.
This diversified market structure further underscores its resilience and potential for sustained growth. Recent developments include: December 2022 - The Linux Foundation partnered with some of the biggest technology companies in the world to build interoperable and open map data. The Overture Maps Foundation, as the new effort is called, is officially hosted by the Linux Foundation. The ultimate aim of the Overture Maps Foundation is to power new map products through openly available datasets that can be used and reused across applications and businesses, with each member contributing its data and resources to the mix. July 27, 2022 - Google announced the launch of its Street View experience in India in collaboration with Genesys International, an advanced mapping solutions company, and Tech Mahindra, a provider of digital transformation, consulting, and business re-engineering solutions and services. Google, Tech Mahindra, and Genesys International also plan to extend this to more than 50 cities by the end of 2022. Key drivers for this market are: growth in applications for advanced navigation systems in the automotive industry; surge in demand for Geographic Information Systems (GIS); increased adoption of connected devices and the Internet. Notable trends are: surge in demand for GIS and GNSS to influence the adoption of digital map technology.
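Projections like the ones quoted above follow from the compound-growth formula, value × (1 + CAGR)^years. As an illustrative check using the digital map figures ($25.55 billion in 2025 at a 13.39% CAGR):

```python
def project(value: float, cagr: float, years: int) -> float:
    """Compound a starting value at a constant annual growth rate."""
    return value * (1 + cagr) ** years

# Digital map market: $25.55B in 2025, 13.39% CAGR, projected to 2033
projection_2033 = project(25.55, 0.1339, 2033 - 2025)
print(round(projection_2033, 2))  # roughly $70B
```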
This deep learning model is used to detect and segment trees in high-resolution drone or aerial imagery. Tree detection can be used for applications such as vegetation management, forestry, and urban planning. High-resolution aerial and drone imagery is well suited to tree detection due to its high spatio-temporal coverage. This deep learning model is based on DeepForest and has been trained on data from the National Ecological Observatory Network (NEON). The model also uses the Segment Anything Model (SAM) by Meta.

Using the model
Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Fine-tuning the model
This model cannot be fine-tuned using ArcGIS tools.

Input
8-bit, 3-band high-resolution (10-25 cm) imagery.

Output
Feature class containing separate masks for each tree.

Applicable geographies
The model is expected to work well in the United States.

Model architecture
This model is based upon the DeepForest Python package, which uses the RetinaNet model architecture implemented in torchvision, together with the open-source Segment Anything Model (SAM) by Meta.

Accuracy metrics
This model has a precision score of 0.66 and a recall of 0.79.

Training data
This model has been trained on the NEON Tree Benchmark dataset, provided by the Weecology Lab at the University of Florida. The model also uses the Segment Anything Model (SAM) by Meta, which is trained on the SA-1B dataset comprising a diverse set of 11 million images and over 1 billion masks.

Sample results
Here are a few results from the model.

Citations
Weinstein, B.G.; Marconi, S.; Bohlman, S.; Zare, A.; White, E. Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks. Remote Sens.
2019, 11, 1309.

Weinstein, B.; Marconi, S.; Bohlman, S.; Zare, A.; White, E.P. Geographic Generalization in Airborne RGB Deep Learning Tree Detection. bioRxiv 790071; doi: https://doi.org/10.1101/790071
This app is part of Indicators of the Planet. Please see https://livingatlas.arcgis.com/indicators

This layer presents detectable thermal activity from VIIRS satellites for the last 7 days. VIIRS Thermal Hotspots and Fire Activity is a product of NASA's Land, Atmosphere Near real-time Capability for EOS (LANCE) Earth Observation Data, part of NASA's Earth Science Data.

Source: NASA LANCE - VNP14IMG_NRT active fire detection - World
Scale/Resolution: 375 meter
Update Frequency: Hourly, using the aggregated live feed methodology
Area Covered: World

What can I do with this layer?
This layer represents the most frequently updated and most detailed global remotely sensed wildfire information. Detection attributes include time, location, and intensity. It can be used to track the location of fires from the recent past, from a few hours up to seven days behind real time. This layer also shows the location of wildfire over the past 7 days as a time-enabled service, so the progress of fires over that timeframe can be reproduced as an animation.

The VIIRS thermal activity layer can be used to visualize and assess wildfires worldwide. However, it should be noted that this dataset contains many "false positives" (e.g., oil/natural gas wells or volcanoes) since the satellite will detect any large thermal signal. Fire points in this service are generally available within 3 1/4 hours after detection by a VIIRS device. LANCE estimates availability at around 3 hours after detection, and Esri live feeds update this feature layer every 15 minutes from LANCE. Even though these data display as point features, each point in fact represents a pixel that is >= 375 m high and wide. A point feature means that somewhere in this pixel at least one "hot" spot was detected, which may be a fire.

VIIRS is a scanning radiometer aboard the Suomi NPP and NOAA-20 satellites that collects imagery and radiometric measurements of the land, atmosphere, cryosphere, and oceans in several visible and infrared bands.
The VIIRS Thermal Hotspots and Fire Activity layer is a live feed from a subset of the overall VIIRS imagery, in particular from NASA's VNP14IMG_NRT active fire detection product. The data are automatically downloaded from LANCE, NASA's near real-time data and imagery site, every 15 minutes. The 375 m data complements the 1 km Moderate Resolution Imaging Spectroradiometer (MODIS) Thermal Hotspots and Fire Activity layer; the two show good agreement in hotspot detection, but the improved spatial resolution of the 375 m data provides a greater response over fires of relatively small areas and improved mapping of large fire perimeters.

Attribute information
Latitude and Longitude: The center point location of the 375 m (approximately) pixel flagged as containing one or more fires/hotspots.
Satellite: Whether the detection was picked up by the Suomi NPP satellite (N) or the NOAA-20 satellite (1). For best results, use the virtual field WhichSatellite, defined by an Arcade expression, which gives the complete satellite name.
Confidence: The detection confidence is a quality flag of the individual hotspot/active fire pixel. This value is based on a collection of intermediate algorithm quantities used in the detection process and is intended to help users gauge the quality of individual hotspot/fire pixels. Confidence values are set to low, nominal, and high. Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15 K) in the mid-infrared channel I4. Nominal confidence pixels are those free of potential sun glint contamination during the day and marked by a strong (>15 K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels.

Please note: Low confidence nighttime pixels occur only over the geographic area extending from 11 deg E to 110 deg W and 7 deg N to 55 deg S.
This area describes the region of influence of the South Atlantic Magnetic Anomaly, which can cause spurious brightness temperatures in the mid-infrared channel I4, leading to potential false positive alarms. These have been removed from the NRT data distributed by FIRMS.

FRP: Fire Radiative Power. Depicts the pixel-integrated fire radiative power in MW (megawatts). FRP provides information on the measured radiant heat output of detected fires. The amount of radiant heat energy liberated per unit time (the Fire Radiative Power) is thought to be related to the rate at which fuel is being consumed (Wooster et al. (2005)).
DayNight: D = daytime fire, N = nighttime fire.

Note about near real-time data: Near real-time data is not checked thoroughly before it is posted on LANCE or downloaded and posted to the Living Atlas. NASA's goal is to get vital fire information to its customers within three hours of observation time. However, the data is screened by a confidence algorithm which seeks to help users gauge the quality of individual hotspot/fire points. Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15 K) in the mid-infrared channel I4. Nominal confidence pixels are those free of potential sun glint contamination during the day and marked by a strong (>15 K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels.

This layer is provided for informational purposes and is not monitored 24/7 for accuracy and currency.
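Hotspot records like these can be screened client-side using the Confidence attribute. A minimal sketch (the record dictionaries and field names are hypothetical, modeled on the attributes described above) that keeps only nominal- and high-confidence detections:

```python
# Hypothetical hotspot records modeled on the attributes described above
hotspots = [
    {"confidence": "high",    "daynight": "D", "frp": 45.2},
    {"confidence": "low",     "daynight": "D", "frp": 3.1},
    {"confidence": "nominal", "daynight": "N", "frp": 12.7},
]

def credible(rec) -> bool:
    """Keep detections less likely to be sun-glint false positives."""
    return rec["confidence"] in ("nominal", "high")

kept = [r for r in hotspots if credible(r)]
total_frp = sum(r["frp"] for r in kept)  # combined radiative power, MW
print(len(kept), total_frp)
```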
https://www.archivemarketresearch.com/privacy-policy
The geospatial solutions market is experiencing robust growth, projected to reach a substantial market size of $348.8 billion in 2025. While the provided CAGR is missing, considering the rapid advancements in technologies like AI, IoT, and cloud computing driving the adoption of geospatial solutions across diverse sectors, a conservative estimate of the Compound Annual Growth Rate (CAGR) for the forecast period (2025-2033) would be around 8%. This growth is fueled by increasing demand for precise location intelligence in various applications, including utility management, business operations optimization, advanced transportation systems, defense and intelligence initiatives, infrastructure development, and natural resource exploration. The market is segmented by hardware, software, services, and applications, each showing significant growth potential. Hardware components like GPS receivers and sensors are witnessing strong demand, while software solutions are expanding in sophistication to incorporate advanced analytics and AI-powered capabilities. The services segment comprises data acquisition, processing, and analysis, fueling the industry's overall growth trajectory. Key players such as HERE Technologies, Esri, and Hexagon are driving innovation and market expansion through strategic partnerships and technological advancements. The geographic distribution shows strong demand across North America, Europe, and Asia-Pacific, with the latter expected to emerge as a key growth driver due to rapid urbanization and infrastructure development.

The market's growth is expected to be sustained by several factors. The increasing availability of high-resolution satellite imagery and aerial data, coupled with advancements in data processing and analytics, is empowering businesses and governments to make more informed decisions based on location-specific insights.
Moreover, the growing adoption of cloud-based geospatial platforms is reducing the cost and complexity of implementing geospatial solutions, further stimulating market expansion. Challenges such as data privacy concerns and the need for skilled professionals to handle complex geospatial data remain; however, the overall growth outlook for the geospatial solutions market remains highly positive, promising significant opportunities for stakeholders across the value chain.
This layer presents detectable thermal activity from MODIS satellites for the last 7 days. MODIS Global Fires is a product of NASA's Earth Observing System Data and Information System (EOSDIS), part of NASA's Earth Science Data. EOSDIS integrates remote sensing and GIS technologies to deliver global MODIS hotspot/fire locations to natural resource managers and other stakeholders around the world.

Consumption best practices:
As a service that is subject to very high usage, ensure peak performance and accessibility of your maps and apps by avoiding the use of non-cacheable relative Date/Time field filters. To accommodate filtering events by Date/Time, we suggest using the included "Age" fields, which maintain the number of days or hours since a record was created or last modified, compared to the last service update. These queries fully support the ability to cache a response, allowing common query results to be efficiently provided to users in a high-demand service environment. When ingesting this service in your applications, avoid using POST requests whenever possible. These requests can compromise performance and scalability during periods of high usage because they, too, are not cacheable.

Source: NASA FIRMS - Active Fire Data - World
Scale/Resolution: 1 km
Update Frequency: Every 30 minutes, using the Aggregated Live Feed methodology
Area Covered: World

What can I do with this layer?
The MODIS thermal activity layer can be used to visualize and assess wildfires worldwide. However, it should be noted that this dataset contains many "false positives" (e.g., oil/natural gas wells or volcanoes) since the satellite will detect any large thermal signal.

Additional information
MODIS stands for MODerate resolution Imaging Spectroradiometer. The MODIS instrument is on board NASA's Earth Observing System (EOS) Terra (EOS AM) and Aqua (EOS PM) satellites. The orbit of the Terra satellite goes from north to south across the equator in the morning, and Aqua passes south to north over the equator in the afternoon, resulting in global coverage every 1 to 2 days. The EOS satellites have a ±55 degree scanning pattern and orbit at 705 km with a 2,330 km swath width. It takes approximately 2-4 hours after satellite overpass for MODIS Rapid Response to process the data, and for the Fire Information for Resource Management System (FIRMS) to update the website.
Occasionally, hardware errors can result in processing delays beyond the 2-4 hour range. Additional information on the MODIS system status can be found at MODIS Rapid Response.

Attribute Information
Latitude and Longitude: The center point location of the 1 km (approx.) pixel flagged as containing one or more fires/hotspots (fire size is not 1 km, but variable). Stored by Point Geometry. See What does a hotspot/fire detection mean on the ground?
Brightness: The brightness temperature measured (in Kelvin) using the MODIS channels 21/22 and channel 31.
Scan and Track: The actual spatial resolution of the scanned pixel. Although the algorithm works at 1 km resolution, the MODIS pixels get bigger toward the edge of the scan. See What does scan and track mean?
Date and Time: Acquisition date of the hotspot/active fire pixel and time of satellite overpass in UTC (client presentation in local time). Stored by Acquisition Date.
Acquisition Date: Derived Date/Time field combining the Date and Time attributes.
Satellite: Whether the detection was picked up by the Terra or Aqua satellite.
Confidence: The detection confidence is a quality flag of the individual hotspot/active fire pixel.
Version: Version refers to the processing collection and source of data. The number before the decimal refers to the collection (e.g., MODIS Collection 6). The number after the decimal indicates the source of Level 1B data; data processed in near-real time by MODIS Rapid Response will have the source code "CollectionNumber.0". Data sourced from MODAPS (with a 2-month lag) and processed by FIRMS using the standard MOD14/MYD14 Thermal Anomalies algorithm will have the source code "CollectionNumber.x". For example, data with the version listed as 5.0 is collection 5 processed by MODIS Rapid Response, and data with the version listed as 5.1 is collection 5 data processed by FIRMS using Level 1B data from MODAPS.
Bright.T31: Channel 31 brightness temperature (in Kelvin) of the hotspot/active fire pixel.
FRP: Fire Radiative Power.
Depicts the pixel-integrated fire radiative power in MW (megawatts). FRP provides information on the measured radiant heat output of detected fires. The amount of radiant heat energy liberated per unit time (the Fire Radiative Power) is thought to be related to the rate at which fuel is being consumed (Wooster et al. (2005)).
DayNight: The standard processing algorithm uses the solar zenith angle (SZA) to threshold the day/night value; if the SZA exceeds 85 degrees, the pixel is assigned a night value, while SZA values less than 85 degrees are assigned a daytime value. For the NRT algorithm, the day/night flag is assigned by ascending (day) vs. descending (night) observation. It is expected that the NRT assignment of the day/night flag will be amended to be consistent with the standard processing.
Hours Old: Derived field that provides the age of a record, in hours, between the acquisition date/time and the latest update date/time. 0 = less than 1 hour ago, 1 = less than 2 hours ago, 2 = less than 3 hours ago, and so on.

Revisions
June 22, 2022: Added 'HOURS_OLD' field to enhance filtering of data. Added 'Last 7 days' layer to extend data to match the time range of the VIIRS offering. Added field-level descriptions.

This map is provided for informational purposes and is not monitored 24/7 for accuracy and currency. If you would like to be alerted to potential issues or simply see when this service will update next, please visit our Live Feed Status Page!
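The consumption best practices above recommend filtering on the derived age fields (such as the HOURS_OLD field added in the June 2022 revision) with plain GET requests, rather than non-cacheable relative Date/Time expressions or POST requests. A minimal sketch in Python; the service URL and output field names here are placeholders, so substitute the real layer URL and schema from the item page:

```python
from urllib.parse import urlencode

# Placeholder endpoint -- substitute the actual MODIS Thermal Hotspots
# feature layer URL from the item page.
SERVICE_URL = ("https://services.arcgis.com/example/arcgis/rest/services/"
               "MODIS_Thermal/FeatureServer/0/query")

def build_age_query(max_hours_old):
    """Build a cacheable GET query string that filters on the derived
    HOURS_OLD field instead of a relative Date/Time expression."""
    params = {
        "where": f"HOURS_OLD < {int(max_hours_old)}",
        "outFields": "ACQ_DATE,CONFIDENCE,FRP",  # hypothetical field list
        "f": "json",
    }
    return SERVICE_URL + "?" + urlencode(params)

# Detections from the last 3 hours, as a plain GET request (cache friendly).
print(build_age_query(3))
```

Because the `where` clause compares against a small set of integer ages rather than an ever-changing timestamp, identical requests from many clients can be served from cache.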
https://www.marketreportanalytics.com/privacy-policy
The Argentina Satellite Imagery Services market, valued at $40 million in 2025, is projected to experience robust growth, driven by increasing government investments in infrastructure development, particularly within the geospatial data acquisition and mapping sectors. The rising demand for precise location intelligence across various applications, including natural resource management, surveillance and security, and disaster management, further fuels market expansion. Key applications like precision agriculture and urban planning are also contributing to market growth, as businesses and government agencies leverage satellite imagery for improved decision-making and resource optimization. The presence of established players like ESRI and Airbus, alongside emerging local firms, indicates a competitive yet dynamic market landscape. However, challenges remain, primarily concerning data accessibility, affordability for smaller businesses, and potential regulatory hurdles related to data privacy and security. The construction, transportation, and logistics sectors are expected to witness significant growth in satellite imagery adoption due to the need for efficient infrastructure planning and risk mitigation. Furthermore, the expanding military and defense applications are expected to contribute to market expansion throughout the forecast period. While specific data for Argentina's market segmentation is unavailable, the overall market trajectory mirrors global trends, projecting a Compound Annual Growth Rate (CAGR) of 6.66% from 2025 to 2033. This growth is expected to be further fueled by technological advancements in satellite imagery resolution and analytics. The consistent 6.66% CAGR signifies a steady increase in demand for advanced geospatial solutions. Government initiatives promoting digitalization and smart city development are key catalysts, driving adoption across various sectors. 
While the market faces challenges, such as high initial investment costs for technology and infrastructure, the long-term benefits of improved decision-making and operational efficiencies outweigh these barriers. The market is expected to mature gradually, with a shift toward cloud-based solutions and advanced analytics becoming increasingly prevalent. The presence of both international and domestic players ensures a competitive market fostering innovation and affordability. This combination of factors positions Argentina's satellite imagery services market for sustained growth in the coming years.

Recent developments include:
July 2023: Maxar Technologies, a provider of space services and geospatial intelligence, announced the initial launch of its Maxar Geospatial Platform (MGP). The platform offers rapid, user-friendly access to Earth intelligence and is intended to simplify the discovery, procurement, and integration of geospatial data and analytics. Users of MGP gain access to Maxar's geospatial content, including high-resolution satellite imagery, imagery basemaps, 3D models, analysis-ready datasets, and image-based change detection and analytical outputs.
March 2023: The Argentinean remote sensing constellation SAOCOM has contributed invaluable data, and the European Space Agency (ESA) engaged Earth observation experts to explore and propose innovative applications for this dataset. The Argentine space agency CONAE, responsible for overseeing and controlling the SAOCOM satellites, is actively working on requests for data delivery proposals. The SAOCOM mission, an integral part of ESA's Third Party Missions program, features two spacecraft, SAOCOM 1A and 1B, designed to collect polarimetric L-band synthetic aperture radar data.
Key drivers for this market include the increasing adoption of location-based services and growing satellite data usage. Potential restraints include data accessibility and affordability constraints for smaller businesses, along with regulatory hurdles related to data privacy and security. Notable trends: natural resource management is expected to hold a significant share of the market.
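The projection above follows from standard compound-growth arithmetic: a $40 million base in 2025 growing at a 6.66% CAGR over the 8 years to 2033. A quick sanity check in Python:

```python
def project_value(base, cagr, years):
    """Compound a base market value forward at a constant CAGR."""
    return base * (1 + cagr) ** years

# $40 million in 2025 compounded at 6.66% for 8 years (2025 -> 2033).
value_2033 = project_value(40.0, 0.0666, 8)
print(round(value_2033, 1))  # ~67.0, i.e. roughly $67 million by 2033
```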
Attribution 4.0 (CC BY 4.0)
https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a fine-tuned model for New Zealand, derived from a pre-trained model from Esri. It has been trained using LINZ aerial imagery (0.075 m spatial resolution) for Wellington. You can see its output in this app: https://niwa.maps.arcgis.com/home/item.html?id=1ca4ee42a7f44f02a2adcf198bc4b539

Solar power is environmentally friendly and is being promoted by government agencies and power distribution companies. Government agencies can use solar panel detection to offer incentives such as tax exemptions and credits to residents who have installed solar panels. Policymakers can use it to gauge adoption and frame schemes to spread awareness and promote solar power utilization in areas that lack its use. This information can also serve as an input to solar panel installation and utility companies and help redirect their marketing efforts.

Traditional ways of obtaining information on solar panel installation, such as surveys and on-site visits, are time-consuming and error-prone. Deep learning models are highly capable of learning complex semantics and can produce superior results. Use this deep learning model to automate the task of solar panel detection, reducing the time and effort required significantly.

Licensing requirements
ArcGIS Desktop - ArcGIS Image Analyst extension for ArcGIS Pro; or ArcGIS Enterprise - ArcGIS Image Server with Raster Analytics configured; or ArcGIS Online - ArcGIS Image for ArcGIS Online.

Using the model
Follow the Esri guide to using their USA Solar Panel detection model (https://www.arcgis.com/home/item.html?id=c2508d72f2614104bfcfd5ccf1429284). Before using this model, ensure that the supported deep learning libraries are installed.
For more details, check Deep Learning Libraries Installer for ArcGIS.
Note: Deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.

Input
High-resolution (5-15 cm) RGB imagery.

Output
Feature class containing detected solar panels.

Applicable geographies
The model is expected to work well in New Zealand.

Model architecture
This model uses the MaskRCNN model architecture implemented in ArcGIS API for Python.

Accuracy metrics
This model has an average precision score of 0.924.

NOTE: Use at your own risk.
Item Page Created: 2022-02-09 02:24
Item Page Last Modified: 2025-04-05 16:30
Owner: NIWA_OpenData
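The average precision reported above is derived from precision and recall over matched detections. As a simplified illustration (using hypothetical counts, not this model's actual validation numbers), precision and recall can be computed from true positive, false positive, and false negative counts:

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision and recall for an object-detection evaluation, given
    counts of matched (TP) and unmatched (FP, FN) detections."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical counts: 92 panels detected correctly, 8 spurious
# detections, and 10 panels missed.
p, r = precision_recall(92, 8, 10)
print(round(p, 2), round(r, 2))  # 0.92 0.9
```

Average precision summarizes this trade-off across detection-confidence thresholds rather than at a single operating point.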
This layer presents detectable thermal activity from VIIRS satellites for the last 7 days. VIIRS Thermal Hotspots and Fire Activity is a product of NASA’s Land, Atmosphere Near real-time Capability for EOS (LANCE) Earth Observation Data, part of NASA's Earth Science Data.

Consumption Best Practices:
As a service that is subject to very high usage, ensure peak performance and accessibility of your maps and apps by avoiding the use of non-cacheable relative Date/Time field filters. To accommodate filtering events by Date/Time, we suggest using the included "Age" fields that maintain the number of days or hours since a record was created or last modified, compared to the last service update. These queries fully support the ability to cache a response, allowing common query results to be efficiently provided to users in a high-demand service environment.

When ingesting this service in your applications, avoid using POST requests whenever possible. These requests can compromise performance and scalability during periods of high usage because they, too, are not cacheable.

Source: NASA LANCE - VNP14IMG_NRT active fire detection - World
Scale/Resolution: 375 meter
Update Frequency: Hourly, using the aggregated live feed methodology
Area Covered: World

What can I do with this layer?
This layer represents the most frequently updated and most detailed global remotely sensed wildfire information. Detection attributes include time, location, and intensity. It can be used to track the location of fires from the recent past, a few hours up to seven days behind real time. This layer also shows the location of wildfires over the past 7 days as a time-enabled service so that the progress of fires over that timeframe can be reproduced as an animation.

The VIIRS thermal activity layer can be used to visualize and assess wildfires worldwide. However, it should be noted that this dataset contains many “false positives” (e.g., oil/natural gas wells or volcanoes) since the satellite will detect any large thermal signal.

Fire points in this service are generally available within 3 1/4 hours after detection by a VIIRS device.
LANCE estimates availability at around 3 hours after detection, and Esri Live Feeds updates this feature layer every 15 minutes from LANCE.

Even though these data display as point features, each point in fact represents a pixel that is >= 375 m high and wide. A point feature means that somewhere in this pixel at least one "hot" spot was detected, which may be a fire.

VIIRS is a scanning radiometer device aboard the Suomi NPP, NOAA-20, and NOAA-21 satellites that collects imagery and radiometric measurements of the land, atmosphere, cryosphere, and oceans in several visible and infrared bands. The VIIRS Thermal Hotspots and Fire Activity layer is a live feed from a subset of the overall VIIRS imagery, in particular from NASA's VNP14IMG_NRT active fire detection product. The data are automatically downloaded from LANCE, NASA's near real-time data and imagery site, every 15 minutes.

The 375 m data complement the 1 km Moderate Resolution Imaging Spectroradiometer (MODIS) Thermal Hotspots and Fire Activity layer; both show good agreement in hotspot detection, but the improved spatial resolution of the 375 m data provides a greater response over fires of relatively small areas and improved mapping of large fire perimeters.

Attribute information
Latitude and Longitude: The center point location of the 375 m (approximately) pixel flagged as containing one or more fires/hotspots.
Satellite: Whether the detection was picked up by the Suomi NPP satellite (N), NOAA-20 satellite (1), or NOAA-21 satellite (2). For best results, use the virtual field WhichSatellite, defined by an Arcade expression, which gives the complete satellite name.
Confidence: The detection confidence is a quality flag of the individual hotspot/active fire pixel. This value is based on a collection of intermediate algorithm quantities used in the detection process. It is intended to help users gauge the quality of individual hotspot/fire pixels. Confidence values are set to low, nominal, and high.
Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15 K) in the mid-infrared channel I4. Nominal confidence pixels are those free of potential sun glint contamination during the day and marked by strong (>15 K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels.

Please note: Low confidence nighttime pixels occur only over the geographic area extending from 11 deg E to 110 deg W and 7 deg N to 55 deg S. This area describes the region of influence of the South Atlantic Magnetic Anomaly, which can cause spurious brightness temperatures in the mid-infrared channel I4, leading to potential false positive alarms. These have been removed from the NRT data distributed by FIRMS.

FRP: Fire Radiative Power. Depicts the pixel-integrated fire radiative power in MW (megawatts). FRP provides information on the measured radiant heat output of detected fires. The amount of radiant heat energy liberated per unit time (the Fire Radiative Power) is thought to be related to the rate at which fuel is being consumed (Wooster et al. (2005)).
DayNight: D = daytime fire, N = nighttime fire.
Hours Old: Derived field that provides the age of a record, in hours, between the acquisition date/time and the latest update date/time. 0 = less than 1 hour ago, 1 = less than 2 hours ago, 2 = less than 3 hours ago, and so on.

Additional information can be found on the NASA FIRMS site FAQ.

Note about near real-time data:
Near real-time data is not checked thoroughly before it is posted on LANCE or downloaded and posted to the Living Atlas. NASA's goal is to get vital fire information to its customers within three hours of observation time. However, the data is screened by a confidence algorithm which seeks to help users gauge the quality of individual hotspot/fire points.
Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15 K) in the mid-infrared channel I4. Nominal confidence pixels are those free of potential sun glint contamination during the day and marked by strong (>15 K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels.

Revisions
March 7, 2024: Updated to include source data from the NOAA-21 satellite.
September 15, 2022: Updated to include the 'Hours_Old' field. Time series has been disabled by default, but is still available.
July 5, 2022: Terms of Use updated to the Esri Master License Agreement, no longer stating that a subscription is required.

This layer is provided for informational purposes and is not monitored 24/7 for accuracy and currency. If you would like to be alerted to potential issues or simply see when this service will update next, please visit our Live Feed Status Page!
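The confidence rules described above (sun glint or a weak, <15 K anomaly gives low confidence; a strong, >15 K anomaly free of daytime sun glint gives nominal; saturated pixels give high) can be sketched as a simple decision function. This is an illustrative simplification, not NASA's actual detection algorithm:

```python
def viirs_confidence(temp_anomaly_k, saturated, possible_sun_glint, daytime):
    """Simplified sketch of the VIIRS confidence classes described above.

    temp_anomaly_k: relative temperature anomaly in channel I4, in Kelvin.
    saturated: whether the pixel is saturated.
    possible_sun_glint: whether sun glint contamination is possible.
    daytime: whether this is a daytime observation (glint only matters by day).
    """
    if saturated:
        return "high"
    if temp_anomaly_k > 15 and not (daytime and possible_sun_glint):
        return "nominal"
    return "low"

print(viirs_confidence(12.0, False, True, True))    # low
print(viirs_confidence(20.0, False, False, True))   # nominal
print(viirs_confidence(30.0, True, False, False))   # high
```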
These line shapefiles trace apparent topographic and air-photo lineaments in various counties in Colorado. They were made in order to identify possible fault and fracture systems that might be conduits for geothermal fluids, as part of a DOE reconnaissance geothermal exploration program. Geothermal fluids commonly utilize faults and fractures in competent rocks as conduits for fluid flow. Geothermal exploration involves finding areas of high near-surface temperature gradients, along with a suitable "plumbing system" that can provide the necessary permeability. Geothermal power plants can sometimes be built where temperature and flow rates are high. This line shapefile is an attempt to use desktop GIS to delineate possible fault and fracture orientations and locations in highly prospective areas prior to an initial site visit. Geochemical sampling and geologic mapping could then be centered around these possible faults and fractures. To do this, georeferenced topographic maps and aerial photographs were utilized in an existing GIS, using ESRI ArcMap 10.0 software. The USA_Topo_Maps and World_Imagery map layers were chosen from the GIS server at server.arcgisonline.com, using a UTM Zone 13 NAD27 projection. This line shapefile was then constructed over features that appeared to be through-going structural lineaments in both the aerial photographs and the topographic layers, taking care to avoid manmade features such as roads, fence lines, and utility rights-of-way. Still, it is unknown what actual features these lineaments, if they exist, represent. Although the shapefiles are arranged by county, not all areas within any county have been examined for lineaments. Work was focused on satellite thermal infrared anomalies, known hot springs or wells, or other evidence of geothermal systems. Finally, lineaments may be displaced somewhat from their actual location, due to such factors as shadow effects with low sun angles in the aerial photographs.
Credits: These lineament shapefiles were created by Geothermal Development Associates as part of a geothermal geologic reconnaissance performed by Flint Geothermal, LLC, of Denver, Colorado.
Use Limitation: These shapefiles were constructed as an aid to geothermal exploration in preparation for a site visit for field checking. We make no claims as to the existence of the lineaments, or as to their location, orientation, and/or nature.
Attribution 4.0 (CC BY 4.0)
https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This image service features orthorectified aerial photographs collected April 29 - May 2, 2020 by NV5 Geospatial, powered by Quantum Spatial, under contract to the University of Rhode Island. The source images are 4-band (RGBI: red, green, blue, near infrared), have a 6-inch spatial resolution, and were collected under leaf-off conditions. These images will be made available for traditional file download by RIGIS when resources are available.

Metadata (not currently available)

Web services
ArcGIS image service, WGS84 Web Mercator (EPSG 102700)
ArcGIS image service, NAD83 RI State Plane feet (EPSG 3438)
ArcGIS image service, NAD83 (Epoch 2011) RI State Plane feet (EPSG 6568)
KMZ, WGS84 Web Mercator (EPSG 102700)
Tile index shapefile (not currently available)
Traditional file listing (not currently available)

These data were contributed to the Rhode Island Geographic Information System (RIGIS; https://www.rigis.org) by a partnership between the University of Rhode Island Environmental Data Center, the RI Department of Administration Division of Statewide Planning, and the RI Department of Environmental Management.
https://www.marketreportanalytics.com/privacy-policy
The global satellite remote sensing software market is experiencing robust growth, driven by increasing demand across diverse sectors. While precise market size figures for 2025 aren't provided, considering a plausible CAGR of 10% (a conservative estimate given the technological advancements and expanding applications) and an assumed 2024 market size of $2 billion, we can project a 2025 market valuation of approximately $2.2 billion. This expansion is fueled by several key factors. Firstly, the agricultural sector is leveraging satellite imagery for precision farming, crop monitoring, and yield prediction, significantly enhancing efficiency and productivity. Secondly, advancements in water resource management are heavily reliant on remote sensing data for efficient irrigation and flood control. Furthermore, forest management and conservation efforts utilize this technology for deforestation monitoring and biodiversity assessment. The public sector, including government agencies and research institutions, is also a major consumer, relying on these tools for environmental monitoring, disaster response, and urban planning. The market is segmented by software type (open-source and non-open-source) and application, with non-open-source solutions currently commanding a larger share due to their advanced features and robust support. Growth is further propelled by continuous technological innovation leading to more sophisticated analytics capabilities and easier data accessibility. However, certain restraints hinder market expansion. High initial investment costs for software licenses and hardware can pose a significant barrier, particularly for smaller organizations. Furthermore, the need for specialized expertise to interpret and analyze the complex satellite data can limit widespread adoption. Data security and privacy concerns related to sensitive geographic information are also emerging challenges. 
Despite these limitations, the long-term outlook for the satellite remote sensing software market remains positive, fueled by ongoing technological advancements, increased government investments in space-based technologies, and the growing recognition of its importance in various sectors. The market is expected to continue its growth trajectory, creating opportunities for established players and new entrants alike. The diverse range of applications and continued integration with other technologies like AI and machine learning will significantly shape the future landscape of this market.
Land cover describes the surface of the earth. Land-cover maps are useful in urban planning, resource management, change detection, agriculture, and a variety of other applications in which information related to the earth's surface is required. Land-cover classification is a complex exercise and is difficult to capture using traditional means. Deep learning models are highly capable of learning these complex semantics and can produce superior results.

There are a few public datasets for land cover, but the spatial and temporal coverage of these public datasets may not always meet the user’s requirements. It is also difficult to create datasets for a specific time, as doing so requires expertise and time. Use this deep learning model to automate the manual process and reduce the required time and effort significantly.

Using the model
Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Fine-tuning the model
This model can be fine-tuned using the Train Deep Learning Model tool. Follow the guide to fine-tune this model.

Input
8-bit, 3-band very high-resolution (10 cm) imagery.

Output
Classified raster with the same 8 classes as the LA County land-cover dataset.

Applicable geographies
The model is expected to work well in the United States and will produce the best results in the urban areas of California.

Model architecture
This model uses the UNet model architecture implemented in ArcGIS API for Python.

Accuracy metrics
This model has an overall accuracy of 84.8%.
The table below summarizes the precision, recall, and F1 score of the model on the validation dataset:

Class            Precision  Recall    F1 Score
Tree Canopy      0.804389   0.846152  0.824742
Grass/Shrubs     0.719993   0.627278  0.670445
Bare Soil        0.892700   0.909958  0.901246
Water            0.980885   0.987499  0.984181
Buildings        0.922202   0.945032  0.933478
Roads/Railroads  0.869637   0.862921  0.866266
Other Paved      0.811465   0.811961  0.811713
Tall Shrubs      0.707674   0.638274  0.671185

Training data
This model has been trained on the very high-resolution land-cover dataset produced by LA County.

Limitations
Since the model is trained on imagery of urban areas of LA County, it will work best in urban areas of California or similar geographies.
The model is trained on a limited set of classes and may misclassify other types of LULC classes.

Sample results
Here are a few results from the model.
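The F1 scores in the table are the harmonic mean of precision and recall, which can be verified directly from the other two columns; for example, the Tree Canopy row:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Check the Tree Canopy row of the table above.
print(round(f1_score(0.804389, 0.846152), 6))  # 0.824742
```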