Sometimes a basic solid color for your map's labels and text just isn't going to cut it. Here is an ArcGIS Pro style with light and dark gradient fills and shadow/glow effects that you can apply to map text via the "Text fill symbol" picker in your label pane. Level up those labels! Make them look touchable. Glassy. Shady. Intriguing. Find a how-to here. Save this style, add it to your ArcGIS Pro project, then use it for any text (including labels). UPDATE: I've added a symbol that makes text look like it is being illuminated from below, casting a shadow upwards and behind. Pretty dramatic if you ask me. Happy Mapping! John Nelson
Inspired by the book Dirkzwager’s Guide to the New-Waterway, Rotterdam, Dordrecht, Europoort and Botlek for 1978, this style re-creates its crisp modernist colors balanced with charming hand-drawn landcover features, and incorporates the tangible variability of print ink and aged paper. I was shown a wonderful example, provided by Eelco Berghuis, which was a gift from his grandfather. So I sampled colors and created fill and line symbol features with a print-like texture and bleed. When applied (admittedly pretty haphazardly) to New York City (New Amsterdam), for example, the style looks like this... And here are the style elements that comprise it... Happy Harbor Mapping! John Nelson
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
scripts.zip
arcgisTools.atbx:
terrainDerivatives: make terrain derivatives from a digital terrain model (Band 1 = TPI (50 m radius circle), Band 2 = square root of slope, Band 3 = TPI (annulus), Band 4 = hillshade, Band 5 = multidirectional hillshades, Band 6 = slopeshade).
rasterizeFeatures: convert vector polygons to raster masks (1 = feature, 0 = background).

makeChips.R: R function to break terrain derivatives and chips into image chips of a defined size.
makeTerrainDerivatives.R: R function to generate 6-band terrain derivatives from digital terrain data (same as the ArcGIS Pro tool).
merge_logs.R: R script to merge training logs into a single file.
predictToExtents.ipynb: Python notebook to use the trained model to predict to new data.
trainExperiments.ipynb: Python notebook used to train semantic segmentation models using PyTorch and the Segmentation Models package.
assessmentExperiments.ipynb: Python notebook to generate assessment metrics using PyTorch and the torchmetrics library.
graphs_results.R: R code to make graphs with ggplot2 to summarize results.
makeChipsList.R: R code to generate lists of chips in a directory.
makeMasks.R: R function to make raster masks from vector data (same as the rasterizeFeatures ArcGIS Pro tool).
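As a rough illustration of what the chip-making step does, here is a Python sketch (not the makeChips.R code itself): split a raster into square chips of a defined size, dropping partial tiles at the edges.

```python
# Minimal chip-making sketch: a nested list stands in for a raster band.
def make_chips(raster, chip_size):
    """raster: 2D list of cell values; returns a list of chip_size x chip_size tiles."""
    rows, cols = len(raster), len(raster[0])
    chips = []
    for r in range(0, rows - chip_size + 1, chip_size):
        for c in range(0, cols - chip_size + 1, chip_size):
            chips.append([row[c:c + chip_size] for row in raster[r:r + chip_size]])
    return chips

grid = [[r * 8 + c for c in range(8)] for r in range(8)]
chips = make_chips(grid, 4)   # an 8x8 grid yields four 4x4 chips
print(len(chips))             # 4
```

The real scripts additionally pair each derivative chip with its corresponding mask chip; the tiling logic is the same.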
vfillDL.zip
dems: LiDAR DTM data partitioned into training, three testing, and two validation datasets. Original DTM data were obtained from 3DEP (https://www.usgs.gov/3d-elevation-program) and the WV GIS Technical Center (https://wvgis.wvu.edu/).
extents: extents of the training, testing, and validation areas. These extents were defined by the researchers.
vectors: vector features representing valley fills, partitioned into separate training, testing, and validation datasets. Extents were created by the researchers.
The Topography Toolbox has been updated and expanded for ArcGIS Pro. Tools calculate:
McCune and Keon (2002) Heat Load Index
Slope Position Classification
Topographic Convergence/Wetness Index
Topographic Position Index
Multiscale Topographic Position Index
Height Above Nearest Drainage
Height Above River
Vector Ruggedness Measure
Localized Vector Ruggedness Measure
Wind Exposure/Shelter Index
Hypsometric Integral
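As a quick illustration of one of these measures (a sketch, not the toolbox code itself): the Topographic Position Index for a cell is simply its elevation minus the mean elevation of a surrounding neighborhood, so positive values indicate ridges and negative values valleys.

```python
# TPI sketch over a plain nested-list DEM; radius=1 gives a 3x3 neighborhood.
def tpi(dem, r, c, radius=1):
    rows, cols = len(dem), len(dem[0])
    vals = [dem[i][j]
            for i in range(max(0, r - radius), min(rows, r + radius + 1))
            for j in range(max(0, c - radius), min(cols, c + radius + 1))
            if (i, j) != (r, c)]
    return dem[r][c] - sum(vals) / len(vals)

dem = [[10, 10, 10],
       [10, 16, 10],
       [10, 10, 10]]
print(tpi(dem, 1, 1))   # 6.0 -> the centre sits above its neighbourhood
```

The multiscale variant repeats this at several neighborhood radii; the toolbox tools operate on rasters rather than lists, but the arithmetic is the same.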
Pockmarks are depressions on the seabed, usually formed by fluid expulsions. Recently discovered pockmarks along the Aquitaine slope within the French EEZ were manually mapped; two semi-automated methods were also tested, without convincing results. In order to potentially highlight different groups, and possibly discriminate the nature of the fluids involved in their formation and evolution, a morphological study was conducted, mainly based on multibeam data and in particular bathymetry from the marine expedition GAZCOGNE1 (2013). Bathymetry and seafloor backscatter data, covering more than 3,200 km², were acquired with the Kongsberg EM302 ship-borne multibeam echosounder of the R/V Le Suroît at a speed of ~8 knots, operated at a frequency of 30 kHz and calibrated with ©Sippican shots. Precision of seafloor backscatter amplitude is +/- 1 dB. Multibeam data, processed using Caraibes (©IFREMER), were gridded at 15x15 m and down to 10x10 m cells for bathymetry and seafloor backscatter, respectively. The present table includes 11 morphological attributes extracted from a Geographical Information System project (Mercator 44°N conserved latitude in WGS84 datum) and additional parameters related to seafloor backscatter amplitudes. Pockmark occurrence with regard to the different morphological domains is derived from a morphological analysis performed manually and based on the GAZCOGNE1 and BOBGEO2 bathymetric datasets. The pockmark area and its perimeter were calculated with the “Calculate Geometry” tool of ArcMap 10.2 (©ESRI) (https://desktop.arcgis.com/en/arcmap/10.3/manage-data/tables/calculating-area-length-and-other-geometric-properties.htm). A first method to calculate pockmark internal depth, developed by Gafeira et al., was tested (Gafeira J, Long D, Diaz-Doce D (2012) Semi-automated characterisation of seabed pockmarks in the central North Sea. Near Surface Geophysics 10 (4):303-315, doi:10.3997/1873-0604.2012018).
This method is based on the “Fill” function from the Hydrology toolset in the Spatial Analyst toolbox of ArcMap 10.2 (©ESRI) (https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/fill.htm), which fills closed depressions. The difference between the filled bathymetry and the initial bathymetry produces a raster grid highlighting only the filled depressions. From this, only the maximum filling values, which correspond to the internal depths at the apex of each pockmark, were extracted. For the second method, the internal pockmark depth was calculated as the difference between the minimum and maximum bathymetry within the pockmark. Latitude and longitude of the pockmark centroid, minor and major axis lengths, and major axis direction of the pockmarks were calculated inside each depression with the “Zonal Geometry as Table” tool from the Spatial Analyst toolbox in ArcGIS 10.2 (©ESRI) (https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/zonal-statistics.htm). Pockmark elongation was calculated as the ratio between the major and minor axis lengths. Cell count is the number of cells used inside each pockmark to calculate statistics (https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/zonal-geometry.htm). Cell count and minimum, maximum, and mean bathymetry, slope, and seafloor backscatter values were calculated within each pockmark with the “Zonal Statistics as Table” tool from the Spatial Analyst toolbox in ArcGIS 10.2 (©ESRI). Slope was calculated from bathymetry with the “Slope” function from the Spatial Analyst toolbox in ArcGIS 10.2 (©ESRI) and preserves its 15 m grid size (https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/slope.htm). Seafloor backscatter amplitudes (minimum, maximum, and mean values) of the surrounding sediments were calculated within a 100 m buffer around the pockmark rim.
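The fill-difference method described above can be sketched in Python without ArcGIS. The depression-filling function below is a minimal priority-flood implementation, a standalone stand-in for the “Fill” tool rather than the code used in the study; the internal depth is then the maximum of (filled minus initial) bathymetry.

```python
import heapq

def fill_depressions(grid):
    """Fill closed depressions in a 2D grid (priority-flood sketch)."""
    rows, cols = len(grid), len(grid[0])
    filled = [[None] * cols for _ in range(rows)]
    heap = []
    # Seed the priority queue with all border cells at their own elevation.
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                filled[r][c] = grid[r][c]
                heapq.heappush(heap, (grid[r][c], r, c))
    # Grow inward from the lowest outlet; an interior cell cannot drain
    # below the lowest rim it is reached through.
    while heap:
        level, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and filled[nr][nc] is None:
                filled[nr][nc] = max(grid[nr][nc], level)
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled

# Toy bathymetry (metres, negative = below sea level) with one pockmark.
bathy = [[-100, -100, -100],
         [-100, -104, -100],
         [-100, -100, -100]]
filled = fill_depressions(bathy)
depth = max(filled[r][c] - bathy[r][c] for r in range(3) for c in range(3))
print(depth)   # 4 -> internal depth at the pockmark apex
```

The second method reduces to maximum minus minimum bathymetry within the pockmark outline; for this toy grid both methods give 4 m.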
Introduction
IRWIN ArcGIS Online GeoPlatform Services: The Integrated Reporting of Wildland-Fire Information (IRWIN) production data is replicated every 60 seconds to the ArcGIS Online GeoPlatform organization so that read-only views can be provided for consumers. This replicated view is called the hosted datastore. The “IRWIN Data” group is a set of Feature Layer views based on the replicated IRWIN layers. These feature layers provide a near real-time feed of all valid IRWIN data. All incidents that have been shared through the integration service since May 20, 2014 are available through this service. The incident data provides the location of existing fires, their size, conditions, and several other attributes that help classify fires. The IRWIN Data service allows users to create a web map, share it with their organization, or pull it into ArcMap or ArcGIS Pro for more in-depth analysis.

Instructions
To allow emergency management GIS staff to join the IRWIN Data group, they will need to set up an ArcGIS Online account through our account manager. Please send the response to Samantha Gibbes (Samantha.C.Gibbes@saic.com) and Kayloni Ahtong (kayloni_ahtong@ios.doi.gov). Use the template below and fill in each part as best as possible, where the point of contact (POC) is the person responsible for the account.

Reply email body: The (name of application) application requests the following user account and access to the IRWIN Data group.
POC Name: First name, last name, and title
POC Email:
Username: <>_irwin (choose a short username followed by _irwin)
Business Justification:

Once you are set up with the account, I will coordinate a call to go over any questions.
A style containing 34 assorted 3D people models for use in large-scale visualizations, providing vertical context. To use Match Layer Symbology to Style in ArcGIS Pro, populate a person_type text field to match the values shown below. Next, copy these values to a table, then join the height value(s) to the people points for use in pop-ups or charts.

person_type name height_m height_feet height_inches
Man 1 Gerald 1.7899 5 10.47
Man 2 Ethan 1.8879 6 2.33
Man 3 Cliff 1.7015 5 6.99
Man 4 Dustin 1.7965 5 10.73
Man 5 Jorge 1.8787 6 1.96
Man 6 Phillip 1.6752 5 5.95
Man 7 Dmitri 1.71 5 7.32
Man 8 Luke 1.793 5 10.59
Man 9 Carlos 1.7028 5 7.04
Man 10 Jimmy 1.7625 5 9.39
Man 11 Helmut 1.8331 6 0.17
Man 12 Guy 1.812 5 11.34
Man 13 Leon 1.8219 5 11.73
Man 14 Matthias 1.753 5 9.02
Man 15 Kendrick 1.8787 6 1.96
Man 16 Seth 1.8272 5 11.94
Man 17 Gomer 1.8982 6 2.73
Man 18 Robert 1.7853 5 10.29
Man 19 Jack 1.779 5 10.04
Man 20 Andy 1.8794 6 1.99
Man 21 Hamish 1.67 5 5.75
Man 22 Felix 1.86 6 1.23
Man 23 Adrian 1.75 5 8.90
Woman 1 Greta 1.5371 5 0.52
Woman 2 Simone 1.6366 5 4.43
Woman 3 Alison 1.679 5 6.10
Woman 4 Felicia 1.7433 5 8.63
Woman 5 Jessica 1.7322 5 8.20
Woman 6 Claire 1.6405 5 4.59
Woman 7 Maude 1.7795 5 10.06
Woman 8 Jenny 1.659 5 5.31
Woman 9 Diane 1.67 5 5.75
Woman 10 Carla 1.75 5 8.90
Woman 11 Lauren 1.69 5 6.54
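The height_feet and height_inches columns follow from height_m by the standard inch conversion. A quick sketch to reproduce or verify them (assumes 1 in = 0.0254 m and two-decimal rounding, which matches the table):

```python
# Convert a metric height to (whole feet, remaining inches to 2 decimals).
def metres_to_ft_in(h_m):
    total_inches = h_m / 0.0254        # 1 inch = 0.0254 m exactly
    feet = int(total_inches // 12)
    inches = round(total_inches - feet * 12, 2)
    return feet, inches

print(metres_to_ft_in(1.7899))   # (5, 10.47) -> matches Gerald's row
print(metres_to_ft_in(1.5371))   # (5, 0.52)  -> matches Greta's row
```

This is handy if you keep only height_m in the joined table and derive the imperial columns in a pop-up expression instead.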
This 3D model of Mount Saint Helens shows the topography using wood-textured contours set at 50 m vertical spacing, with the darker wood grain color indicating the major contours at 1000, 1500, 2000, and 2500 meters above sea level. The state of the mountain before the eruption of May 18, 1980 is shown with thinner contours, allowing you to see the volume of rock that was ejected via the lateral blast. The process to create the contours uses CityEngine and ArcGIS Pro for data processing, symbolization, and publishing. The steps:
Create a rectangular AOI polygon and use the Clip Raster tool on your local terrain raster. A 30 m DEM was used for before, 10 m for after.
Run the Contour tool on the clipped raster, using the polygon output option; 50 m was used for this scene.
Run the Smooth Polygon tool on the contours. For Mount St. Helens, I used the PAEK algorithm with a 200 m smoothing tolerance. Depending on the resolution of the elevation raster and the extent of the AOI, a larger or smaller value may be needed.
Write a CityEngine rule (see below) that extrudes and textures each contour polygon to create a stair-stepped 3D contour map. Provide multiple wood texture options with parameters for grain size, grain rotation, and extrusion height (to account for different contour depths if values other than 100 m are used), plus a hook for the rule to read the ContourMax attribute created by the Contour tool.
Export the CityEngine rule as a Rule Package (*.rpk).
Add some extra features for context: a wooden planter box to hide some of the edges of the model, and water bodies.
Apply the CityEngine-authored RPK to the contour polygons in ArcGIS Pro as a procedural fill symbol, and adjust parameters for the desired look & feel.
Run the Layer 3D to Feature Class tool to convert the procedural fill to multipatch features.
Share the web scene.

Rather than create a more complicated CityEngine rule that applies textures for light/dark wood colors for minor/major contours, I just created complete light- and dark-wood versions of the mountain (and one with just the water), then shuffled them together. Depending on where this methodology is applied, you may want to clip out other areas, for example glaciers, roads, or rivers, or add annotation by inlaying a small north arrow in the corner of the map. I like the challenge of representing any feature in this scene in terms of wood colors and grains: some extruded, some recessed, some inlaid flat.
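The stair-stepped extrusion boils down to quantizing each elevation down to its contour interval. A short sketch (the CityEngine rule itself is not shown in the description above; the 500 m major-contour spacing below is inferred from the 1000/1500/2000/2500 m majors listed, and is an assumption):

```python
INTERVAL = 50        # contour interval used for this scene, in metres
MAJOR_EVERY = 500    # assumed spacing of the dark-grain major contours

def contour_step(elevation):
    """Return the base elevation of the stair a cell sits on, and whether
    that stair is a major (dark-grain) contour."""
    step = (elevation // INTERVAL) * INTERVAL
    return step, step % MAJOR_EVERY == 0

print(contour_step(2549))   # (2500, True)  -> a dark-grain major step
print(contour_step(1874))   # (1850, False) -> a light-grain minor step
```

In the actual workflow this quantization is done by the Contour tool, and the major/minor distinction was handled by building separate light- and dark-wood versions rather than in the rule.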
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This layer contains the fire perimeters from the previous calendar year, and those dating back to 1878, for California. Perimeters are sourced from the Fire and Resource Assessment Program (FRAP) and are updated shortly after the end of each calendar year. Information below is from the FRAP web site. There is also a tile cache version of this layer.
About the Perimeters in this Layer
Initially CAL FIRE and the USDA Forest Service jointly developed a fire perimeter GIS layer for public and private lands throughout California. The data covered the period 1950 to 2001 and included USFS wildland fires 10 acres and greater, and CAL FIRE fires 300 acres and greater. BLM and NPS joined the effort in 2002, collecting fires 10 acres and greater. Also in 2002, CAL FIRE’s criteria expanded to include timber fires 10 acres and greater in size, brush fires 50 acres and greater in size, grass fires 300 acres and greater in size, wildland fires destroying three or more structures, and wildland fires causing $300,000 or more in damage. As of 2014, the monetary requirement was dropped and the damage requirement is 3 or more habitable structures or commercial structures.
In 1989, CAL FIRE units were requested to fill in gaps in their fire perimeter data as part of the California Fire Plan. FRAP provided each unit with a preliminary map of 1950-89 fire perimeters. Unit personnel also verified the pre-1989 perimeter maps to determine if any fires were missing or should be re-mapped. Each CAL FIRE Unit then generated a list of 300+ acre fires that started since 1989 using the CAL FIRE Emergency Activity Reporting System (EARS). The CAL FIRE personnel used this list to gather post-1989 perimeter maps for digitizing. The final product is a statewide GIS layer spanning the period 1950-1999.
CAL FIRE has completed inventory for the majority of its historical perimeters back to 1950. BLM fire perimeters are complete from 2002 to the present. The USFS has submitted records as far back as 1878. The NPS records date to 1921.
About the Program
FRAP compiles fire perimeters and has established an on-going fire perimeter data capture process. CAL FIRE, the United States Forest Service Region 5, the Bureau of Land Management, and the National Park Service jointly develop the fire perimeter GIS layer for public and private lands throughout California at the end of the calendar year. Upon release, the data is current as of the last calendar year.
The fire perimeter database represents the most complete digital record of fire perimeters in California. However, it is still incomplete in many respects. Fire perimeter database users must exercise caution to avoid inaccurate or erroneous conclusions. For more information on potential errors and their source, please review the methodology section of these pages.
The fire perimeters database is an Esri ArcGIS file geodatabase with three data layers (feature classes).
There are many uses for fire perimeter data. For example, it is used on incidents to locate recently burned areas that may affect fire behavior, among other uses.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Important Note: This item is in mature support as of February 2023 and will be retired in December 2025. A new version of this item is available for your use. Esri recommends updating your maps and apps to use the new version.

This layer displays change in pixels of the Sentinel-2 10m Land Use/Land Cover product developed by Esri, Impact Observatory, and Microsoft. Available years to compare with 2021 are 2018, 2019, and 2020. By default, the layer shows all comparisons together, in effect showing what changed 2018-2021, but the layer may be changed to show one of three specific pairs of years: 2018-2021, 2019-2021, or 2020-2021.

Showing just one pair of years in ArcGIS Online Map Viewer
To show just one pair of years in ArcGIS Online Map Viewer, create a filter. 1. Click the filter button. 2. Click add expression. 3. In the expression dialogue, specify a pair of years with the ProductName attribute. For example, to show only places that changed between 2020 and 2021, use the expression: ProductName is 2020-2021. By default, places that do not change appear as a transparent symbol in ArcGIS Pro, but in ArcGIS Online Map Viewer a transparent symbol may need to be set for these places after a filter is chosen. To do this: 4. Click the styles button. 5. Under unique values, click style options. 6. Click the symbol next to No Change at the bottom of the legend. 7. Click the slider next to "enable fill" to turn the symbol off.

Showing just one pair of years in ArcGIS Pro
To show just one pair of years in ArcGIS Pro, choose one of the layer's processing templates to single out a particular pair of years. The processing template applies a definition query that works in ArcGIS Pro. 1. Right click the layer in the ArcGIS Pro table of contents and choose properties. 2. In the dialogue that comes up, choose the processing templates tab. 3. On the right, under processing template, choose the pair of years you would like to display. The processing template stays applied for any analysis you may want to perform as well.

How the change layer was created, combining LULC classes from two years
Impact Observatory, Esri, and Microsoft used artificial intelligence to classify the world into 10 Land Use/Land Cover (LULC) classes for the years 2017-2021. Mosaics serve the following sets of change rasters in a single global layer: change between 2018 and 2021, change between 2019 and 2021, and change between 2020 and 2021. To make this change layer, Esri used an arithmetic operation combining the cells from a source year and 2021 to make a change index value: ((from year * 16) + to year). In the example of the change between 2020 and 2021, the from year (2020) was multiplied by 16, then added to the to year (2021). The combined number is served as an index in an 8-bit unsigned mosaic with an attribute table that describes what changed or did not change in that timeframe.

Variable mapped: change in land cover between 2018, 2019, or 2020 and 2021
Data projection: Universal Transverse Mercator (UTM)
Mosaic projection: WGS84
Extent: Global
Source imagery: Sentinel-2
Cell size: 10 m (0.00008983152098239751 degrees)
Type: Thematic
Source: Esri Inc.
Publication date: January 2022

What can you do with this layer?
Global LULC maps provide information on conservation planning, food security,
and hydrologic modeling, among other things. This dataset can be used to visualize land cover anywhere on Earth. This layer can also be used in analyses that require land cover input. For example, the Zonal Statistics tools allow a user to understand the composition of a specified area by reporting the total estimates for each of the classes.

Land cover processing
This map was produced by a deep learning model trained using over 5 billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map.

Processing platform
Sentinel-2 L2A/B data was accessed via Microsoft’s Planetary Computer and scaled using Microsoft Azure Batch.

Class definitions
1. Water: Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, no rock outcrop nor built up features like docks. Examples: rivers, ponds, lakes, oceans, flooded salt plains.
2. Trees: Any significant clustering of tall (~15 m or higher) dense vegetation, typically with a closed or dense canopy. Examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamp or mangroves (dense/tall vegetation with ephemeral water or canopy too thick to detect water underneath).
4. Flooded vegetation: Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded area that is a mix of grass/shrub/trees/bare ground. Examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture.
5. Crops: Human planted/plotted cereals, grasses, and crops not at tree height. Examples: corn, wheat, soy, fallow plots of structured land.
7. Built area: Human-made structures; major road and rail networks; large homogenous impervious surfaces including parking structures, office buildings and residential housing. Examples: houses, dense villages/towns/cities, paved roads, asphalt.
8. Bare ground: Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with no to little vegetation. Examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines.
9. Snow/ice: Large homogenous areas of permanent snow or ice, typically only in mountain areas or highest latitudes. Examples: glaciers, permanent snowpack, snow fields.
10. Clouds: No land cover information due to persistent cloud cover.
11. Rangeland: Open areas covered in homogenous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field). Examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures. Also: mix of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock; scrub-filled clearings within dense forests that are clearly not taller than trees. Examples: moderate to sparse cover of bushes, shrubs and tufts of grass, savannas with very sparse grasses, trees or other plants.

Citation
Karra, Kontgis, et al. “Global land use/land cover with Sentinel-2 and deep learning.” IGARSS 2021-2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.

Acknowledgements
Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.

For questions please email environment@esri.com
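The change-index arithmetic described earlier (((from year * 16) + to year)) can be sketched in a few lines. Since the mosaic is 8-bit unsigned, the four-digit years presumably enter the formula as small codes rather than literal years; the base year 2017 used below is an assumption for illustration, not something documented by this item.

```python
BASE_YEAR = 2017   # assumption: years stored as offsets so the index fits in 8 bits

def encode_change(from_year, to_year):
    """Combine a from/to year pair into a single change-index value."""
    return (from_year - BASE_YEAR) * 16 + (to_year - BASE_YEAR)

def decode_change(index):
    """Recover the from/to year pair from a change-index value."""
    from_code, to_code = divmod(index, 16)
    return BASE_YEAR + from_code, BASE_YEAR + to_code

idx = encode_change(2020, 2021)
print(idx, decode_change(idx))   # 52 (2020, 2021)
```

Multiplying by 16 shifts the from-year code into the high nibble, so the attribute table can describe every (from, to) pair with a unique 8-bit value.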
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset consists of nitrogen dioxide, meteorological data, and traffic data from January to June 2019, generated taking into account the spatial distribution of the monitoring stations. Using the ArcGIS Pro software, a grid was created (top: 4,486,449.725263 m; bottom: 4,466,449.725263 m; left: 434,215.234430 m; right: 451,215.234430 m) with a cell size of 1000 m in width and height. There are 340 cells in total (20 by 17). Each cell value includes nitrogen dioxide, meteorological, and traffic attributes from the assigned stations at a certain time; cells without stations were assigned a value of zero. The generated grid was exported as Comma Separated Values (CSV) files: overall, 4,344 CSV files were generated, one per hour during the first six months of 2019. Meteorological data include ultraviolet radiation, wind speed, wind direction, temperature, relative humidity, barometric pressure, solar irradiance, and precipitation; traffic data include intensity, occupation, load, and average speed. The datasets have an hourly rate. The data were obtained from the Open Data portal of the Madrid City Council. There are 24 air quality monitoring stations, 26 meteorological monitoring stations, and more than 4,000 traffic measurement points (the locations of the measurement points were provided on a monthly basis, as these points changed monthly).
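The grid dimensions quoted above can be checked directly from the stated bounds and cell size:

```python
# Grid bounds as given in the description (UTM-style metric coordinates).
LEFT, RIGHT = 434_215.234430, 451_215.234430
BOTTOM, TOP = 4_466_449.725263, 4_486_449.725263
CELL = 1000.0   # cell width and height in metres

cols = round((RIGHT - LEFT) / CELL)
rows = round((TOP - BOTTOM) / CELL)
print(cols, rows, cols * rows)   # 17 20 340
```

The 17 columns by 20 rows match the "340 cells in total (20 by 17)" stated above, and 181 days x 24 hours gives the 4,344 hourly CSV files.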
Example extract, transform, and load (ETL) framework, with comments and print statements, for developing a script using the "run tools in Pro and copy script to a file" method to assist in NG911 transition by transforming and loading local addresses and road centerlines into the NENA Next Generation 9-1-1 GIS Data Model schema. Created on 20220915 as a supplement to the "Supporting Extract, Transform, and Load Development for Next Generation 9-1-1" presentation delivered at GIS-Pro 2022. Originally developed by Matt Gerike, Virginia Geographic Information Network, September 2022. Parity logic contributed by Charles Grant, City of Salem, Virginia, March 2021. See here for resources and context about using the NG9-1-1 GIS data model templates. Additional resources and recommendations on GIS-related topics are available on the VGIN 9-1-1 & GIS page.
https://spdx.org/licenses/CC0-1.0.htmlhttps://spdx.org/licenses/CC0-1.0.html
Rising sea levels (SLR) will cause coastal groundwater to rise in many coastal urban environments. Inundation of contaminated soils by groundwater rise (GWR) will alter the physical, biological, and geochemical conditions that influence the fate and transport of existing contaminants. These transformed products can be more toxic and/or more mobile under future conditions driven by SLR and GWR. We reviewed the vulnerability of contaminated sites to GWR in a US national database and in a case comparison with the San Francisco Bay region to estimate the risk of rising groundwater to human and ecosystem health. The results show that 326 sites in the US Superfund program may be vulnerable to changes in groundwater depth or flow direction as a result of SLR, representing 18.1 million hectares of contaminated land. In the San Francisco Bay Area, we found that GWR is predicted to impact twice as much coastal land area as inundation from SLR alone, and 5,297 state-managed sites of contamination may be vulnerable to inundation from GWR in a 1-meter SLR scenario. Increases of only a few centimeters of elevation can mobilize soil contaminants, alter flow directions in a heterogeneous urban environment with underground pipes and utility trenches, and result in new exposure pathways. Pumping for flood protection will elevate the salt water interface, changing groundwater salinity and mobilizing metals in soil. Socially vulnerable communities are more exposed to this risk at both the national scale and in a regional comparison with the San Francisco Bay Area.

Methods
This dataset includes data from the California State Water Resources Control Board (WRCB), the California Department of Toxic Substances Control (DTSC), the USGS, the US EPA, and the US Census.

National Assessment Data Processing: For this portion of the project, the ArcGIS Pro and RStudio software applications were used.
Data processing for superfund site contaminants in the text and supplementary materials was done in RStudio using the R programming language. RStudio and R were also used to clean population data from the American Community Survey (ACS). Packages used include dplyr, data.table, and tidyverse to clean and organize data from the EPA and ACS. ArcGIS Pro was used to compute spatial data regarding sites in the risk zone and vulnerable populations. DEM data processed for each state removed any elevation data above 10 m, keeping anything 10 m and below. The Intersection tool was used to identify superfund sites within the 10 m sea level rise risk zone. The Calculate Geometry tool was used to calculate the area within each coastal state occupied by the 10 m SLR zone, and used again to calculate the area of each superfund site. Summary Statistics were used to generate the total proportion of superfund site surface area to 10 m SLR area for each state. To generate population estimates of socially vulnerable households in proximity to superfund sites, we followed methods similar to those of Carter and Kalman (2020). First, we generated buffers at 1 km, 3 km, and 5 km distances from superfund sites. Then, using Tabulate Intersection, the estimated population of each census block group within each buffer zone was calculated. Summary Statistics were used to generate total numbers for each state. Bay Area Data Processing: In this regional study, we compared the groundwater elevation projections by Befus et al. (2020) to a combined dataset of contaminated sites that we built from two separate databases (EnviroStor and GeoTracker) that are maintained by two independent agencies of the State of California (DTSC and WRCB). We used ArcGIS to manage both the groundwater surfaces, as raster files, from Befus et al. (2020) and the State’s point datasets of street addresses for contaminated sites.
We used SF BCDC (2020) as the source of social vulnerability rankings for census blocks, using block shapefiles from the US Census (ACS) dataset. In addition, we generated isolines that represent the magnitude of change in groundwater elevation in specific sea level rise scenarios. We compared these isolines of change in elevation to the USGS geological map of the San Francisco Bay region and noted that groundwater is predicted to rise farther inland where Holocene paleochannels meet artificial fill near the shoreline. We also used maps of historic baylands (altered by dikes and fill) from the San Francisco Estuary Institute (SFEI) to identify the number of contaminated sites over rising groundwater that are located on former mudflats and tidal marshes. The contaminated sites' data from the California State Water Resources Control Board (WRCB) and the Department of Toxic Substances (DTSC) was clipped to our study area of nine-bay area counties. The study area does not include the ocean shorelines or the north bay delta area because the water system dynamics differ in deltas. The data was cleaned of any duplicates within each dataset using the Find Identical and Delete Identical tools. Then duplicates between the two datasets were removed by running the intersect tool for the DTSC and WRCB point data. We chose this method over searching for duplicates by name because some sites change names when management is transferred from DTSC to WRCB. Lastly, the datasets were sorted into open and closed sites based on the DTSC and WRCB classifications which are shown in a table in the paper's supplemental material. To calculate areas of rising groundwater, we used data from the USGS paper “Projected groundwater head for coastal California using present-day and future sea-level rise scenarios” by Befus, K. M., Barnard, P., Hoover, D. J., & Erikson, L. (2020). We used the hydraulic conductivity of 1 condition (Kh1) to calculate areas of rising groundwater. 
We used the Raster Calculator to subtract the existing groundwater head from the groundwater head under a 1-meter sea level rise scenario to find the areas where groundwater is rising. Using the Reclass Raster tool, we reclassified the data to give every cell with a value of 0.1016 meters (4 in) or greater a value of 1. We chose 0.1016 m because even that small a rise in groundwater can leach into pipes and infrastructure. We then used the Raster to Poly tool to generate polygons of areas of groundwater rise.
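The subtraction and reclassification steps above can be sketched with plain nested lists standing in for the raster grids (an illustration of the logic only; the study itself used the ArcGIS Raster Calculator and reclass tools):

```python
RISE_THRESHOLD = 0.1016   # metres (4 in); small rises can already reach buried pipes

def groundwater_rise_mask(head_now, head_slr1m):
    """Subtract current head from the 1 m SLR head and flag cells
    rising by at least the threshold (1 = rising, 0 = not)."""
    return [[1 if (b - a) >= RISE_THRESHOLD else 0
             for a, b in zip(row_now, row_slr)]
            for row_now, row_slr in zip(head_now, head_slr1m)]

head_now   = [[2.0, 2.0], [2.0, 2.0]]
head_slr1m = [[2.5, 2.05], [2.11, 2.0]]
print(groundwater_rise_mask(head_now, head_slr1m))   # [[1, 0], [1, 0]]
```

The polygon conversion step then simply traces the contiguous 1-valued regions of this mask.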
ODC Public Domain Dedication and Licence (PDDL) v1.0
http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
This .zip file contains pre-configured files for members of the public to interact with Kendall County's public GIS layers in a desktop environment. Included are:
- An ArcGIS Pro package
- A QGIS project file
ArcGIS Pro requires an Esri license to use. See the ArcGIS Pro product page for more information. QGIS is free, open-source software that is available for a variety of computing environments. See the QGIS Downloads page to select the appropriate installation method. With the appropriate software installed, users can simply open the corresponding file. It may take a minute or two to load, due to the number of layers. Once loaded, users will have read-only access to all of the major public layers and can adjust how they are displayed. In a desktop environment, users can also create and interact with other data sources, such as private site plans, annotations, and other public data layers from non-County entities. Please note that the layers included in these packages are the same live data sources found in the web maps. An internet connection is required for these files to function properly.
This topo data was generated from lidar flown by USGS for the Salinas Watershed Basin from January through May 2018. Vertical accuracy is 10 cm; point spacing is 0.7 m. More information on the base lidar data can be found at https://viewer.nationalmap.gov/basic/ and https://inport.nmfs.noaa.gov/inport/item/48243. Using ArcGIS Pro v2.5, LAS files covering the City of Paso Robles were added to a LAS dataset (.lasd). Next, using the LAS Dataset to Raster tool, a DEM was created from ground returns only with a cell size of 10, an Interpolation Type of Binning, a Cell Assignment of Average, and a Void Fill Method of Linear. The DEM was then added to ArcMap 10.7 (to utilize the Spatial Analyst license), where the Contour tool was used to create 5 ft elevation contours. The contour data was then loaded back into ArcGIS Pro, where the Smooth Line tool was used to smooth the topo lines with a smoothing tolerance of 10 ft and the Polynomial Approximation with Exponential Kernel (PAEK) smoothing algorithm. https://www.usgs.gov/core-science-systems/ngp/3dep/about-3dep-products-services
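The Binning/Average interpolation used in the LAS Dataset to Raster step amounts to averaging the ground-return elevations that fall in each cell. A minimal sketch, with illustrative coordinates and cell size rather than the Paso Robles parameters, and without the Linear void-fill step:

```python
# Binning/Average DEM sketch: group ground points by cell, average their
# elevations. Void fill (interpolating empty cells) is omitted here.
from collections import defaultdict

def bin_points_to_dem(points, cell_size):
    """points: iterable of (x, y, z) ground returns.
    Returns a dict mapping (col, row) cell indices to mean elevation."""
    sums = defaultdict(lambda: [0.0, 0])
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        sums[key][0] += z
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

pts = [(1.0, 1.0, 100.0), (2.0, 1.5, 102.0), (12.0, 3.0, 110.0)]
print(bin_points_to_dem(pts, cell_size=10))
# {(0, 0): 101.0, (1, 0): 110.0} -- two points averaged in cell (0, 0)
```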
Data on aggregated radon test results in residential properties from January 1994 to December 2024 within each Vermont municipality. Radon data can inform public health outreach, educate stakeholders and the public, and encourage testing and mitigation. View this data in the Department of Health's radon risk map. Radon is a naturally occurring radioactive gas estimated to cause 50 lung cancer deaths among Vermonters each year. Radon can only be detected by testing, and buildings with elevated radon levels (≥4 pCi/L (picocuries per liter)) are found throughout the state. The average level of radon in Vermont homes is 2.4 pCi/L, compared with the national average of 1.3 pCi/L. The EPA recommends that homes testing at or above 4 pCi/L be fixed, but as there is no known safe level of radon, the EPA suggests that homes testing between 2 and 4 pCi/L should also be fixed. This dataset contains the Environmental Health Radon Program’s radon-in-air long-term test data from 1994-2024, and the Vermont Department of Health Laboratory’s radon-in-air short-, medium-, and long-term test data for 2020-2024. Data have been geocoded and aggregated to the town level to display the number and percent of residences tested by town, and the number and percent of tested residences that exceed 4 pCi/L by town.
Data Source
Source data for these maps come from the highest radon test result ever found at a residence (many residences test more than once). Results are provided by the Radon Program long-term test data (1994-2024) and the Vermont Department of Health Laboratory short-, medium-, and long-term test data (2020-2024). Radon results are aggregated by town based on whether the result was elevated (≥4.0 pCi/L) or not elevated (<4.0 pCi/L).
Data Limitations
Prison, institutional residence, and nursing home E911 locations are not included in the aggregation of residences by town or geological area.
For areas of low population density or low numbers of tests, data extremes carry more weight and can distort analytic results. Therefore, in the Rates of Radon Testing by Town, data for towns with fewer than 7 tested residences are not displayed, and in Elevated Radon Results, data for towns with fewer than 20 tested residences are not displayed.
Methodology
Record-level radon-in-indoor-air test results were extracted from the VDH-EH Radon database by Radon Program staff and from the LIMS system at the VDHL by laboratory staff. The Tracking analyst used SAS version 9.4 and ArcGIS Pro version 2.4.1 to process the data. Geocoded data from the Tracking program were used for the Radon Risk Maps. GIS work to populate the final maps was done collaboratively with partners from the Agency of Digital Services using ArcGIS Pro version 2.4.1. The residential data are from the VT Data – E911 Site Locations (address points), where the following were selected from the SITETYPE variable: mobile home, multi-family dwelling, other residential, single-family dwelling, residential farm, seasonal home, commercial with residence, condominium, and camp. The residential data in these maps are aggregated by town and geological area to provide the denominator for calculations.
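The aggregation and small-number suppression rules above can be sketched as a short function. The test values are illustrative, and this mirrors only the published rules (highest-ever result per residence, the 4.0 pCi/L elevation threshold, and the 7- and 20-residence display cutoffs), not the SAS/ArcGIS processing itself.

```python
# Town-level radon summary sketch with small-number suppression:
# towns with < 7 tested residences get no testing rate displayed,
# and towns with < 20 get no elevated-result percentage displayed.

ELEVATED_PCI_L = 4.0

def summarize_town(results):
    """results: highest-ever pCi/L result for each tested residence in a town.
    Returns displayable counts, with None where the value is suppressed."""
    n = len(results)
    n_elevated = sum(1 for r in results if r >= ELEVATED_PCI_L)
    return {
        "tested": n if n >= 7 else None,
        "pct_elevated": round(100 * n_elevated / n, 1) if n >= 20 else None,
    }

print(summarize_town([1.2, 3.9, 4.0, 5.6, 2.4]))
# {'tested': None, 'pct_elevated': None} -- too few tests to display
```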
Defra Network WMS server provided by the Environment Agency. See the full dataset here. The Most Probable Overland Flow Pathway dataset is a polyline GIS vector dataset that describes the likely flow routes of water, along with potential accumulations of diffuse pollution and soil erosion features, over the land. It is a complete network for the entire country (England), produced from a hydro-enforced LIDAR 1-metre-resolution digital terrain model (bare-earth DTM) derived from the 2022 LIDAR Composite 1m Digital Terrain Model. Extensive processing of the data using auxiliary datasets (the Selected OS Water Network, OS MasterMap features, and some manual intervention) has resulted in a hydro-enforced DTM that significantly reduces the number of non-real-world obstructions in the DTM. Although it does not consider the infiltration potential of different land surfaces and soil types, it is instructive in broadly identifying potential problem areas in the landscape. The flow network is based upon theoretical one-hectare flow accumulations, meaning that any point along a network feature is likely to have a minimum of one hectare of land potentially contributing to it. Each segment is attributed with an estimate of the mean slope along it. The product comprises three vector datasets: Probable Overland Flow Pathways, Detailed Watersheds, and Ponding and Errors. Where Flow Direction Grids have been derived, the D8 option was applied. All processing was carried out using ArcGIS Pro’s Spatial Analyst Hydrology tools. Outlined below is a description of each of the feature classes.
Probable Overland Flow Pathways
The Probable Overland Flow Pathways layer is a polyline vector dataset that describes the probable locations of accumulation of water over the Earth’s surface, where it is assumed that there is no absorption of water through the soil. Every point along each of the features predicts an uphill contribution of a minimum of 1 hectare of land.
The hydro-enforced LIDAR Digital Terrain Model 1-Metre Composite (2022) has been used to derive this data layer. Every effort has been made to digitally unblock real-world drainage features; however, some blockages remain (e.g. culverts and bridges). In these places the flow pathways should be disregarded. The Ponding field can be used to identify these erroneous pathways; they are flagged in the Ponding field with a “1”. Flow pathways are also attributed with a mean slope value, which is calculated from the length and the difference of the start and end point elevations. The maximum uphill flow accumulation area is also indicated for each flow pathway feature.
Detailed Watersheds
The Detailed Watersheds layer is a polygon vector dataset that describes theoretical catchment boundaries derived from pour points extracted from every junction or node of a 1 km2 Flow Accumulation dataset. The hydro-enforced LIDAR Digital Terrain Model 1-Metre Composite (2022) has been used to derive this data layer.
Ponding and Errors
The Ponding and Errors layer is a polygon vector dataset that describes the presence of depressions in the landscape after the hydro-enforcing routine has been applied to the Digital Terrain Model. The Type field indicates whether a feature is Off-Line or On-Line. Off-Line is indicative of a feature that intersects with a watercourse and is likely to be an error in the Overland Flow Pathways. On-Line features do not intersect with watercourses and are more likely to be depressions in the landscape where standing water may accumulate. Only features greater than 100 m2 in area with a depth greater than 20 cm have been included. The layer was derived by filling the hydro-enforced DTM and then subtracting the hydro-enforced DTM from the filled hydro-enforced DTM. Please use with caution in very flat areas and areas with highly modified drainage systems (e.g. the fenlands of East Anglia and the Somerset Levels).
There will occasionally be errors associated with bridges, viaducts and culverts that could not be resolved by the hydro-enforcement process.
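The per-segment mean slope attribute described above (elevation difference between start and end points divided by segment length) can be sketched directly. The units here are percent slope as an assumption; the dataset's own attribute may use a different convention.

```python
# Mean slope along a flow pathway segment: |z_start - z_end| / length,
# expressed here as a percentage.

def mean_slope_percent(z_start, z_end, length_m):
    """Mean slope of a segment given endpoint elevations (m) and length (m)."""
    if length_m <= 0:
        raise ValueError("segment length must be positive")
    return abs(z_start - z_end) / length_m * 100.0

print(mean_slope_percent(52.0, 50.0, 100.0))  # 2.0 (% slope)
```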
CC0 1.0 Universal Public Domain Dedication
https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This layer contains the fire perimeters from the previous calendar year, and those dating back to 1878, for California. Perimeters are sourced from the Fire and Resource Assessment Program (FRAP) and are updated shortly after the end of each calendar year. Information below is from the FRAP web site. There is also a tile cache version of this layer.
About the Perimeters in this Layer
Initially CAL FIRE and the USDA Forest Service jointly developed a fire perimeter GIS layer for public and private lands throughout California. The data covered the period 1950 to 2001 and included USFS wildland fires 10 acres and greater, and CAL FIRE fires 300 acres and greater. BLM and NPS joined the effort in 2002, collecting fires 10 acres and greater. Also in 2002, CAL FIRE’s criteria expanded to include timber fires 10 acres and greater in size, brush fires 50 acres and greater in size, grass fires 300 acres and greater in size, wildland fires destroying three or more structures, and wildland fires causing $300,000 or more in damage. As of 2014, the monetary requirement was dropped, and the damage criterion became three or more habitable or commercial structures destroyed.
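The post-2014 CAL FIRE inclusion criteria listed above can be expressed as a small predicate. This is purely illustrative; the field names are hypothetical and do not reflect the FRAP schema.

```python
# Illustrative check of the post-2014 CAL FIRE mapping criteria: a fire
# qualifies if it destroyed 3+ habitable/commercial structures, or meets
# the acreage threshold for its vegetation type.

ACRE_THRESHOLDS = {"timber": 10, "brush": 50, "grass": 300}

def meets_cal_fire_criteria(veg_type, acres, structures_destroyed):
    if structures_destroyed >= 3:
        return True
    return acres >= ACRE_THRESHOLDS.get(veg_type, 300)

print(meets_cal_fire_criteria("brush", 60, 0))   # True (60 >= 50 acres)
print(meets_cal_fire_criteria("grass", 120, 0))  # False (120 < 300 acres)
```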
In 1989, CAL FIRE units were requested to fill in gaps in their fire perimeter data as part of the California Fire Plan. FRAP provided each unit with a preliminary map of 1950-89 fire perimeters. Unit personnel also verified the pre-1989 perimeter maps to determine if any fires were missing or should be re-mapped. Each CAL FIRE Unit then generated a list of 300+ acre fires that started since 1989 using the CAL FIRE Emergency Activity Reporting System (EARS). The CAL FIRE personnel used this list to gather post-1989 perimeter maps for digitizing. The final product is a statewide GIS layer spanning the period 1950-1999.
CAL FIRE has completed inventory for the majority of its historical perimeters back to 1950. BLM fire perimeters are complete from 2002 to the present. The USFS has submitted records as far back as 1878. The NPS records date to 1921.
About the Program
FRAP compiles fire perimeters and has established an on-going fire perimeter data capture process. CAL FIRE, the United States Forest Service Region 5, the Bureau of Land Management, and the National Park Service jointly develop the fire perimeter GIS layer for public and private lands throughout California at the end of the calendar year. Upon release, the data is current as of the last calendar year.
The fire perimeter database represents the most complete digital record of fire perimeters in California. However, it is still incomplete in many respects. Fire perimeter database users must exercise caution to avoid inaccurate or erroneous conclusions. For more information on potential errors and their source, please review the methodology section of these pages.
The fire perimeters database is an Esri ArcGIS file geodatabase with three data layers (feature classes):
There are many uses for fire perimeter data. For example, it is used on incidents to locate recently burned areas that may affect fire behavior.
Other uses include:
Statewide 2016 lidar points colorized with 2018 NAIP imagery as a scene created by Esri using ArcGIS Pro for the entire State of Connecticut. This service provides the colorized lidar points in interactive 3D for visualization, interaction, and the ability to make measurements without downloading. Lidar is referenced at https://cteco.uconn.edu/data/lidar/ and can be downloaded at https://cteco.uconn.edu/data/download/flight2016/. Metadata: https://cteco.uconn.edu/data/flight2016/info.htm#metadata. The Connecticut 2016 lidar was captured between March 11, 2016 and April 16, 2016. It covers 5,240 square miles and is divided into 23,381 tiles. It was acquired by the Capitol Region Council of Governments with funding from multiple state agencies. It was flown and processed by Sanborn. The delivery included classified point clouds and 1-meter QL2 DEMs. The 2016 lidar is published on the Connecticut Environmental Conditions Online (CT ECO) website. CT ECO is the collaborative work of the Connecticut Department of Energy and Environmental Protection (DEEP) and the University of Connecticut Center for Land Use Education and Research (CLEAR) to share environmental and natural resource information with the general public. CT ECO's mission is to encourage, support, and promote informed land use and development decisions in Connecticut by providing local, state and federal agencies, and the public with convenient access to the most up-to-date and complete natural resource information available statewide.
Process used:
Extract Building Footprints from Lidar
1. Prepare Lidar
- Download 2016 lidar from CT ECO.
- Create a LAS dataset.
2. Extract Building Footprints from Lidar
- Use the LAS dataset in the Classify LAS Building tool in ArcGIS Pro 2.4.
Colorize Lidar
Colorizing the lidar points means that each point in the point cloud is given a color based on the imagery color value at that exact location.
1. Prepare Imagery
- Acquire 2018 NAIP tif tiles from UConn (originally from USDA NRCS).
- Create a mosaic dataset of the NAIP imagery.
2. Prepare and Analyze Lidar Points
- Change the coordinate system of each of the lidar tiles to the Projected Coordinate System CT NAD 83 (2011) Feet (EPSG 6434). This is because the downloaded tiles come into ArcGIS with a custom projection, which cannot be published as a Point Cloud Scene Layer Package.
- Convert lidar to zLAS format and rearrange.
- Create LAS datasets of the lidar tiles.
- Colorize lidar using the Colorize LAS tool in ArcGIS Pro.
- Create a new LAS dataset divided into an eastern half and a western half, due to the size limitation of 500 GB per scene layer package.
- Create scene layer packages (.slpk) using Create Point Cloud Scene Layer Package.
- Load each package to ArcGIS Online using Share Package.
- Publish on ArcGIS.com and delete the scene layer package to save storage cost.
Additional layers added (visit https://cteco.uconn.edu/projects/lidar3D/layers.htm for a complete list and links):
- 3D Buildings and Trees extracted by Esri from the lidar
- Shaded Relief from CT ECO
- Impervious Surface 2012 from CT ECO
- NAIP Imagery 2018 from CT ECO
- Contours (2016) from CT ECO
- Lidar 2016 Download Link derived from https://www.cteco.uconn.edu/data/download/flight2016/index.htm
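Conceptually, colorizing a point cloud means looking up the imagery pixel under each point and attaching its RGB value, which is what the Colorize LAS tool does at scale. A minimal sketch, with an illustrative imagery grid and origin rather than the NAIP mosaic:

```python
# Point colorization sketch: each (x, y, z) point takes the RGB of the
# imagery pixel at its location. The image is a 2D grid of (r, g, b)
# tuples indexed image[row][col], with row 0 at the top (y_max).

def colorize_points(points, image, origin, pixel_size):
    """origin: (x_min, y_max), the upper-left corner of the imagery."""
    x_min, y_max = origin
    colored = []
    for x, y, z in points:
        col = int((x - x_min) // pixel_size)
        row = int((y_max - y) // pixel_size)
        colored.append((x, y, z) + image[row][col])
    return colored

img = [[(200, 180, 160), (50, 120, 60)],
       [(90, 90, 90), (30, 100, 40)]]
pts = [(0.5, 1.5, 10.0), (1.5, 0.5, 12.0)]
print(colorize_points(pts, img, origin=(0.0, 2.0), pixel_size=1.0))
# [(0.5, 1.5, 10.0, 200, 180, 160), (1.5, 0.5, 12.0, 30, 100, 40)]
```

Note the row index counts down from the imagery's top edge, matching the usual north-up raster convention.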