This personal geodatabase contains raster images of turbidity in the Gulf of Maine. These raster images are monthly composites, and were calculated as means or as medians. The image composites span from 9/1997 to 6/2005. These images were also reprocessed to remove land (value = 252) and no-data values (values = 0, 251, and 253), as well as to calculate the real-world values for turbidity in inverse steradians.
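The reprocessing described above amounts to masking the reserved 8-bit codes and applying a linear scale to recover physical units. A minimal NumPy sketch, assuming a hypothetical slope and intercept (the real scaling coefficients come from the sensor product documentation and are not given in this metadata):

```python
import numpy as np

# Hypothetical 8-bit composite: land = 252, no-data = 0, 251, 253.
raw = np.array([[12, 0, 252],
                [251, 80, 253]], dtype=np.uint8)

INVALID = [0, 251, 252, 253]
mask = np.isin(raw, INVALID)

# Placeholder linear scaling to physical units; SLOPE and INTERCEPT
# are illustrative stand-ins, not values from the actual product.
SLOPE, INTERCEPT = 0.001, 0.0
values = np.where(mask, np.nan, raw.astype(float) * SLOPE + INTERCEPT)
```

Masked pixels become NaN so that downstream statistics can ignore land and no-data areas.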
This personal geodatabase contains raster images of sea surface temperature (SST) in the Gulf of Maine. These raster images are monthly composites, and were calculated as means or as medians. The image composites span from 9/1997 to 6/2005. These images were also reprocessed to remove land (value = 252) and no-data values (value = 0), as well as to calculate the real-world values for sea surface temperature in degrees Celsius (C).
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This personal geodatabase contains raster images of turbidity in the Gulf of Maine. These raster images are a composite of several years (1997-2005) binned by season or by month, and were calculated as means or as medians. For those images binned by month, all of the months in the time series were averaged together to make one mean image. For example, Jan '98, Jan '99, Jan '00, Jan '01, Jan '02, Jan '03, Jan '04 = Grand Monthly Mean for January. For those images binned by season, the seasons were defined as follows: 1) Fall = September, October, November; 2) Winter = December, January, February; 3) Spring = March, April, May; 4) Summer = June, July, August. All turbidity GeoTIFFs binned by year and season were then averaged again to create a grand mean for each season. For example, spring '98, spring '99, spring '00, spring '01, spring '02, spring '03, spring '04 = Grand Seasonal Mean for Spring. These images were also reprocessed to remove land (value = 252) and no-data values (values = 0, 251, and 253), as well as to calculate the real-world values for turbidity in inverse steradians.
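The grand monthly mean described above is just a per-pixel average across years. A minimal NumPy sketch with made-up 2x2 January rasters, using NaN for pixels masked as land or no-data so they are ignored rather than dragging the mean down:

```python
import numpy as np

# Stack of hypothetical January composites, one 2x2 raster per year
# (1998-2004), with NaN marking masked land/no-data pixels.
januaries = np.stack([
    np.array([[1.0, 2.0], [np.nan, 4.0]]) + year_offset
    for year_offset in range(7)
])

# Grand Monthly Mean for January: average each pixel across years,
# ignoring NaN so a pixel masked in one year still contributes elsewhere.
grand_jan_mean = np.nanmean(januaries, axis=0)
```

A pixel masked in every year stays NaN in the grand mean; the same stacking works for the grand seasonal means.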
This personal geodatabase contains raster images of turbidity in the Gulf of Maine. These raster images are seasonal composites, and were calculated as means or as medians. The seasons were defined as follows: 1) Fall = September, October, November; 2) Winter = December, January, February; 3) Spring = March, April, May; 4) Summer = June, July, August. For example, June '98, July '98, August '98 = Turbidity Summer '98. These images were also reprocessed to remove land (value = 252) and no-data values (values = 0, 251, and 253), as well as to calculate the real-world values for turbidity in inverse steradians.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This personal geodatabase contains raster images of sea surface temperature (SST) in the Gulf of Maine. These raster images are a composite of several years (1997-2005) binned by season or by month, and were calculated as means or as medians. For those images binned by month, all of the months in the time series were averaged together to make one mean image. For example, Jan '98, Jan '99, Jan '00, Jan '01, Jan '02, Jan '03, Jan '04 = Grand Monthly Mean for January. For those images binned by season, the seasons were defined as follows: 1) Fall = September, October, November; 2) Winter = December, January, February; 3) Spring = March, April, May; 4) Summer = June, July, August. All SST GeoTIFFs binned by year and season were then averaged again to create a grand mean for each season. For example, spring '98, spring '99, spring '00, spring '01, spring '02, spring '03, spring '04 = Grand Seasonal Mean for Spring. These images were also reprocessed to remove land (value = 252) and no-data values (value = 0), as well as to calculate the real-world values for sea surface temperature in degrees Celsius (C).
This personal geodatabase contains raster images of sea surface temperature (SST) in the Gulf of Maine. These raster images are seasonal composites, and were calculated as means or as medians. The seasons were defined as follows: 1) Fall = September, October, November; 2) Winter = December, January, February; 3) Spring = March, April, May; 4) Summer = June, July, August. For example, June '98, July '98, August '98 = SST Summer '98. These images were also reprocessed to remove land (value = 252) and no-data values (value = 0), as well as to calculate the real-world values for sea surface temperature in degrees Celsius (C).
This personal geodatabase contains raster images of chlorophyll concentrations in the Gulf of Maine. These raster images are a composite of several years (1997-2005) binned by season or by month, and were calculated as means or as medians. For those images binned by month, all of the months in the time series were averaged together to make one mean image. For example, Jan '98, Jan '99, Jan '00, Jan '01, Jan '02, Jan '03, Jan '04 = Grand Monthly Mean for January. For those images binned by season, the seasons were defined as follows: 1) Fall = September, October, November; 2) Winter = December, January, February; 3) Spring = March, April, May; 4) Summer = June, July, August. All chlorophyll GeoTIFFs binned by year and season were then averaged again to create a grand mean for each season. For example, spring '98, spring '99, spring '00, spring '01, spring '02, spring '03, spring '04 = Grand Seasonal Mean for Spring. These images were also reprocessed to remove land (value = 252) and no-data values (values = 0, 251, and 253), as well as to calculate the real-world values for chlorophyll in micrograms per liter.
This personal geodatabase contains raster images of chlorophyll concentrations in the Gulf of Maine. These raster images are seasonal composites, and were calculated as means or as medians. The seasons were defined as follows: 1) Fall = September, October, November; 2) Winter = December, January, February; 3) Spring = March, April, May; 4) Summer = June, July, August. For example, June '98, July '98, August '98 = Chlorophyll Summer '98. These images were also reprocessed to remove land (value = 252) and no-data values (values = 0, 251, and 253), as well as to calculate the real-world values for chlorophyll in micrograms per liter.
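The season definitions repeated throughout these entries can be captured in a small lookup. A sketch in Python (note that Winter spans the calendar-year boundary, grouping each December with the following January and February):

```python
# Season definitions from the text: Fall = Sep-Nov, Winter = Dec-Feb,
# Spring = Mar-May, Summer = Jun-Aug.
SEASONS = {
    "Fall": (9, 10, 11),
    "Winter": (12, 1, 2),
    "Spring": (3, 4, 5),
    "Summer": (6, 7, 8),
}

def season_of(month: int) -> str:
    """Return the season name for a calendar month (1-12)."""
    for name, months in SEASONS.items():
        if month in months:
            return name
    raise ValueError(f"invalid month: {month}")
```

Grouping monthly composites by this function and averaging each group reproduces the seasonal composites described above.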
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
This personal geodatabase contains raster images of chlorophyll concentrations in the Gulf of Maine. These raster images are monthly composites, and were calculated as means or as medians. The image composites span from 9/1997 to 6/2005. These images were also reprocessed to remove land (value = 252) and no-data values (values = 0, 251, and 253), as well as to calculate the real-world values for chlorophyll in micrograms per liter.
This data release contains lake and reservoir water surface temperature summary statistics calculated from Landsat 8 Analysis Ready Data (ARD) images available within the Conterminous United States (CONUS) from 2013-2023. All zip files within this data release contain nested directories using .parquet files to store the data. The file example_script_for_using_parquet.R contains example code for using the R arrow package (Richardson and others, 2024) to open and query the nested .parquet files.
Limitations of this dataset include:
- All biases inherent to the Landsat Surface Temperature product are retained in this dataset, which can produce unrealistically high or low estimates of water temperature. This is observed to happen, for example, in cases with partial cloud coverage over a waterbody.
- Some waterbodies are split between multiple Landsat Analysis Ready Data tiles or orbit footprints. In these cases, multiple waterbody-wide statistics may be reported - one for each data tile. The deepest point values will be extracted and reported for the tile covering the deepest point. A total of 947 waterbodies are split between multiple tiles (see the multiple_tiles = “yes” column of site_id_tile_hv_crosswalk.csv).
- Temperature data were not extracted from satellite images with more than 90% cloud cover.
- Temperature data represent skin temperature at the water surface and may differ from temperature observations from below the water surface.
Potential methods for addressing limitations of this dataset:
- Identifying and removing unrealistic temperature estimates: calculate the total percentage of cloud pixels over a given waterbody as percent_cloud_pixels = wb_dswe9_pixels/(wb_dswe9_pixels + wb_dswe1_pixels), and filter percent_cloud_pixels by a desired percentage of cloud coverage.
- Remove lakes with a limited number of water pixel values available (wb_dswe1_pixels < 10).
- Filter to waterbodies where the deepest point is identified as water (dp_dswe = 1).
- Handling waterbodies split between multiple tiles: these waterbodies can be identified using the "site_id_tile_hv_crosswalk.csv" file (column multiple_tiles = “yes”). A user could combine sections of the same waterbody by spatially weighting the values using the number of water pixels available within each section (wb_dswe1_pixels). This should be done with caution, as some sections of the waterbody may have data available on different dates.
Files in this data release:
- "year_byscene=XXXX.zip" – includes temperature summary statistics for individual waterbodies and the deepest points (the furthest point from land within a waterbody) within each waterbody by the scene_date (when the satellite passed over). Individual waterbodies are identified by the National Hydrography Dataset (NHD) permanent_identifier included within the site_id column. Some of the .parquet files with the byscene datasets may only include one dummy row of data (identified by tile_hv="000-000"). This happens when no tabular data are extracted from the raster images because of clouds obscuring the image, a tile that covers mostly ocean with a very small amount of land, or other possible causes. An example file path for this dataset follows: year_byscene=2023/tile_hv=002-001/part-0.parquet
- "year=XXXX.zip" – includes the summary statistics for individual waterbodies and the deepest points within each waterbody by year (dataset=annual), month (year=0, dataset=monthly), and year-month (dataset=yrmon).
The year_byscene=XXXX datasets are used as input for generating the summary tables that aggregate temperature data by year, month, and year-month. Aggregated data are not available for the following tiles: 001-004, 001-010, 002-012, 028-013, and 029-012, because these tiles primarily cover ocean with limited land, and no output data were generated. An example file path for this dataset follows: year=2023/dataset=lakes_annual/tile_hv=002-001/part-0.parquet
- "example_script_for_using_parquet.R" – This script includes code to download zip files directly from ScienceBase, identify HUC04 basins within a desired Landsat ARD grid tile, download NHDPlus High Resolution data for visualizing, use the R arrow package to compile .parquet files in nested directories, and create example static and interactive maps.
- "nhd_HUC04s_ingrid.csv" – This crosswalk file identifies the HUC04 watersheds within each Landsat ARD tile grid.
- "site_id_tile_hv_crosswalk.csv" – This crosswalk file identifies the site_id (nhdhr{permanent_identifier}) within each Landsat ARD tile grid. This file also includes a column (multiple_tiles) to identify site_ids that fall within multiple Landsat ARD tile grids.
- "lst_grid.png" – A map of the Landsat grid tiles labelled by the horizontal-vertical ID.
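The release documents its access pattern with an R script (example_script_for_using_parquet.R); the screening steps suggested above can equally be applied after loading the tables into Python. A sketch using pandas on toy rows (the column names follow the text; the 50% cloud threshold is an arbitrary example, not a recommendation from the release):

```python
import pandas as pd

# Toy rows mimicking the byscene table columns described above; in
# practice these would come from the nested .parquet files (e.g., via
# pandas.read_parquet or pyarrow.dataset).
df = pd.DataFrame({
    "site_id": ["a", "b", "c"],
    "wb_dswe1_pixels": [200, 5, 150],   # water pixels
    "wb_dswe9_pixels": [20, 95, 600],   # cloud pixels
    "dp_dswe": [1, 1, 0],               # deepest point classed as water?
})

# Screening steps from the release notes:
df["percent_cloud_pixels"] = df["wb_dswe9_pixels"] / (
    df["wb_dswe9_pixels"] + df["wb_dswe1_pixels"]
)
clean = df[
    (df["percent_cloud_pixels"] < 0.5)   # example cloud threshold
    & (df["wb_dswe1_pixels"] >= 10)      # enough water pixels
    & (df["dp_dswe"] == 1)               # deepest point is water
]
```

Only site "a" survives all three filters in this toy example.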
This layer quantifies the yearly economic value of nitrogen removal in Maryland's forest and wetland areas. Economic values are based on a number of factors, including the average cost to remove nutrients using best management practices (BMPs), the amount of funds provided for the BMP cost-share program through the state and the Bay Restoration Fund, and the price on nutrient trading markets. The average value is $8.36 per lb of nitrogen. Urban and agricultural lands are particularly important nutrient sources, and forests and wetlands in watersheds with a high incidence of these land uses tend to have high values for the nitrogen removal service. This service totals $402.6 million per year for Maryland.
This data layer was created as part of the Maryland Department of Natural Resources "Accounting for Maryland's Ecosystem Services" program. This is an MD iMAP hosted service. Find more information at https://imap.maryland.gov. Map Service Link: https://mdgeodata.md.gov/imap/rest/services/Environment/MD_EcosystemServices/MapServer/17. Download the Ecosystem Services layers at: https://www.dropbox.com/s/e6ovfcc01dxvnmo/EcosystemServices.gdb.zip?dl=0
Reason for Selection
Protected natural areas in urban environments provide urban residents a nearby place to connect with nature and offer refugia for some species. They help foster a conservation ethic by providing opportunities for people to connect with nature, and also support ecosystem services like offsetting heat island effects (Greene and Millward 2017, Simpson 1998), water filtration, stormwater retention, and more (Hoover and Hopton 2019). In addition, parks, greenspace, and greenways can help improve physical and psychological health in communities (Gies 2006). Urban park size complements the equitable access to potential parks indicator by capturing the value of existing parks.
Input Data
- Southeast Blueprint 2024 extent
- FWS National Realty Tracts, accessed 12-13-2023
- Protected Areas Database of the United States (PAD-US): PAD-US 3.0 national geodatabase - Combined Proclamation Marine Fee Designation Easement, accessed 12-6-2023
- 2020 Census Urban Areas from the Census Bureau’s urban-rural classification; download the data, read more about how urban areas were redefined following the 2020 census
- OpenStreetMap data “multipolygons” layer, accessed 12-5-2023. A polygon from this dataset is considered a beach if the value in the “natural” tag attribute is “beach”. Data for coastal states (VA, NC, SC, GA, FL, AL, MS, LA, TX) were downloaded in .pbf format and translated to an ESRI shapefile using R code. OpenStreetMap® is open data, licensed under the Open Data Commons Open Database License (ODbL) by the OpenStreetMap Foundation (OSMF). Additional credit to OSM contributors.
Read more on the OSM copyright page.
- 2021 National Land Cover Database (NLCD): percent developed imperviousness
- 2023 NOAA coastal relief model: volumes 2 (Southeast Atlantic), 3 (Florida and East Gulf of America), 4 (Central Gulf of America), and 5 (Western Gulf of America), accessed 3-27-2024
Mapping Steps
- Create a seamless vector layer to constrain the extent of the urban park size indicator to inland and nearshore marine areas <10 m in depth. The deep offshore areas of marine parks do not meet the intent of this indicator to capture nearby opportunities for urban residents to connect with nature. Shallow areas are more accessible for recreational activities like snorkeling, which typically has a maximum recommended depth of 12-15 meters. This step mirrors the approach taken in the Caribbean version of this indicator.
- Merge all coastal relief model rasters (.nc format) together using the QGIS “create virtual raster” tool.
- Save the merged raster to .tif and import into ArcGIS Pro.
- Reclassify the NOAA coastal relief model data to assign areas with an elevation of land to -10 m a value of 1. Assign all other areas (deep marine) a value of 0.
- Convert the raster produced above to vector using the “RasterToPolygon” tool.
- Clip to 2024 subregions using the “Pairwise Clip” tool.
- Break apart multipart polygons using the “Multipart to single parts” tool.
- Hand-edit to remove the deep marine polygon.
- Dissolve the resulting data layer. This produces a seamless polygon defining land and shallow marine areas.
- Clip the Census urban area layer to the bounding box of NoData surrounding the extent of Southeast Blueprint 2024.
- Clip PAD-US 3.0 to the bounding box of NoData surrounding the extent of Southeast Blueprint 2024.
- Remove the following areas from PAD-US 3.0, which are outside the scope of this indicator to represent parks: all School Trust Lands in Oklahoma and Mississippi (Loc Des = “School Lands” or “School Trust Lands”).
These extensive lands are leased out and are not open to the public.
- All tribal and military lands (“Des_Tp” = "TRIBL" or “Des_Tp” = "MIL"). Generally, these lands are not intended for public recreational use.
- All BOEM marine lease blocks (“Own_Name” = "BOEM"). These Outer Continental Shelf lease blocks do not represent actively protected marine parks, but serve as the “legal definition for BOEM offshore boundary coordinates...for leasing and administrative purposes” (BOEM).
- All lands designated as “proclamation” (“Des_Tp” = "PROC"). These typically represent the approved boundary of public lands, within which land protection is authorized to occur, but not all lands within the proclamation boundary are necessarily currently in a conserved status.
- Retain only selected attribute fields from PAD-US to get rid of irrelevant attributes.
- Merge the filtered PAD-US layer produced above with the OSM beaches and FWS National Realty Tracts to produce a combined protected areas dataset.
- The resulting merged data layer contains overlapping polygons. To remove them, use the Dissolve function.
- Clip the resulting data layer to the inland and nearshore extent.
- Process all multipart polygons (e.g., separate parcels within a National Wildlife Refuge) to single parts (referred to in Arc software as an “explode”).
- Select all polygons that intersect the Census urban extent within 0.5 miles. We chose 0.5 miles to represent a reasonable walking distance based on input and feedback from park access experts. Assuming a moderate-intensity walking pace of 3 miles per hour, as defined by the U.S. Department of Health and Human Services’ physical activity guidelines, the 0.5 mi distance also corresponds to the 10-minute walk threshold used in the equitable access to potential parks indicator.
- Dissolve all the park polygons that were selected in the previous step.
- Process all multipart polygons to single parts (“explode”) again.
- Add a unique ID to the selected parks.
This value will be used in a later step to join the parks to their buffers.
- Create a 0.5 mi (805 m) buffer ring around each park using the multiring plugin in QGIS. Ensure that “dissolve buffers” is disabled so that a single 0.5 mi buffer is created for each park.
- Assess the amount of overlap between the buffered park and the Census urban area using “overlap analysis”. This step is necessary to identify parks that do not intersect the urban area, but which lie within an urban matrix (e.g., Umstead Park in Raleigh, NC and Davidson-Arabia Mountain Nature Preserve in Atlanta, GA). This step creates a table that is joined back to the park polygons using the unique ID.
- Remove parks that had ≤10% overlap with the urban areas when buffered. This excludes mostly non-urban parks that do not meet the intent of this indicator to capture parks that provide nearby access for urban residents. Note: The 10% threshold is a judgment call based on testing which known urban parks and urban National Wildlife Refuges are captured at different overlap cutoffs, and is intended to be as inclusive as possible.
- Calculate the GIS acres of each remaining park unit using the Add Geometry Attributes function.
- Buffer the selected parks by 15 m. Buffering prevents very small and narrow parks from being left out of the indicator when the polygons are converted to raster.
- Reclassify the parks based on their area into the 7 classes seen in the final indicator values below. These thresholds were informed by park classification guidelines from the National Recreation and Park Association, which classify neighborhood parks as 5-10 acres, community parks as 30-50 acres, and large urban parks as optimally 75+ acres (Mertes and Hall 1995).
- Assess the impervious surface composition of each park using the NLCD 2021 impervious layer and the Zonal Statistics “MEAN” function. Retain only the mean percent impervious value for each park.
- Extract only parks with a mean impervious pixel value <80%.
This step excludes parks that do not meet the intent of the indicator to capture opportunities to connect with nature and offer refugia for species (e.g., the Superdome in New Orleans, LA, the Astrodome in Houston, TX, and City Plaza in Raleigh, NC).
- Extract again to the inland and nearshore extent.
- Export the final vector file to a shapefile and import into ArcGIS Pro.
- Convert the resulting polygons to raster using the ArcPy Feature to Raster function and the area class field.
- Assign a value of 0 to all other pixels in the Southeast Blueprint 2024 extent not already identified as an urban park in the mapping steps above. Zero values are intended to help users better understand the extent of this indicator and make it perform better in online tools.
- Use the land and shallow marine layer and the “extract by mask” tool to save the final version of this indicator.
- Add color and legend to the raster attribute table.
- As a final step, clip to the spatial extent of Southeast Blueprint 2024.
Note: For more details on the mapping steps, code used to create this layer is available in the Southeast Blueprint Data Download under > 6_Code.
Final indicator values
Indicator values are assigned as follows:
6 = 75+ acre urban park
5 = 50 to <75 acre urban park
4 = 30 to <50 acre urban park
3 = 10 to <30 acre urban park
2 = 5 to <10 acre urban park
1 = <5 acre urban park
0 = Not identified as an urban park
Known Issues
This indicator does not include park amenities that influence how well the park serves people and should not be the only tool used for parks and recreation planning. Park standards should be determined at a local level to account for various community issues, values, needs, and available resources. This indicator includes some protected areas that are not open to the public and not typically thought of as “parks”, like mitigation lands, private easements, and private golf courses.
While we experimented with excluding them using the public access attribute in PAD-US, due to numerous inaccuracies, this inadvertently removed protected lands that are known to be publicly accessible. As a result, we erred on the side of including the non-publicly accessible lands. The NLCD percent impervious layer contains classification inaccuracies. As a result, this indicator may exclude parks that are mostly natural because they are misclassified as mostly impervious. Conversely, this indicator may include parks that are mostly impervious because they are misclassified as mostly natural.
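The acreage reclassification in the mapping steps maps directly onto the final indicator values. A small sketch of that binning in Python (class 0 is reserved for pixels not identified as an urban park at all, so it is not produced by this function):

```python
def park_size_class(acres: float) -> int:
    """Return the urban park size indicator class for an identified park.

    Thresholds follow the final indicator values: 6 = 75+ acres,
    5 = 50 to <75, 4 = 30 to <50, 3 = 10 to <30, 2 = 5 to <10, 1 = <5.
    """
    if acres >= 75:
        return 6
    if acres >= 50:
        return 5
    if acres >= 30:
        return 4
    if acres >= 10:
        return 3
    if acres >= 5:
        return 2
    return 1
```

Applied to each park's GIS acreage, this yields the class field that is later burned into the raster.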
Reason for Selection
Nearshore waters where mangroves, seagrass, and coral are all present and within close proximity to one another are likely to support higher densities and diversity of fish species (Pittman et al. 2007). The co-occurrence of these three habitats supports healthy coastal ecosystems and connected seascapes (Gillis et al. 2017). While the movement and dispersal patterns for different fish species can vary widely, mangroves, seagrass, and coral provide key ecological services and functions for many species. For example, mangroves and seagrass beds serve as important nursery habitats, especially for fish species that, as adults, also depend on coral reefs (Nagelkerken et al. 2001). Many fish species, like mangrove snapper and yellowtail snapper, move through all three habitat types within their home ranges (Pittman et al. 2007). The 300 m and 600 m distance thresholds used in this indicator draw on personal communication with Dr. Simon Pittman (1-25-2023) and several studies examining seascape structure and the number and diversity of fish species present at different distances from various habitat types. Research in southwest Puerto Rico shows that the positive impact of co-occurring mangrove, seagrass, and coral reef habitat on fish abundance is species-specific and strongest at 100 m but ranges between 50 and 600 m (Pittman et al. 2007). In a decision support framework developed for the U.S. Virgin Islands, “coral reefs were deemed strongly connected where they existed within 300 m of seagrasses, mangroves and other reefs” (Pittman et al. 2018). These findings align with research in Australia and the western Pacific that considered habitats within 250-500 m of one another to be highly connected (Olds et al. 2012; Martin et al. 2015), and a study in the United Arab Emirates that used a 500 m buffer to prioritize relationships between mangrove, seagrass, and reefs (Pittman et al.
2022).
Input Data
- The Nature Conservancy’s (TNC) Caribbean benthic habitat maps; read a press release about the data; read a scientific journal article about the data; request to download the data
- 2012 National Oceanic and Atmospheric Administration (NOAA) Coastal Change Analysis Program (C-CAP) land cover files for the U.S. Virgin Islands (St. Thomas, St. John, and St. Croix are provided as separate rasters), accessed 4-26-2022; learn more about C-CAP high resolution land cover and change products
- 2010 NOAA C-CAP land cover files for Puerto Rico, accessed 4-26-2022; learn more about C-CAP high resolution land cover and change products
- Southeast Blueprint 2023 subregions: Caribbean
- Southeast Blueprint 2023 extent
Mapping Steps
- Mosaic the benthic data for Puerto Rico and the U.S. Virgin Islands.
- Mosaic the C-CAP land cover data for Puerto Rico and the U.S. Virgin Islands.
- Reproject and do a majority resample of the TNC benthic data to 30 m pixels.
- Reproject and do a majority resample of the C-CAP data to 30 m pixels.
- Create a seagrass raster by reclassifying the TNC benthic data so that “dense seagrass” is 1, all other data are 0, and NoData is 1,000. The NoData value helps later in the analysis to remove land and deal with differences in NoData between the C-CAP and TNC benthic data.
- Create a coral raster by reclassifying the TNC benthic data so that “Reef Crest”, “Fore Reef”, “Back Reef”, “Coral/Algae”, and “Spur and Groove Reef” are 1, all other classes are 0, and NoData is 0.
- Create a mangrove raster by reclassifying the C-CAP data so that “Estuarine forested wetland” is 1, all other classes are 0, and NoData is 0.
- Combine the mangrove and seagrass data to make a water mask to remove land (including mangroves) from the final indicator.
- 600 m analysis: For each habitat raster (mangrove, seagrass, and coral), use a 20-cell-radius circle and the maximum statistic in focal statistics to identify areas with at least one pixel of those habitats.
Sum these rasters together and multiply by the presence of mangrove (0/1) to get habitat diversity. This step calculates how many of the distinct habitat types occur within a 600 m radius. It also removes the actual mangrove pixels, as this indicator targets the estuarine and marine habitats near mangroves.
- 300 m analysis: Repeat the same steps from the 600 m analysis, but use a 10-cell radius.
- Combine the 600 m analysis and 300 m analysis to get the final indicator classes. Clip to the Caribbean Blueprint 2023 subregion.
- As a final step, clip to the spatial extent of Southeast Blueprint 2023.
Note: For more details on the mapping steps, code used to create this layer is available in the Southeast Blueprint Data Download under > 6_Code.
Final indicator values
Indicator values are assigned as follows:
4 = Highest predicted fish density/diversity (mangrove, coral, and dense seagrass all present within 300 m)
3 = Very high predicted fish density/diversity (either mangrove and coral, mangrove and dense seagrass, or coral and dense seagrass present within 300 m)
2 = High predicted fish density/diversity (mangrove, coral, and dense seagrass all present within 600 m)
1 = Medium predicted fish density/diversity (either mangrove and coral, mangrove and dense seagrass, or coral and dense seagrass present within 600 m)
0 = Low predicted fish density/diversity (no coral, mangrove, or dense seagrass present within 600 m of one another)
Known Issues
For some pixels at the edge of the Caribbean subregion, less than half of the 30 m pixel is covered by the finer-resolution TNC benthic data. These cells are classified as NoData in the indicator. The distances used in this indicator are primarily based on a study conducted in southwest Puerto Rico (Pittman et al. 2007), a decision support framework developed for the U.S. Virgin Islands, and personal communication with the principal investigator of those projects, Dr. Simon Pittman (1-25-2023).
While other similar studies in Australia, the western Pacific, and the United Arab Emirates support the distance thresholds chosen for this analysis (Olds et al. 2012, Martin et al. 2015, Pittman et al. 2022), different distances may be more appropriate for other parts of the U.S. Caribbean. This indicator may overestimate the fish habitat value of some terrestrial areas that were evaluated by the TNC benthic habitat dataset (e.g., areas near the Limetree Bay refinery in the southern part of St. Croix, USVI).
Disclaimer: Comparing with Older Indicator Versions
There are numerous problems with using Southeast Blueprint indicators for change analysis. Please consult Blueprint staff if you would like to do this (email hilary_morris@fws.gov).
Literature Cited
Gillis LG, Jones CG, Ziegler AD, van der Wal D, Breckwoldt A, Bouma TJ. 2017. Opportunities for protecting and restoring tropical coastal ecosystems by utilizing a physical connectivity approach. Frontiers in Marine Science 4:374. [https://www.frontiersin.org/articles/10.3389/fmars.2017.00374/full].
Martin TSH, Olds AD, Pitt KA, Johnston AB, Butler IR, Maxwell PS, Connolly RM. 2015. Effective protection of fish on inshore coral reefs depends on the scale of mangrove-reef connectivity. Marine Ecology Progress Series 527:157-165. [https://doi.org/10.3354/meps11295].
Nagelkerken I, Kleijnen S, Klop T, Brand RACJ, Morinière EC, van der Velde G. 2001. Dependence of Caribbean reef fishes on mangroves and seagrass beds as nursery habitats: a comparison of fish faunas between bays with and without mangroves/seagrass beds. Marine Ecology Progress Series 214:225-235. [https://www.int-res.com/articles/meps/214/m214p225.pdf].
Olds AD, Connolly RM, Pitt KA, Maxwell PS. 2012. Habitat connectivity improves reserve performance. Conservation Letters 5:56-63. [https://conbio.onlinelibrary.wiley.com/doi/10.1111/j.1755-263X.2011.00204.x].
Pittman SJ, Caldow C, Hile SD, Monaco ME. 2007. Using seascape types to explain the spatial patterns of fish in the mangroves of SW Puerto Rico. Marine Ecology Progress Series 348:273-284. [https://doi.org/10.3354/meps07052].
Pittman SJ, Poti M, Jeffrey CF, Kracker LM, Mabrouk A. 2018. Decision support framework for the prioritization of coral reefs in the US Virgin Islands. Ecological Informatics 47:26-34. [https://www.sciencedirect.com/science/article/abs/pii/S1574954117300614].
Pittman SJ, et al. 2022. Rapid site selection to prioritize coastal seascapes for nature-based solutions with multiple benefits. Frontiers in Marine Science 9:571. [https://www.frontiersin.org/articles/10.3389/fmars.2022.832480/full].
Schill SR, McNulty VP, Pollock FJ, Lüthje F, Li J, Knapp DE, Kington JD, McDonald T, Raber GT, Escovar-Fadul X, Asner GP. 2021. Regional high-resolution benthic habitat data from Planet Dove imagery for conservation decision-making and marine planning. Remote Sensing 13(21):4215. [https://doi.org/10.3390/rs13214215].
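The focal-statistics step in the mapping sequence can be illustrated with a toy example. The sketch below uses a square 3x3 window via SciPy's maximum_filter in place of the 10/20-cell circular neighborhoods, and applies a mask that zeroes out the mangrove pixels themselves, as this write-up describes; it is an interpretation for illustration, not the production code (which is in the Blueprint Data Download under 6_Code):

```python
import numpy as np
from scipy.ndimage import maximum_filter

# Toy 0/1 presence rasters (1 = habitat present in that pixel).
mangrove = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]])
seagrass = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
coral    = np.array([[0, 0, 0], [0, 0, 0], [0, 0, 1]])

def near(presence, size=3):
    # Focal maximum: 1 wherever at least one habitat pixel
    # falls inside the moving window.
    return maximum_filter(presence, size=size)

# Count how many of the three habitats occur near each pixel (0-3)...
diversity = near(mangrove) + near(seagrass) + near(coral)
# ...then zero out the mangrove pixels themselves, since the indicator
# targets the estuarine/marine habitat near mangroves.
diversity = diversity * (1 - mangrove)
```

The center pixel ends up with diversity 3 because all three toy habitats fall within its window, while the mangrove pixel itself is masked to 0.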
The LAS data set was originally classified according to 4 classes (ground, water, bridge overpass, and noise), with the rest of the data being unclassified. That left some classes to be derived and classified, of which one—the building/structure class—was considered necessary for this project. In theory, deriving a building/structure layer is relatively straightforward: buildings should appear as unclassified, single-return points, whereas vegetation, also unclassified, should yield multiple returns as the beam bounces back through the canopy. Following this idea, we created a Digital Surface Model (DSM) from the single-return, unclassified LAS point cloud. We then subtracted the Bare Earth DEMs from these DSMs to create a difference image, which ideally should represent only buildings. Unfortunately, many trees were included in this “buildings” layer, possibly due to the sparse canopy characteristic of trees found in southwestern forests, and possibly due to the presence of fairly recent burn scars that include a number of standing dead trees and snags. In an attempt to remove the clutter of false positives due to trees, we developed a Normalized Difference Vegetation Index (NDVI) from the NAIP imagery acquired over the area in the same year. The NDVI is an image-processing technique that uses the reflective information found in the red (Red) and near-infrared (NIR) wavelengths to enhance the “green” vegetative response over other, non-vegetated surface features (Eq. 1). NDVI = (NIR − Red)/(NIR + Red) [Eq. 1]. This produces a floating-point image of values from -1 to 1, with numbers above 0 representing increasing vegetative cover. We further modified the NDVI equation to create an 8-bit image (Eq. 2). NDVImod = (NDVI + 1)*100 [Eq. 2]. This 8-bit image had all positive integer values, where values above 100 indicated increasing vegetative cover.
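The two equations can be sketched in a few lines. This is a minimal illustration, assuming the Red and NIR bands are available as numpy arrays (the pixel values below are hypothetical, not taken from the NAIP imagery):

```python
import numpy as np

def ndvi_mod(nir, red):
    """Apply Eq. 1 and Eq. 2: NDVI in [-1, 1], rescaled to an 8-bit image
    in which values above 100 indicate increasing vegetative cover."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    denom[denom == 0] = 1.0          # avoid division by zero on empty pixels
    ndvi = (nir - red) / denom       # Eq. 1: floating point, -1 to 1
    return ((ndvi + 1) * 100).astype(np.uint8)  # Eq. 2: 8-bit rescale

# Hypothetical pixels: one vegetated (NIR >> Red) and one bare (NIR = Red)
nir = np.array([[300.0, 50.0]])
red = np.array([[100.0, 50.0]])
print(ndvi_mod(nir, red))  # [[150 100]] — only the first pixel exceeds 100
```

With this rescaling, the masking threshold of 109 used below corresponds to an NDVI of 0.09.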
We used the generated NDVI image, in particular values above 109, to mask out many of the false anomalies. In addition, all heights less than 6 feet were masked out, as this was considered a minimum height for most buildings. We added 1 to values in the resulting image so that all values, even the zeroes, would be counted. Values were then clumped to produce an image of individually coded raster polygons. We eliminated all clusters smaller than 32 square meters (345 square feet) from the clumped image, ran a 3x3 majority filter to remove relict edges, and ran a 3x3 morphological close filter to remove holes in the raster polygons. We completed the raster processing in ERDAS IMAGINE and then converted the data set to a polygon layer in ESRI ArcGIS, as is and without using the ‘simplify polygon’ option. This was cleaned up further using the simplify buildings module with a minimum spacing of 2 meters. Once this was completed, the polygon layer was edited by heads-up digitizing at a 1:3,000 scale (the approximate base resolution of the LiDAR data), using the NAIP imagery and DSM shaded relief imagery as a background. The building/structure layer contained more than 44,612 identified structures.
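The clump-and-eliminate step can be illustrated with a small pure-Python sketch (a hypothetical binary mask with 1 m cells; the production workflow used ERDAS IMAGINE's clump and eliminate tools): connected components of the candidate-building mask are labeled, and components below the minimum cell count are removed.

```python
from collections import deque

def filter_clumps(mask, min_cells):
    """Label 4-connected clumps in a binary grid and zero out those smaller
    than min_cells (e.g. 32 cells = 32 square meters at 1 m resolution)."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                next_label += 1
                clump = [(r, c)]           # cells in the current clump
                labels[r][c] = next_label
                queue = deque(clump)
                while queue:               # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            clump.append((ny, nx))
                            queue.append((ny, nx))
                if len(clump) < min_cells:  # eliminate small clusters
                    for y, x in clump:
                        mask[y][x] = 0
    return mask

grid = [[1, 1, 0, 1],
        [1, 1, 0, 0],
        [0, 0, 0, 1]]
print(filter_clumps(grid, 3))  # lone pixels removed, 2x2 block kept
```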
Processed results from surface grain size analysis of the sediment grab samples recovered as part of the Long Island Sound mapping project Phase II. Sediment grab samples were collected in the summers of 2017 and 2018 using a modified van Veen grab sampler. A sub-sample of the top two centimeters was taken and stored in a jar. Dried sub-samples were analyzed for grain size. First, the samples were treated with hydrogen peroxide to remove organic components. Then each sample was passed through a series of standard sieves representing phi sizes, the smallest being 64 µm. The content of each sieve was dried and weighed. If there was sufficient fine material (< 64 µm), this fine fraction was further analyzed using a Sedigraph system. The results of the sieve and Sedigraph analyses were combined, and the percentages of gravel, sand, silt, and clay were determined following the Wentworth scale. In addition, other statistics, including mean, median, skewness, and standard deviation, were calculated using the USGS GSSTAT program. The results of the LDEO/Queens College grain size analysis were combined with data collected by the LISMARC group and analyzed by USGS. ArcGIS Pro empirical kriging was used to interpolate values for gravel, sand, silt, clay, and mud percentages, as well as for mean grain size, onto a 50 m raster. The interpolated raster was clipped to fit the extent of the Phase 2 survey area. The final raster data are in GeoTiff format with UTM 18 N projection.
Time period of content: 2017-08-01 to 2022-11-16
Attribute accuracy: The attribute accuracy has not been determined. This raster dataset shows mainly the major trends and patterns of the value distribution in the Phase 2 study area.
Completeness: The dataset is complete.
Positional accuracy: The raster resolution is 50 m.
Attributes:
gravel pct raster: Interpolated gravel percent of the sample mass
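The Wentworth binning described above can be sketched as follows. This is an illustrative simplification, assuming sieve fractions keyed by phi size class, with the standard Wentworth boundaries (gravel coarser than -1 phi, sand from -1 to 4 phi, silt from 4 to 8 phi, clay finer than 8 phi); the actual analysis combined sieve and Sedigraph results and used the USGS GSSTAT program for the statistics.

```python
def wentworth_percentages(fractions):
    """fractions: dict mapping a phi size class to its dry mass (g).
    Returns mass percentages of gravel, sand, silt, and clay following
    the Wentworth scale (boundaries at -1, 4, and 8 phi)."""
    total = sum(fractions.values())
    pct = {"gravel": 0.0, "sand": 0.0, "silt": 0.0, "clay": 0.0}
    for phi, mass in fractions.items():
        if phi < -1:
            key = "gravel"
        elif phi < 4:
            key = "sand"
        elif phi < 8:
            key = "silt"
        else:
            key = "clay"
        pct[key] += 100.0 * mass / total
    return pct

# Hypothetical sample: masses by phi class
sample = {-2: 5.0, 0: 20.0, 2: 50.0, 5: 20.0, 9: 5.0}
print(wentworth_percentages(sample))
# {'gravel': 5.0, 'sand': 70.0, 'silt': 20.0, 'clay': 5.0}
```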
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Dataset description
This dataset is a recalculation of the Copernicus 2015 high resolution layer (HRL) of imperviousness density data (IMD) at different spatial/territorial scales for the case studies of Barcelona and Milan. The selected spatial/territorial scales are the following:
* a) Barcelona city boundaries
* b) Barcelona metropolitan area, Àrea Metropolitana de Barcelona (AMB)
* c) Barcelona greater city (Urban Atlas)
* d) Barcelona functional urban area (Urban Atlas)
* e) Milan city boundaries
* f) Milan metropolitan area, Piano Intercomunale Milanese (PIM)
* g) Milan greater city (Urban Atlas)
* h) Milan functional urban area (Urban Atlas)
For each of the spatial/territorial scales listed above, the number of 20x20 m cells corresponding to each of the 101 values of imperviousness (0-100% soil sealing: 0% means fully non-sealed area; 100% means fully sealed area) is provided, as well as the converted measure in square kilometres (km2).

Dataset composition
The dataset is provided in .csv format and is composed of:
_IMD15_BCN_MI_Sources.csv_: Information on data sources.
_IMD15_BCN.csv_: This file refers to the 2015 high resolution layer of imperviousness density (IMD) for the selected territorial/spatial scales in Barcelona: a) Barcelona city boundaries (label: bcn_city); b) Barcelona metropolitan area, Àrea Metropolitana de Barcelona (AMB) (label: bcn_amb); c) Barcelona greater city (Urban Atlas) (label: bcn_grc); d) Barcelona functional urban area (Urban Atlas) (label: bcn_fua).
_IMD15_MI.csv_: This file refers to the 2015 high resolution layer of imperviousness density (IMD) for the selected territorial/spatial scales in Milan: e) Milan city boundaries (label: mi_city); f) Milan metropolitan area, Piano Intercomunale Milanese (PIM) (label: mi_pim); g) Milan greater city (Urban Atlas) (label: mi_grc); h) Milan functional urban area (Urban Atlas) (label: mi_fua).
_IMD15_BCN_MI.mpk_: The shareable project in Esri ArcGIS format, including the HRL IMD data in raster format for each of the territorial boundaries specified in letters a)-h).
Regarding the territorial scale in letter f), the list of municipalities included in the Milan metropolitan area in 2016 was provided in 2016 by a person working at the PIM.
In IMD15_BCN.csv and IMD15_MI.csv, the following columns are included:
* Level: the territorial level as defined above (a-d for Barcelona, e-h for Milan);
* Value: the 101 values of imperviousness density expressed as a percentage of soil sealing (0-100%: 0% means fully non-sealed area; 100% means fully sealed area);
* Count: the number of 20x20 m cells corresponding to a certain percentage of soil sealing or imperviousness;
* Km2: the conversion of the 20x20 m cells into square kilometres (km2) to facilitate the use of the dataset.

Further information on the dataset
This dataset is the result of a combination of different databases of different types, downloaded from different sources. Below, I describe the main steps in data management that resulted in the production of the dataset in an Esri ArcGIS (ArcMap, version 10.7) project.
1. The high resolution layer (HRL) of imperviousness density data (IMD) for 2015 was downloaded from the official Copernicus website. At the time the dataset was produced (April/May 2021), the 2018 version of the IMD HRL database had not yet been validated, so the 2015 version was chosen instead. This dataset is of raster type.
2. For both Barcelona and Milan, shapefiles of their administrative boundaries were downloaded from official sources, i.e. ISTAT (the Italian National Statistical Institute) and the ICGC (the Catalan Institute for Cartography and Geology). These files were reprojected to match the IMD HRL projection, i.e. ETRS 1989 LAEA.
3. Urban Atlas (UA) boundaries for the Greater Cities (GRC) and Functional Urban Areas (FUA) of Barcelona and Milan were checked and reconstructed in Esri ArcGIS from the administrative boundary files by using a Eurostat correspondence table. This is because, at the time of the dataset creation (April/May 2021), the 2018 Urban Atlas shapefiles for these two cities were not fully updated or validated on the Copernicus Urban Atlas website. Therefore, I had to re-create the GRC and FUA boundaries by using the Eurostat correspondence table as an alternative (but still official) data source. The use of the Eurostat correspondence table with the codes and names of municipalities was also useful to detect discrepancies, basically stemming from changes in municipality names and codes, which created inconsistent spatial features. When detected, these discrepancies were checked with the ISTAT and ICGC offices in charge of producing Urban Atlas data before the final GRC and FUA boundaries were defined.
Steps 2) and 3) were the most time-consuming, because they required other tools to be used in Esri ArcGIS, like spatial joins and geoprocessing tools for shapefiles (in particular, dissolve and area re-calculation in editing sessions), for each of the spatial/territorial scales indicated in letters a)-h).
Once the databases for both Barcelona and Milan as described in points 2) and 3) were ready (uploaded in Esri ArcGIS, reprojected, and checked for correctness), they were ‘crossed’ (i.e. clipped) with the IMD HRL described in point 1), and a specific raster for each territorial level was calculated. The procedure in Esri ArcGIS was the following:
* Clipping: ArcToolbox - Data Management Tools - Raster - Raster Processing - Clip. The ‘input’ file is the HRL IMD raster file described in point 1) and the ‘output’ file is each of the spatial/territorial files. The option "Use Input Features for Clipping Geometry (optional)" was selected for each clip.
* Delete and create raster attribute table: once the clipping was done, the raster attribute table was recalculated, first through ArcToolbox - Data Management Tools - Raster - Raster Properties - Delete Raster Attribute Table, and then through ArcToolbox - Data Management Tools - Raster - Raster Properties - Build Raster Attribute Table; the "overwrite" option was selected.
Other tools used for the raster files in Esri ArcGIS were the Spatial Analyst tools (in particular, Zonal - Zonal Statistics). As an additional check, the colour scheme of each newly created raster for each of the spatial/territorial scales in letters a)-h) was changed to check the consistency of its overlay with the original HRL IMD file. However, a perfect match between the shapefiles in letters a)-h) and the raster files could not be achieved, since the raster files are composed of 20x20 m cells.
The newly created attribute tables of each of the raster files were exported and saved as .txt files. These .txt files were then copied into the Excel file corresponding to the final published dataset.
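The Count-to-Km2 conversion in the dataset columns is a fixed scaling: each 20x20 m cell covers 400 m², i.e. 0.0004 km². A minimal sketch with illustrative values:

```python
CELL_AREA_M2 = 20 * 20  # one 20x20 m cell covers 400 square meters

def counts_to_km2(rows):
    """rows: list of (value, count) pairs, one per imperviousness
    percentage (0-100). Returns (value, count, km2) triples, converting
    the cell count to square kilometres."""
    return [(value, count, count * CELL_AREA_M2 / 1_000_000)
            for value, count in rows]

# Illustrative counts for two imperviousness values
print(counts_to_km2([(0, 250000), (100, 5000)]))
# [(0, 250000, 100.0), (100, 5000, 2.0)]
```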
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
San Joaquin Valley Subsidence Analysis README.
Written: Joel Dudas, 3/12/2017. Amended: Ben Brezing, 4/2/2019. DWR’s Division of Engineering Geodetic Branch received a request in 1/2017 from Jeanine Jones to produce a graphic of historic subsidence in the entirety of the San Joaquin Valley. The task was assigned to the Mapping & Photogrammetry Office and the Geospatial Data Support Section to complete by early February. After reviewing the alternatives, the decision was made to produce contours from the oldest available set of quad maps for which there was reasonable certainty about quality and datum, and to compare that to the most current Valley-wide DEM. For the first requirement, research indicated that the 1950’s vintage quad maps for the Valley were the best alternative. Prior quad map editions are uneven in quality and vintage, and the actual control used for the contour lines was extremely suspect. The 1950’s quads, by contrast, were produced primarily on the basis of 1948-1949 aerial photography, along with control corresponding to that period, and referenced to the National Geodetic Vertical Datum of 1929. For the current set, the most recent Valley-wide dataset that was freely available, in the public domain, and of reasonable accuracy was the 2005 NextMap SAR acquisition (referenced to NAVD88). The primary bulk of the work focused on digitizing the 1950’s contours. First, all of the necessary quads were downloaded from the online USGS quad source https://ngmdb.usgs.gov/maps/Topoview/viewer/#4/41.13/-107.51. Then the entire staff of the Mapping & Photogrammetry Lab (including both the Mapping Office and GDDS staff) proceeded to digitize the contours. Given the short turnaround time constraint and limited budget, certain shortcuts occurred in contour development. While efforts were made to digitize accurately, speed really was important. Contours were primarily focused only on agricultural and other lowland areas, and so highlands were by and large skipped. 
The tight details of contours along rivers, levees, and hillsides were skipped and/or simplified. In some cases, only major contours were digitized. The mapping on the source quads itself varied: in a few cases, only spot elevations on benchmarks were available on the quads. The contour interval sometimes varied, even within a quad sheet. In addition, because 8 different people were creating the contours, variability exists in the style and attention to detail. It should be understood that, given the purpose of the project (to display regional subsidence patterns), the literal and precise development of the historic contour sets leaves some things to be desired. These caveats aside, the linework is reasonably accurate for what it is (particularly given that the contours of that era were themselves mapped at an unknown and varying actual quality). The digitizers tagged the linework with manually entered Z values corresponding to the mapped elevation contours. Joel Dudas then did what could be called a “rough” QA/QC of the contours. The individual lines were stitched together into a single contour set and exported to an elevation raster (using TopoToRaster in ArcGIS 10.4). Gross blunders in Z values were corrected. Gaps in the coverage were filled. The elevation grid was then adjusted to NAVD88 using a single adjustment for the entire coverage area (2.5’, which is a close average of values in this region). The NextMap data was extracted for the area and converted into feet. The two raster sets were fixed to the same origin point. The subsidence grid was then created by subtracting the old contour-derived grid from the NextMAP DEM. The subsidence grid that includes all of the values has the suffix “ALL”.
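The core of the comparison is simple raster arithmetic. A numpy sketch with illustrative values (the actual work was done in ArcGIS 10.4; the 2.5-foot figure is the single regional NGVD29-to-NAVD88 adjustment described above):

```python
import numpy as np

NGVD29_TO_NAVD88_FT = 2.5  # single regional datum adjustment used here

def subsidence_grid(old_ngvd29_ft, new_navd88_ft):
    """Adjust the 1950s contour-derived grid to NAVD88, then subtract it
    from the NextMap DEM; negative values indicate elevation loss."""
    old_navd88 = old_ngvd29_ft + NGVD29_TO_NAVD88_FT
    return new_navd88_ft - old_navd88

old = np.array([[100.0, 120.0]])  # 1950s grid, NGVD29 feet
new = np.array([[92.5, 122.5]])   # NextMap DEM, NAVD88 feet
print(subsidence_grid(old, new))  # 10 ft of subsidence at the first cell
```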
Then, to improve the display fidelity, some of the extreme values (above +5’ and below -20’*) were filtered out of the dataset, and the subsidence grid was regenerated for these areas and suffixed with “cut.” The purpose of this cut was to exclude some of the riverine and hilly areas that produced more extreme values and other artifacts purely due to the analysis approach (i.e., not actual elevation change).
* Some of the areas with more than 20 feet of subsidence were omitted from this clipping, because they were in heavily subsided areas and may represent “real subsidence.”
The resulting subsidence product should be interpreted in light of the above. Some of the collar of the San Joaquin Valley shows large changes, but that is simply due to the analysis method. Also, individual grid cells may or may not be comparing the same real features. Errors are baked into both comparison datasets. However, it is important to note that the large areas of subsidence in the primary agricultural area agree fairly well with a cruder USGS subsidence map of the Valley based on extensometer data. We have confidence that the big-picture story these results show is largely correct, and that the magnitudes of subsidence are somewhat reasonable. The contour set can serve as the baseline to support future comparisons using more recent or future data as it becomes available. It should be noted there are two key versions of the data. The “Final Deliverables” from 2/2017 were delivered to support the initial Public Affairs press release. Subsequent improvements were made in coverage and blunder correction as time permitted (it should be noted this occurred in the midst of the Oroville Dam emergency) to produce the final version as of 3/12/2017. Further improvements in overall quality and filtering could occur in the future if time and needs demand it.
Update (4/3/2019, Ben Brezing): The raster was further smoothed to remove artifacts that result from comparing the high-resolution NextMAP DEM to the lower-resolution DEM derived from the 1950’s quad map contours. The smoothing was accomplished by removing raster cells with values more than 0.5 feet different from adjacent cells (25 meter cell size), as well as the adjacent cells themselves. The resulting raster was then resampled to a 100 meter cell size using the cubic resampling technique and converted to a point feature class. The point feature class was then interpolated to a raster with 250 meter cell size using the IDW technique, with a fixed search radius of 1250 meters and power = 2. The resulting raster was clipped to a smaller extent to remove noisier areas around the edges of the Central Valley while retaining coverage for the main area of interest.
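The final interpolation step (IDW with a fixed search radius of 1250 m and power = 2) can be sketched as follows. This is an illustrative re-implementation of the technique with made-up sample points, not the ArcGIS tool itself:

```python
import numpy as np

def idw(points, values, xi, yi, radius=1250.0, power=2.0):
    """Inverse-distance-weighted estimate at (xi, yi) from scattered
    sample points, using only points within a fixed search radius
    (mirroring the ArcGIS settings described above)."""
    pts = np.asarray(points, dtype=float)
    vals = np.asarray(values, dtype=float)
    d = np.hypot(pts[:, 0] - xi, pts[:, 1] - yi)
    inside = d <= radius
    if not inside.any():
        return np.nan                  # no neighbors within the radius
    d, vals = d[inside], vals[inside]
    if (d == 0).any():
        return float(vals[d == 0][0])  # exact hit on a sample point
    w = 1.0 / d ** power               # weights fall off with distance^2
    return float(np.sum(w * vals) / np.sum(w))

pts = [(0, 0), (100, 0), (5000, 0)]    # third point lies outside the radius
print(idw(pts, [10.0, 20.0, 30.0], 50.0, 0.0))  # ~15.0, mean of the two in-radius samples
```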
Shrublands have seen large changes over time due to factors such as fire and drought. As the climate continues to change, vegetation monitoring at the county scale is essential to identify large-scale changes and to develop sampling designs for field-based vegetation studies. This dataset contains two raster files that each depict the height of vegetation. The first layer is restricted to actively growing vegetation and the second to dormant/dead vegetation. Both layers cover open space areas in San Diego County, California. Height calculations were derived from Lidar data collected in 2014 and 2015 for the western two-thirds of San Diego County. Lidar point clouds were pre-classified into ground and non-ground points. Rasters for the Digital Elevation Model (DEM) and Digital Surface Model (DSM) were calculated in ArcGIS, using ground-classified points and last returns for the natural surface (DEM) and non-ground first returns for the surface model (DSM). The spatial resolution for both layers is 1 meter and aligns with 2014 National Agriculture Imagery Program (NAIP) imagery. Object height was calculated by subtracting the DEM from the DSM in meters. To remove structures and other non-natural objects from the imagery, the layers were clipped to open space areas using the National Land Cover Database, building footprints, roads, and railways. This ensures that objects above the natural surface are vegetation, even when Normalized Difference Vegetation Index (NDVI) values are very low. NDVI measures the amount of photosynthetically active vegetation in the raster cell. Healthy vegetation reflects high levels of near-infrared and low levels of red electromagnetic radiation. NDVI ranges from -1 to 1, with low values indicating little or no healthy vegetation and higher values indicating the presence of healthy vegetation.
The NDVI was calculated from the 2014 NAIP imagery and a cutoff of 0.1 was used to separate photosynthetically active vegetation from non-vegetated or dormant/dead vegetation areas. The imagery was collected during 2014, an exceptional drought year. It is not possible to separate extremely water-stressed plants from truly dead plants using only NDVI. The natural surface was verified using established National Geodetic Survey (NGS) benchmarks and exceeded 98 percent accuracy. Vegetation structure was validated using visual assessments of high-resolution aerial imagery to verify the vegetation form and greenness. Vegetation form and health (NDVI) had an accuracy of 82 percent.
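The two published layers reduce to a height difference plus a split on the NDVI 0.1 cutoff. A numpy sketch with illustrative arrays (the actual layers were produced in ArcGIS from the Lidar-derived DSM/DEM and NAIP-derived NDVI):

```python
import numpy as np

NDVI_CUTOFF = 0.1  # separates actively growing from dormant/dead vegetation

def vegetation_height_layers(dsm, dem, ndvi):
    """Return (active, dormant) vegetation-height rasters in meters.
    Cells failing each mask are set to NaN (NoData)."""
    height = dsm - dem  # object height above the natural surface
    active = np.where(ndvi >= NDVI_CUTOFF, height, np.nan)
    dormant = np.where(ndvi < NDVI_CUTOFF, height, np.nan)
    return active, dormant

# Hypothetical 1x2 rasters: a green 2 m shrub and a dormant 3.5 m snag
dsm = np.array([[12.0, 13.5]])
dem = np.array([[10.0, 10.0]])
ndvi = np.array([[0.4, 0.05]])
active, dormant = vegetation_height_layers(dsm, dem, ndvi)
print(active, dormant)  # 2 m in the active layer; 3.5 m in the dormant layer
```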