The Woodflow Centroids Boundaries feature layer contains centroids and boundaries for North American states and provinces (USA, Canada, and Mexico). It also includes two international sites (east and west) for distribution outside of North America. The dataset was created to support the generation of flow lines (inflow and outflow) for timber production movement for the FIA BIGMAP Wood Flow Visualization web application.

About FIA's BIGMAP

The USDA Forest Service’s Forest Inventory and Analysis (FIA) program is the authoritative source of information about the conditions of the Agency’s forested lands. Within the FIA program, a new secure, cloud-based, and flexible computing environment has been created, named the Big Data Mapping & Analytics Platform (BIGMAP). BIGMAP is designed to store, process, analyze, and deliver Forest Service content. It does so in ways that streamline our internal workflows and make it easy to share authoritative, map-based content through web technologies. BIGMAP leverages commercial off-the-shelf solutions, reducing development and maintenance costs over the longer term. This focus capitalizes upon Agency investments in FIA and other data. The resulting, authoritative map content will populate the Agency’s WebGIS library for use by Agency managers, decision-makers, and other interested parties.
The DVRPC Freight Rail Line data layer was developed to identify key ownership and operation details for freight rail lines in the Delaware Valley. This data set was developed internally using various rail mapping and track chart sources. The polyline network represents rights-of-way rather than track and is segmented based on subdivision or track name as well as key profile details. The line set has been developed with input from various Class I and short-line railroads in the region.
General information about NOAA-AVHRR can be queried by interested users under the categories 'Sensor' and 'Source'. Some basic information is given below.
The Advanced Very High Resolution Radiometer (AVHRR) onboard NOAA 6 and TIROS-N measured in four spectral bands, while the instruments on NOAA 7, 9 and 11 measure in five bands. The primary objective of the AVHRR instrument is to provide cloud top and sea surface temperatures through passively measured visible, near infra-red and infra-red spectral radiation bands. Nevertheless these data are widely used for terrestrial applications, such as land cover mapping and vegetation monitoring.
The available data set provides a comprehensive time series of Sea Surface Temperature (SST) and different cloud parameters for the ocean surrounding the African continent derived from daytime NOAA GAC data. The total number of satellite passes is approximately 12000. The time period covered by the data set is from August 1981 to December 1992, with the intention to extend it to the present (1995) as further data are purchased. Geographical coverage is from 45 S to 45 N and from 55 E to 30 W.
Initially, emphasis has been put on SST for studying coastal upwelling processes in the Northwest African and Benguela upwelling systems, in continuation of the SAI/JRC ongoing coastal upwelling research activity (Nykjaer and Van Camp, 1994). In parallel, different studies are carried out to establish algorithms for cloud optical properties and to validate them. The elaboration of this data set is carried out within the framework of the Cloud and Ocean Remote Sensing around Africa (CORSA) project.
The five channels of the AVHRR are calibrated to 'top of atmosphere' reflectances for the channels 1 and 2 and brightness temperatures for channels 3, 4 and 5 following the recommendations of Kidwell (1991). Clouds over the ocean are identified using a modified Saunders and Kriebel (1988) approach. For cloud free areas SST is calculated using a classical 'split-window' algorithm from Castagne et al. (1986).
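A split-window algorithm combines the channel 4 and 5 brightness temperatures, using their difference to correct for atmospheric water vapour absorption. A minimal sketch; the coefficients below are illustrative placeholders, not the Castagne et al. (1986) values:

```python
def split_window_sst(t4, t5, a=1.0, b=2.7, c=-1.5):
    """Generic split-window SST estimate in kelvin.

    t4, t5: brightness temperatures from AVHRR channels 4 and 5 (K).
    a, b, c: regression coefficients -- placeholder values only;
    operational coefficients are derived from match-ups with
    in-situ measurements.
    """
    # The T4 - T5 difference grows with atmospheric water vapour,
    # so it is used to correct the channel-4 temperature.
    return a * t4 + b * (t4 - t5) + c

sst = split_window_sst(290.0, 289.2)
```

The same function applies element-wise to whole images if the inputs are arrays.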
After SST calculation, the individual images are resampled into weekly and monthly composites maintaining the original 4 km resolution. Validation of the SST fields is done by comparing to lower resolution data sets such as the Comprehensive Ocean-Atmosphere Data Set (COADS), National Meteorological Center SST Data (IGOSS) and the Global Ocean Surface Temperature Atlas (GOSTA).
To accommodate the need to reprocess the data as algorithms evolve, all raw data have been written to an on-line optical file server system. During processing, intermediate products are not stored; only the weekly and monthly composites are retained. This reduces the disk storage requirements for the user without sacrificing computational speed. Weekly and monthly composites of SST for one month (ca. 100 satellite passes) are generated in approximately 5 hours on a SUN Sparc 10 workstation.
The weekly and monthly composites are stored on the on-line optical server and made available through collaborative agreements with the Joint Research Centre.
Example SST time series can be found on the CEO World Wide Web home page: "http://www.ceo.org/".
This dataset corresponds to land area polygons of the Australian coastline and surrounding islands. It was generated from 10 m Sentinel 2 imagery from 2022-2024 using the Normalized Difference Water Index (NDWI) to distinguish land from water. It was estimated from composite imagery composed of images where the tide was above mean sea level. The coastline approximately corresponds to the mean high water level.
This dataset was created as part of the NESP MaC 3.17 northern Australian Reef mapping project. It was developed to allow the inshore edge of digitised fringing reef features to be neatly clipped to the land areas without requiring manual digitisation of the neighbouring coastline. This required a coastline polygon with an edge positional error of below 50 m so as to not distort the shape of small fringing reefs.
We found that existing coastline datasets such as the Geodata Coast 100K 2004 and the Australian Hydrographic Office (AHO) Australian land and coastline dataset did not meet our needs. The scale of the Geodata Coast 100K 2004 was too coarse to represent small islands, and the positional error of the AHO dataset was too high (typically 80 m) for our application, as it would have introduced significant errors in the shape of small fringing reefs. The Digital Earth Australia (DEA) Coastline dataset was sufficiently accurate and detailed; however, its format was unsuitable for our application, as the coast was expressed as disconnected line features between rivers rather than a closed polygon of the land areas.
We did, however, base our approach on the process developed for the DEA coastline described in Bishop-Taylor et al., 2021 (https://doi.org/10.1016/j.rse.2021.112734), adapting it to our existing Sentinel 2 Google Earth Engine processing pipeline. The difference between the approach used for the DEA coastline and this dataset is that the DEA coastline performed the tidal calculations and filtering at the pixel level, whereas in this dataset we estimated only a single tidal level for each whole Sentinel image scene. This was done for computational simplicity and to align with our existing Google Earth Engine image processing code. The images in the stack were sorted by this tidal estimate, and those with a tidal height greater than the mean sea level were combined into the composite.
The Sentinel 2 satellite follows a sun synchronous orbit and so does not observe the full range of tidal levels. The observed tidal range varies spatially due to the relative timing of peak tides with satellite image timing. We made no accommodation for variation in the tidal levels of the images used to calculate the coastline, other than selecting images that were above the mean tide level. This means the tidal height that the dataset coastline corresponds to will vary spatially. While this approach is less precise than that used in the DEA Coastline, the resulting errors were sufficiently low to meet the project goals.
This simplified approach was chosen because it integrated well with our existing Sentinel 2 processing pipeline for generating composite imagery.
To verify the accuracy of this dataset we manually checked the generated coastline with high resolution imagery (ArcGIS World Imagery). We found that 90% of the coastline polygons in this dataset have a horizontal position error of less than 20 m when compared to high-resolution imagery, except for isolated failure cases.
During our manual checks we identified specific scenarios, or 'failure modes', where our algorithm falsely identifies land or fails to identify land. These are shown in the image "Potential failure modes":
a) The coastline is pushed out due to breaking waves (example: western coast, S2 tile ID 49KPG).
b) False land polygons are created because of very turbid water due to suspended sediment. In clear water areas the near infrared channel is almost black, starkly different to the bright land areas. In very highly turbid waters the suspended sediment appears in the near infrared channel, raising its brightness to a level where it starts to overlap with the brightness of the dimmest land features. (example: Joseph Bonaparte Gulf, S2 tile ID 52LEJ). This results in turbid rivers not being correctly mapped. In version 1-1 of the dataset the rivers across northern Australia were manually corrected for these failures.
c) Very shallow, gently sloping areas are not recognised as water and the coastline is pushed out (example: Mornington Island, S2 tile ID 54KUG). Update: A second review of this area indicated that the mapped coastline is likely to be very close to the true coastline.
d) The coastline is lower than the mean high water level (example: Great Keppel (Wop-pa) Island, S2 tile ID 55KHQ).
Some of these potential failure modes could probably be addressed in the future by using a higher resolution tide calculation and adjusting NDWI thresholds per region to accommodate regional differences. Some of these failure modes are likely due to the near infrared channel (B8) penetrating the water approximately 0.5 m, leading to errors in very shallow areas.
Some additional failures include:
- Interpreting jetties as land
- Interpreting oil rigs as land
- Bridges being interpreted as land, cutting off rivers
Methods:
The coastline polygons were created in four separate steps:
1. Create above mean sea level (AMSL) composite images.
2. Calculate the Normalized Difference Water Index (NDWI) and visualise as a grey scale image.
3. Generate vector polygons from the grey scale image using a NDWI threshold.
4. Clean up and merge polygons.
To create the AMSL composite images, multiple Sentinel 2 images were combined using the Google Earth Engine. The core algorithm was:
1. For each Sentinel 2 tile filter the "COPERNICUS/S2_HARMONIZED" image collection by
- tile ID
- maximum cloud cover 20%
- date between '2022-01-01' and '2024-06-30'
- asset_size > 100000000 (remove small fragments of tiles)
2. Remove high sun-glint images (see "High sun-glint image detection" for more information).
3. Split images by "SENSING_ORBIT_NUMBER" (see "Using SENSING_ORBIT_NUMBER for a more balanced composite" for more information).
4. Iterate over all images in the split collections to predict the tide elevation for each image from the image timestamp (see "Tide prediction" for more information).
5. Remove images where tide elevation is below mean sea level.
6. Select maximum of 200 images with AMSL tide elevation.
7. Combine SENSING_ORBIT_NUMBER collections into one image collection.
8. Remove sun-glint and apply atmospheric correction on each image (see "Sun-glint removal and atmospheric correction" for more information).
9. Duplicate image collection to first create a composite image without cloud masking and using the 15th percentile of the images in the collection (i.e. for each pixel the 15th percentile value of all images is used).
10. Apply cloud masking to all images in the original image collection (see "Cloud Masking" for more information) and create a composite by using the 15th percentile of the images in the collection (i.e. for each pixel the 15th percentile value of all images is used).
11. Combine the two composite images (no cloud mask composite and cloud mask composite). This solves the problem of some coral cays and islands being misinterpreted as clouds and therefore creating holes in the composite image. These holes are "plugged" with the underlying composite without cloud masking. (Lawrey et al. 2022)
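The selection logic of steps 1-7 can be sketched in plain Python, with hypothetical image records standing in for Sentinel 2 scene properties (the field names and values below are invented for illustration; the real pipeline runs in Google Earth Engine):

```python
from datetime import date
from itertools import groupby

# Hypothetical stand-ins for Sentinel 2 scene properties.
images = [
    {"id": "A", "date": date(2022, 3, 1), "cloud": 5,  "asset_size": 2e8, "orbit": 103, "tide_m": 0.4},
    {"id": "B", "date": date(2022, 4, 2), "cloud": 35, "asset_size": 2e8, "orbit": 103, "tide_m": 0.6},
    {"id": "C", "date": date(2023, 1, 9), "cloud": 10, "asset_size": 5e7, "orbit": 60,  "tide_m": 0.8},
    {"id": "D", "date": date(2023, 6, 5), "cloud": 2,  "asset_size": 3e8, "orbit": 60,  "tide_m": -0.2},
    {"id": "E", "date": date(2024, 2, 1), "cloud": 1,  "asset_size": 3e8, "orbit": 60,  "tide_m": 1.1},
]

MEAN_SEA_LEVEL = 0.0

def select_amsl_images(images, max_images=200):
    # Step 1: filter by cloud cover, date window and asset size.
    kept = [im for im in images
            if im["cloud"] <= 20
            and date(2022, 1, 1) <= im["date"] <= date(2024, 6, 30)
            and im["asset_size"] > 100000000]
    # Steps 3-6: split by orbit, keep above-MSL tides, cap the count.
    kept.sort(key=lambda im: im["orbit"])
    selected = []
    for _, group in groupby(kept, key=lambda im: im["orbit"]):
        amsl = [im for im in group if im["tide_m"] > MEAN_SEA_LEVEL]
        selected.extend(amsl[:max_images])
    # Step 7: the per-orbit selections are recombined into one collection.
    return selected

ids = sorted(im["id"] for im in select_amsl_images(images))
```

In the example, B fails the cloud filter, C the asset-size filter, and D the tide filter, leaving A and E.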
Next, for each image the NDWI was calculated:
1. Calculate the normalised difference using the B3 (green) and B8 (near infrared).
2. Shift the value range from between -1 and +1 to values between 1 and 255 (0 reserved as no-data value).
3. Export image as 8 bit unsigned Integer grey scale image.
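Steps 1-3 amount to a normalised difference followed by a linear rescale from [-1, +1] to [1, 255]; a minimal sketch:

```python
import numpy as np

def ndwi_to_uint8(green, nir):
    """NDWI = (G - NIR) / (G + NIR), rescaled from [-1, 1] to [1, 255].

    0 is reserved as the no-data value, so valid pixels start at 1.
    """
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    ndwi = (green - nir) / (green + nir)
    # Linear shift: -1 -> 1, +1 -> 255.
    return np.round((ndwi + 1.0) / 2.0 * 254.0 + 1.0).astype(np.uint8)
```

Water (green brighter than NIR) maps to high values; land maps to low values.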
During the next step, we generated vector polygons from the grey scale image using a NDWI threshold:
1. Upscale image to 5 m resolution using bilinear interpolation. This was to help smooth the coastline and reduce the error introduced by the jagged pixel edges.
2. Apply a threshold to create a binary image (see "NDWI Threshold" for more information) with the value 1 for land and 2 for water (0: no data).
3. Create polygons for land values (1) in the binary image.
4. Export as shapefile.
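The thresholding in step 2 can be sketched as follows; the threshold value here is a placeholder, as the actual value came from the "NDWI Threshold" analysis:

```python
import numpy as np

NO_DATA, LAND, WATER = 0, 1, 2

def classify_ndwi(ndwi_u8, threshold=128):
    """Classify a rescaled NDWI image (0 = no data, 1..255 = valid).

    `threshold` is a placeholder value; high (water-like) NDWI pixels
    are classed as water, the rest as land.
    """
    img = np.asarray(ndwi_u8)
    out = np.full(img.shape, NO_DATA, dtype=np.uint8)
    valid = img > 0
    out[valid & (img >= threshold)] = WATER
    out[valid & (img < threshold)] = LAND
    return out
```

Polygons are then traced around the land-valued (1) regions of this binary image.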
Finally, we created a single layer from the vectorised images:
1. Merge and dissolve all vector layers in QGIS.
2. Perform smoothing (QGIS toolbox, Iterations 1, Offset 0.25, Maximum node angle to smooth 180).
3. Perform simplification (QGIS toolbox, tolerance 0.00003).
4. Remove polygon vertices on the inner circle to fill out the continental Australia.
5. Perform manual QA/QC. In this step we removed false polygons created due to sun glint and breaking waves. We also removed very small features (1 – 1.5 pixel sized features, e.g. single mangrove trees) by calculating the area of each feature (in m2) and removing features smaller than 200 m2.
15th percentile composite:
The composite image was created using the 15th percentile of the pixel values in the image stack. The 15th percentile was chosen, in preference to the median, to select darker pixels in the stack, as these tend to correspond to images with clearer water conditions and higher tides.
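Per-pixel percentile compositing reduces a stack of co-registered images to one value per pixel; a minimal sketch:

```python
import numpy as np

def percentile_composite(stack, q=15):
    """Per-pixel percentile over an image stack of shape (n, rows, cols).

    q=15 favours darker pixels, which tend to come from clearer-water,
    higher-tide scenes (and suppresses bright clouds and sun glint).
    """
    return np.percentile(np.asarray(stack, dtype=float), q, axis=0)

stack = [[[10.0]], [[20.0]], [[100.0]]]  # three 1x1 "images"
composite = percentile_composite(stack)
```

With linear interpolation, the 15th percentile of (10, 20, 100) is 13, well below the median of 20.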
High sun-glint image detection:
Images with high sun-glint can lead to lower quality composite images. To determine high sun-glint images, a land mask was first applied to the image to only retain water pixels. This land mask was estimated using NDWI. The proportion of the water pixels in the near-infrared and short-wave infrared bands above a sun-glint threshold was calculated. Images with a high proportion were then filtered out of the image collection.
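A sketch of this screening, with placeholder thresholds (the actual threshold values are not stated in this description):

```python
import numpy as np

def high_glint(nir, swir, water_mask, glint_threshold=0.05, max_fraction=0.2):
    """Flag an image as high sun-glint.

    water_mask: boolean array, True for water pixels (from an NDWI
    land mask). The two thresholds are illustrative placeholders.
    """
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    water = np.asarray(water_mask, dtype=bool)
    # Glint is specular, raising NIR and SWIR over otherwise dark water;
    # flag the image when too many water pixels are bright in either band.
    glinty = (nir[water] > glint_threshold) | (swir[water] > glint_threshold)
    return glinty.mean() > max_fraction
```

Flagged images are dropped from the collection before compositing.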
Sun-glint removal and atmospheric correction:
The Top of Atmosphere L1
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
This dataset collection contains A0 maps of the Keppel Island region based on satellite imagery and fine-scale habitat mapping of the islands and marine environment. This collection provides the source satellite imagery used to produce these maps and the habitat mapping data.
The imagery used to produce these maps was developed by blending high-resolution imagery (1 m) from ArcGIS Online with a clear-sky composite derived from Sentinel 2 imagery (10 m). The Sentinel 2 imagery was used to achieve full coverage of the entire region, while the high-resolution imagery was used to provide detail around island areas.
The blended imagery is a derivative product of the Sentinel 2 imagery and ArcGIS Online imagery, using Photoshop to manually blend the best portions of each image into the final product. The imagery is provided for the sole purpose of reproducing the A0 maps.
Methods:
The high resolution satellite composite was developed by manually masking and blending a Sentinel 2 composite image and high resolution imagery from ArcGIS Online World Imagery (2019).
The Sentinel 2 composite was produced by statistically combining the clearest 10 images from 2016 - 2019. These images were manually chosen based on their very low cloud cover, lack of sun glint and clear water conditions. These images were then combined together to remove clouds and reduce the noise in the image.
The processing of the images was performed using a script in Google Earth Engine. The script combines the manually chosen imagery to estimate the clearest imagery. The dates of the images were chosen using the EOBrowser (https://www.sentinel-hub.com/explore/eobrowser) to preview all the Sentinel 2 imagery from 2015-2019. The images that were mostly free of clouds, with little or no sun glint, were recorded. Each of these dates was then viewed in Google Earth Engine with high contrast settings to identify images that had high water surface noise due to algal blooms, waves, or re-suspension. These were excluded from the list. All the images were then combined by applying a histogram analysis of each pixel, with the final image using the 40th percentile of the time series of the brightness of each pixel. This approach helps exclude effects from clouds.
The contrast of the image was stretched to highlight the marine features, whilst retaining detail in the land features. This was done by choosing a black point for each channel that would provide a dark setting for deep clear water. Gamma correction was then used to lighten the dark water features, whilst not over-exposing the brighter shallow areas.
Both the high resolution satellite imagery and the Sentinel 2 imagery were combined at 1 m pixel resolution. The Sentinel 2 tiles were upsampled to match the resolution of the high-resolution imagery. These two sets of imagery were then layered in Photoshop. The brightness of the high-resolution satellite imagery was then adjusted to match the Sentinel 2 imagery. A mask was then used to retain and blend the imagery that showed the best detail of each area. The blended tiles were then merged with the overall area imagery using a GDAL merge, resulting in an upscaling of the Sentinel 2 imagery to 1 m resolution.
Habitat Mapping:
A 5 m resolution habitat mapping was developed based on the satellite imagery, aerial imagery available, and monitoring site information. This habitat mapping was developed to help with monitoring site selection and for the mapping workshop with the Woppaburra TOs on North Keppel Island in Dec 2019.
The habitat maps should be considered as draft as they don't consider all available in water observations. They are primarily based on aerial and satellite images.
The habitat mapping includes: Asphalt, Buildings, Mangrove, Cabbage-tree palm, Sheoak, Other vegetation, Grass, Salt Flat, Rock, Beach Rock, Gravel, Coral, Sparse coral, Unknown not rock (macroalgae on rubble), Marine feature (rock).
An assumed layer ordering allowed the digitisation of these features to be sped up: for example, if coral was growing over a marine feature, then the boundary of the marine feature needed to be digitised, then the coral feature, but not the boundary between the marine feature and the coral. Because the coral sits on top of the marine feature, it was known that the coral would be cut out from the marine feature, saving time in digitising this boundary. Digitisation was performed on an iPad using Procreate software and an Apple Pencil to draw the features as layers in a drawing. Due to memory limitations of the iPad, the region was digitised using 6000x6000 pixel tiles. The raster images were converted back to polygons and the tiles merged together.
A python script was then used to clip the layer sandwich so that there is no overlap between feature types.
Habitat Validation:
Only limited validation was performed on the habitat map. To assist in its development, nearly every YouTube video on the Keppel Islands available at the time of development (2019) was reviewed and, where possible, georeferenced to provide a better understanding of the local habitats at the scale of the mapping, prior to the mapping being conducted. Several validation points were observed during the workshop. The map should be considered as largely unvalidated.
data/coastline/Keppels_AIMS_Coastline_2017.shp:
The coastline dataset was produced by starting with the Queensland coastline dataset by DNRME (Downloaded from http://qldspatial.information.qld.gov.au/catalogue/custom/detail.page?fid={369DF13C-1BF3-45EA-9B2B-0FA785397B34} on 31 Aug 2019). This was then edited to work at a scale of 1:5000, using the aerial imagery from Queensland Globe as a reference and a high-tide satellite image from 22 Feb 2015 from Google Earth Pro. The perimeter of each island was redrawn. This line feature was then converted to a polygon using the "Lines to Polygon" QGIS tool. The Keppel island features were then saved to a shapefile by exporting with a limited extent.
data/labels/Keppel-Is-Map-Labels.shp:
This contains 70 named places in the Keppel island region. These names were sourced from literature and existing maps. Unfortunately, no provenance of the names was recorded. These names are not official. This includes the following attributes:
- Name: Name of the location. Examples: Bald, Bluff
- NameSuffix: End of the name which is often a description of the feature type: Examples: Rock, Point
- TradName: Traditional name of the location
- Scale: Map scale where the label should be displayed.
data/lat/Keppel-Is-Sentinel2-2016-19_B4-LAT_Poly3m_V3.shp:
This corresponds to a rough estimate of the LAT contours around the Keppel Islands. LAT was estimated from tidal differences in Sentinel-2 imagery and light penetration in the red channel. Note this is not well calibrated and should be used only as a rough guide. Only one rough in-situ validation was performed, at low tide on Ko-no-mie at the edge of the reef near the education centre. This indicated that the LAT estimate was within a depth error range of about ±0.5 m.
data/habitat/Keppels_AIMS_Habitat-mapping_2019.shp:
This shapefile contains the mapped land and marine habitats. The classification type is recorded in the Type attribute.
Format:
GeoTiff (Internal JPEG format - 538 MB)
PDF (A0 regional maps - ~30MB each)
Shapefile (Habitat map, Coastline, Labels, LAT estimate)
Data Location:
This dataset is filed in the eAtlas enduring data repository at: data\custodian\2020-2029-AIMS\Keppels_AIMS_Regional-maps
Single photon light detection and ranging (SPL LiDAR) is an active remote sensing technology for:
- mapping vegetation aspects including cover, density and height
- representing the earth's terrain and elevation contours
We acquired SPL data on an airborne acquisition platform under leaf-on conditions to support Forest Resources Inventory (FRI) development.
FRI provides:
- information to support resource management planning and land use decisions within Ontario’s Managed Forest Zone
- information on tree species, density, heights, ages and distribution
The SPL data has a minimum point density of 25 points/m². Each point represents the height of objects such as:
- ground-level terrain
- vegetation
- buildings
The LiDAR was classified according to the Ontario LiDAR classification scheme. Low, medium and tall vegetation are assigned to classes 3, 4, 5 and 12.
The FRI SPL products include the following elevation models and derivatives:
- digital terrain model
- canopy height model
- digital surface model
- intensity model (signal width to return ratio)
- forest inventory raster metrics
- forest inventory attributes
- predicted streams
- hydro break lines
- block control points
LiDAR fMVA data supports developing detailed 3D analysis of:
- forest inventory
- terrain
- hydrology
- infrastructure
- transportation
We made significant investments in Single Photon LiDAR data, now available on the Open Data Catalogue. Derivatives are available for streaming or through download.
The map reflects areas with LiDAR data available for download. Zoom in to see data tiles and download options. Select individual tiles to download the data.
You can download:
- classified point cloud data in .laz format
- derivatives in a compressed .tiff format
- Forest Resource Inventory leaf-on LiDAR Tile Index (Download: Shapefile | File Geodatabase | GeoPackage)
Web raster services
You can access the data through our web raster services. For more information and tutorials, read the Ontario Web Raster Services User Guide.
If you have questions about how to use the Web raster services, email Geospatial Ontario (GEO) at geospatial@ontario.ca.
Note: Internal Users replace “https://ws.” with “https://intra.ws.”
CHM - https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/Elevation/FRI_CHM_SPL/ImageServer
DSM - https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/Elevation/FRI_DSM_SPL/ImageServer
DTM - https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/Elevation/FRI_DTM_SPL/ImageServer
T1 Imagery - https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/AerialImagery/FRI_Imagery_T1/ImageServer
T2 Imagery - https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/AerialImagery/FRI_T2_Imagery/ImageServer
Land Cover - https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/Thematic/Ontario_Land_Cover_Compilation_v2/ImageServer
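A request against any of these ImageServer endpoints can be built following the standard ArcGIS REST exportImage conventions; a sketch (the bounding box and output size are arbitrary examples, and individual services may restrict parameters):

```python
from urllib.parse import urlencode

def export_image_url(service_url, bbox, size=(512, 512), fmt="tiff"):
    """Build an ArcGIS ImageServer exportImage request URL.

    Follows the standard ArcGIS REST API exportImage conventions;
    check the service's own documentation for supported parameters.
    """
    params = {
        "bbox": ",".join(str(v) for v in bbox),       # xmin,ymin,xmax,ymax
        "size": f"{size[0]},{size[1]}",               # width,height in pixels
        "format": fmt,
        "f": "image",                                  # return raw image bytes
    }
    return f"{service_url}/exportImage?{urlencode(params)}"

url = export_image_url(
    "https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/Elevation/FRI_CHM_SPL/ImageServer",
    bbox=(-80.5, 44.0, -80.0, 44.5),
)
```

The resulting URL can be fetched with any HTTP client to retrieve a clipped raster.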
Service Endpoint
https://services1.arcgis.com/TJH5KDher0W13Kgo/arcgis/rest/services/FRI_Data_Access/FeatureServer
Additional Documentation
Forest Resources Inventory | ontario.ca
Status
Ongoing: data is being continually updated
Maintenance and Update Frequency
As needed: data is updated as deemed necessary
Contact
Natural Resources Information Unit, Forest Resources Inventory Program, FRI@ontario.ca
The Ancient Woodland Inventory identifies over 52,000 ancient woodland sites in England. Ancient woodland is identified using the presence or absence of woods on old maps, information about the wood's name, shape, internal boundaries, location relative to other features, ground survey, and aerial photography. The information recorded about each wood and stored in the Inventory Database includes its grid reference, its area in hectares and how much is semi-natural or replanted. A guidance document can be found on our Amazon Cloud Service. Prior to the digitisation of the boundaries, only paper maps depicting each ancient wood at 1:50 000 scale were available. Full metadata can be viewed on data.gov.uk.
These DEMs were produced from digitized contours at a cell resolution of 100 meters. Vector contours of the area were used as input to a software package that interpolates between contours to create a DEM representing the terrain surface. The vector contours had a contour interval of 25 feet. The data cover the BOREAS MSAs of the SSA and NSA and are given in a UTM map projection.
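Contour-to-grid interpolation can be illustrated in one dimension: between two crossed contours, elevation varies linearly with distance. A sketch with made-up profile distances and the source 25 ft contour interval:

```python
import numpy as np

# Distances (m) along a hypothetical profile where contour lines are
# crossed, and their elevations (ft) at the 25 ft contour interval.
contour_dist = np.array([0.0, 300.0, 800.0])
contour_elev = np.array([500.0, 525.0, 550.0])

# Sample elevations at a regular 100 m cell spacing, as in the DEM
# gridding (a full DEM interpolates in 2-D between contour lines).
cells = np.arange(0.0, 801.0, 100.0)
profile = np.interp(cells, contour_dist, contour_elev)
```

Cells that fall on a contour take its elevation exactly; cells between contours are linearly interpolated.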
We combined a detailed field study of canopy gap fraction with spectral mixture analysis of Landsat ETM+ satellite imagery to assess landscape and regional dynamics of canopy damage following selective logging in an eastern Amazon forest. Our field studies encompassed measurements of ground damage and canopy gap fractions along multitemporal sequences of post-harvest regrowth of 0.5-3.5 yr. Areas used to stage harvested logs prior to transport, called log decks, had the largest forest gap fractions, but their contribution to the landscape-level gap dynamics was minor. Tree falls were spatially the most extensive form of canopy damage following selective logging, but the canopy gap fractions resulting from them were small. Reduced-impact logging resulted in consistently less damage to the forest canopy than did conventional logging practices. This was true at the level of individual landscape strata such as roads, skids, and tree falls as well as at the area-integrated scale. A spectral mixture model was employed that utilizes bundles of field and image spectral reflectance measurements with a Monte Carlo analysis to estimate high spatial resolution (subpixel) cover of forest canopies, exposed nonphotosynthetic vegetation, and soils in the Landsat imagery. The method proved highly useful for quantifying forest canopy cover fraction in the log decks, roads, skids, tree fall, and intact forest areas, and it tracked canopy damage up to 3.5 yr post-harvest. Forest canopy cover fractions derived from satellite observations were highly and inversely correlated with field- and satellite-based measurements. A 450-km^2 study of gap fraction showed that approximately one-half of the canopy opening caused by logging is closed within one year of regrowth following timber harvests. This is the first regional-scale study utilizing field measurements, satellite observations, and models to quantify forest canopy damage and recovery following selective logging in the Amazon.
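Spectral mixture analysis models each pixel spectrum as a linear combination of endmember spectra (here forest canopy, nonphotosynthetic vegetation, and soil). A minimal unconstrained least-squares sketch with invented endmember spectra; the study itself additionally used bundles of field and image spectra with a Monte Carlo analysis:

```python
import numpy as np

# Invented endmember spectra (columns: canopy, NPV, soil) across four
# hypothetical bands -- NOT real ETM+ reflectances.
E = np.array([
    [0.05, 0.20, 0.15],
    [0.45, 0.30, 0.25],
    [0.25, 0.35, 0.40],
    [0.10, 0.30, 0.35],
])

def unmix(pixel, endmembers=E):
    """Estimate subpixel endmember cover fractions for one spectrum
    by unconstrained least squares. Real SMA typically adds sum-to-one
    and non-negativity constraints."""
    frac, *_ = np.linalg.lstsq(endmembers, np.asarray(pixel, float), rcond=None)
    return frac

# A pixel that is 70% canopy and 30% soil should recover those fractions.
pixel = 0.7 * E[:, 0] + 0.3 * E[:, 2]
fractions = unmix(pixel)
```

Because the synthetic pixel lies exactly in the span of the endmembers, the recovered fractions are exact; real pixels carry residuals.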
The Wood Retained All Years MCF feature layer contains retained roundwood totals for states across individual years and a total for all years. Roundwood volume (referred to as receipts in the dataset) is reported in thousand cubic feet (MCF). This feature layer was created for the Wood Flow Visualization web application and is referenced in:
- Wood Flow Radials Web Map
- Wood Flow Radials Dashboard
Currently, the dashboard contains data for the Southern Research Station (SRS). Data from other research stations will be added in the coming months.
The PALEOMAP project produces paleogeographic maps illustrating the Earth's plate tectonic, paleogeographic, climatic, oceanographic and biogeographic development from the Precambrian to the Modern World and beyond.
A series of digital data sets has been produced consisting of plate tectonic data, climatically sensitive lithofacies, and biogeographic data. Software has been developed to plot maps using the PALEOMAP plate tectonic model and digital geographic data sets: PGIS/Mac, Plate Tracker for Windows 95, Paleocontinental Mapper and Editor (PCME), Earth System History GIS (ESH-GIS), PaleoGIS (uses ArcView), and PALEOMAPPER.
Teaching materials for educators include atlases, slide sets, VHS animations, JPEG images and CD-ROM digital images.
Some PALEOMAP products include: Plate Tectonic Computer Animation (VHS) illustrating motions of the continents during the last 850 million years.
Paleogeographic Atlas consisting of 20 full color paleogeographic maps. (Scotese, 1997).
Paleogeographic Atlas Slide Set (35mm)
Paleogeographic Digital Images (JPEG, PC/Mac diskettes)
Paleogeographic Digital Image Archive (EPS, PC/Mac Zip disk) consists of the complete digital archive of original digital graphic files used to produce plate tectonic and paleogeographic maps for the Paleogeographic Atlas.
GIS software such as PaleoGIS and ESH-GIS.
[From The Landmap Project: Introduction, "http://www.landmap.ac.uk/background/intro.html"]
A joint project to provide orthorectified satellite image mosaics of Landsat,
SPOT and ERS radar data and a high resolution Digital Elevation Model for the
whole of the UK. These data will be in a form which can easily be merged with
other data, such as road networks, so that any user can quickly produce a
precise map of their area of interest.
Predominantly aimed at the UK academic and educational sectors, these data and
software are held online at the Manchester University super computer facility
where users can either process the data remotely or download it to their local
network.
Please follow the links to the left for more information about the project or
how to obtain data or access to the radar processing system at MIMAS. Please
also refer to the MIMAS spatial-side website,
"http://www.mimas.ac.uk/spatial/", for related remote sensing materials.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
The Woodflow MCF feature layer contains annual timber production volumes moving across the United States (between states) and some international locations. The data includes state, year, wood class (hardwood and softwood) and product. Roundwood volume (referred to as receipts in the dataset) is reported in thousand cubic feet (MCF). This feature layer was created for the Wood Flow Visualization web application and is referenced in:Wood Flow Details Web MapWood Flow Details DashboardCurrently, the dashboard contains data for the Southern Research Station (SRS). Data from other research stations will be added in the coming months.About FIA's BIGMAPThe USDA Forest Service’s Forest Inventory and Analysis (FIA) program is the authoritative source of information about the conditions of the Agency’s forested lands. Within the FIA program, a new secure, cloud-based, and flexible computing environment has been created, named the Big Data Mapping & Analytics Platform (BIGMAP). BIGMAP is designed to store, process, analyze, and deliver Forest Service content. It does so in ways that streamline our internal workflows and make it easy to share authoritative, map-based content through web technologies. BIGMAP leverages commercial off-the-shelf solutions, reducing development and maintenance costs over the longer term. This focus capitalizes upon Agency investments in FIA and other data. The resulting, authoritative map content will populate the Agency’s WebGIS library for use by Agency managers, decision-makers, and other interested parties.
Open Government Licence - Ontario: https://www.ontario.ca/page/open-government-licence-ontario
Zoom in on the map above and click your area of interest, or use the Tile Index linked below to determine which package(s) you require for download. The Digital Elevation Models (DEM) are 2-m resolution raster elevation products that were generated from the Ontario Classified Point Cloud (Imagery-Derived) data. The point clouds were created via a pixel-autocorrelation process from the stereo aerial photography of the Land Information Ontario (LIO) imagery program. It is important to note that the DEM does not represent a full ‘bare-earth’ elevation surface. In areas with very few points classified as Ground, interpolation has occurred across the resulting voids. Points classified as Ground have not been assessed for accuracy to determine if they represent true ground features. Some features, such as larger buildings and larger forest stands, are still raised above the ground surface.
For more detailed information about this dataset, refer to the associated User Guide.
Now also available through a web service which exposes the data for visualization, geoprocessing and limited download.
The service is best accessed through the ArcGIS REST API, either directly or by setting up an ArcGIS server connection using the REST endpoint URL. The service draws using the Web Mercator projection.
For more information on what functionality is available and how to work with the service, read the Ontario Web Raster Services User Guide. If you have questions about how to use the service, email Geospatial Ontario (GEO) at geospatial@ontario.ca.
Service Endpoints
https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/Elevation/Ontario_DEM_ImageryDerived/ImageServer
https://intra.ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/Elevation/Ontario_DEM_ImageryDerived/ImageServer (Government of Ontario internal users)
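As a sketch of how the service endpoint above might be queried through the ArcGIS REST API's standard exportImage operation: the endpoint URL is taken from this page, but the bounding-box values below are placeholders, not real tile extents.

```python
from urllib.parse import urlencode

# Public endpoint listed above for the imagery-derived Ontario DEM.
ENDPOINT = ("https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/"
            "Elevation/Ontario_DEM_ImageryDerived/ImageServer")

def build_export_image_url(bbox, size=(512, 512), fmt="tiff"):
    """Build an exportImage request URL for an ArcGIS ImageServer.

    bbox is (xmin, ymin, xmax, ymax) in the service's spatial reference;
    the service draws using Web Mercator (wkid 102100).
    """
    params = {
        "bbox": ",".join(str(v) for v in bbox),
        "bboxSR": 102100,
        "size": f"{size[0]},{size[1]}",
        "format": fmt,
        "f": "image",
    }
    return f"{ENDPOINT}/exportImage?{urlencode(params)}"
```

The resulting URL can be fetched with any HTTP client; for full parameter details and download limits, consult the Ontario Web Raster Services User Guide mentioned above.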
Additional Documentation
Ontario DEM (Imagery-Derived) - User Guide (DOCX)
Ontario DEM (Imagery-Derived) - Tile Index (SHP)
SCOOP 2013 - Vertical Accuracy Assessment (Word)
SCOOP 2013 - Vertical Accuracy Assessment - Data (SHP)
Product Packages
SCOOP 2013 DEM Package A (IMG) SCOOP 2013 DEM Package B (IMG) SCOOP 2013 DEM Package C (IMG) SCOOP 2013 DEM Package D (IMG) SCOOP 2013 DEM Package E (IMG) SCOOP 2013 DEM Package F (IMG) SCOOP 2013 DEM Package G (IMG) SCOOP 2013 DEM Package H (IMG)
DRAPE 2014 DEM Package A (IMG) DRAPE 2014 DEM Package B (IMG) DRAPE 2014 DEM Package C (IMG) DRAPE 2014 DEM Package D (IMG) DRAPE 2014 DEM Package E (IMG) DRAPE 2014 DEM Package F (IMG) DRAPE 2014 DEM Package G (IMG) DRAPE 2014 DEM Package H (IMG) DRAPE 2014 DEM Package I (IMG)
Algonquin 2015 DEM Package (IMG)
SWOOP 2015 DEM Package A (IMG) SWOOP 2015 DEM Package B (IMG) SWOOP 2015 DEM Package C (IMG) SWOOP 2015 DEM Package D (IMG) SWOOP 2015 DEM Package E (IMG) SWOOP 2015 DEM Package F (IMG) SWOOP 2015 DEM Package G (IMG) SWOOP 2015 DEM Package H (IMG)
COOP 2016 DEM Package A (IMG) COOP 2016 DEM Package B (IMG) COOP 2016 DEM Package C (IMG) COOP 2016 DEM Package D (IMG) COOP 2016 DEM Package E (IMG) COOP 2016 DEM Package F (IMG) COOP 2016 DEM Package G (IMG) COOP 2016 DEM Package H (IMG) COOP 2016 DEM Package I (IMG)
NWOOP 2017 DEM Package A (IMG) NWOOP 2017 DEM Package B (IMG) NWOOP 2017 DEM Package C (IMG) NWOOP 2017 DEM Package D (IMG) NWOOP 2017 DEM Package E (IMG) NWOOP 2017 DEM Package F (IMG)
Status: Ongoing. Data is continually being updated.
Maintenance and Update Frequency: As needed. Data is updated as deemed necessary.
Contact: Ministry of Natural Resources - Geospatial Ontario, geospatial@ontario.ca
This data set is a condensed forest cover type digital map of Saskatchewan, prepared by the Saskatchewan Environment and Resource Management, Forestry Branch - Inventory Unit (SERM-FBIU). The map was generalized from SERM township maps of vegetation cover at an approximate scale of 1:63,000 (1 in. = 1 mile); the cover information was iteratively generalized until it was compiled on a 1:1,000,000 scale map base.
Improved vegetation distribution and emission data for Africa south of the equator were developed for the Southern African Regional Science Initiative (SAFARI 2000) and combined with biogenic volatile organic compound (BVOC) emission measurements to estimate BVOC emissions for the southern African region. BVOC emissions were estimated for southern Africa on a monthly basis over a one-year period by combining GIS layers of vegetation, LAI, and climate with a biogenic emissions model, GLOBEIS (Guenther et al., 1993; Guenther, 1999). Model input data included: vegetation data (Rutherford et al., 2000); species emission capacity data (Greenberg et al., 2003; Guenther et al., 1996; Harley et al., 2003; Klinger et al., 1998; Otter et al., 2002; Serca et al., 2001; Wiedinmyer et al., 2004); LAI data (1987-88 ISLSCP LAI; Sellers et al., 1994); cloud cover (MODIS LAI cloud mask); and temperature data (NOAA NCDC data). Model output includes emissions estimates for isoprene, light-dependent monoterpene, stored monoterpene, and other volatile organic compounds by land cover category and by vegetation type (g C m-2 mo-1). Emissions were modeled for a summer (January) and a winter (July) month in 2001. Monthly and annual total emissions per constituent for the year 2001 were also calculated. The data files containing the model outputs are ASCII comma-delimited files. Graphics (.jpg files) included with this data set show the distribution of light-dependent monoterpene emissions across southern Africa during January, the average monthly isoprene emissions over southern Africa in January and in July, and the average monthly stored monoterpene emissions over southern Africa in January and in July.
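For orientation, the light and temperature activity factors at the core of the Guenther et al. (1993) isoprene algorithm, on which GLOBEIS is based, take the following textbook form. The constants here are the commonly published defaults from that paper, not values read from the GLOBEIS configuration used for this dataset.

```python
import math

def gamma_light(ppfd, alpha=0.0027, c_l1=1.066):
    """Light activity factor; ppfd is the PAR flux in umol m-2 s-1."""
    return (alpha * c_l1 * ppfd) / math.sqrt(1.0 + alpha**2 * ppfd**2)

def gamma_temp(t, t_s=303.0, t_m=314.0, c_t1=95000.0, c_t2=230000.0, r=8.314):
    """Temperature activity factor; t in kelvin, t_s = 303 K standard temperature."""
    num = math.exp(c_t1 * (t - t_s) / (r * t_s * t))
    den = 1.0 + math.exp(c_t2 * (t - t_m) / (r * t_s * t))
    return num / den

def isoprene_emission(capacity, ppfd, t):
    """Emission = emission capacity at standard conditions x light and
    temperature adjustment factors (the Guenther 1993 form)."""
    return capacity * gamma_light(ppfd) * gamma_temp(t)
```

At roughly standard conditions (PPFD near 1000 umol m-2 s-1, 303 K), the combined adjustment factor is close to 1, so the emission approaches the measured emission capacity; emissions rise with light and, up to an optimum near 314 K, with temperature.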
The ECMWF Re-Analysis (ERA-40) is a global atmospheric analysis of many conventional observations and satellite data streams for the period September 1957 to August 2002. There are numerous data products, separated into dataset series based on resolution, vertical coordinate reference, and likely research applications. Descriptions of the series organization and direct links to information about all ERA-40 products are available [https://rda.ucar.edu/cgi-bin/joey/era40sum.pl?ds=ds118.1]. This dataset contains monthly means for 9 variables on the original 60 model levels, as well as surface geopotential and the log of surface pressure. Variables are archived at T159 spectral resolution in spherical harmonics or on the reduced N80 Gaussian grid, with a resolution of approximately 125 kilometers. All variables are output four times a day for the entire period. Monthly means have been computed for each analysis hour as well as a daily mean. The ERA-Interim data from ECMWF is an update to the ERA-40 project. ERA-Interim starts in 1989 and has a higher horizontal resolution (T255, N128, nominally 0.703125 degrees) than ERA-40 (T159, N80, nominally 1.125 degrees). ERA-Interim is based on a more current model than ERA-40 and uses 4D-Var (as opposed to 3D-Var in ERA-40). ECMWF will continue to run the ERA-Interim model in near real time through at least 2010, and possibly longer. This data is available in ds627.0 [https://rda.ucar.edu/datasets/ds627.0/].
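The nominal resolutions quoted above follow directly from the reduced Gaussian grid definition: an N-grid has 4N longitude points around the equator, so its nominal spacing is 360/(4N) degrees. A quick check:

```python
def nominal_resolution_deg(n: int) -> float:
    """Nominal grid spacing in degrees for a reduced Gaussian N-grid,
    which has 4*N longitude points on its equatorial rows."""
    return 360.0 / (4 * n)
```

nominal_resolution_deg(80) gives the 1.125 degrees quoted for ERA-40, and nominal_resolution_deg(128) the 0.703125 degrees quoted for ERA-Interim; at roughly 111 km per degree at the equator, 1.125 degrees is about 125 km, matching the figure above.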
This dataset shows the extent of peak overland flooding in the Red River Valley in 2011, based on RADARSAT-1 satellite imagery. During processing, the raw dataset was resampled to 12.5-metre pixel resolution, then classified using PCI Geomatica, specialized software designed to manipulate spaceborne imagery. The final output depicting the flooding boundary is available as a TIFF or shapefile.

Launched in November 1995, RADARSAT-1 was a Canadian-led project which provided useful information to both commercial and scientific users in such fields as disaster management, agriculture, cartography, hydrology, forestry, oceanography, ice studies, and coastal monitoring. Equipped with a powerful synthetic aperture radar (SAR) instrument, it acquired images of the Earth day or night, in all weather and through cloud cover, smoke, and haze. As of March 2013, the satellite was declared non-operational and is no longer collecting data.

Many applications were developed to take advantage of RADARSAT-1's capacity for detecting the presence of water, including monitoring flooding and the build-up of river ice, and mapping the melting of snow-covered areas. When used for flood monitoring, RADARSAT-1 data helped assess the impact of flooding, predict the extent and duration of floodwaters, analyze the environmental impact of water diversion projects, and develop flood mitigation measures.

Fields included:
FID: Internal feature number
NAME: Flooded area name
AREA_SQKM: Size of flooded area
The ECMWF Re-Analysis (ERA-40) is a global atmospheric analysis of many conventional observations and satellite data streams for the period September 1957 to August 2002. There are numerous data products, separated into dataset series based on resolution, vertical coordinate reference, and likely research applications. Descriptions of the series organization and direct links to information about all ERA-40 products are available [https://rda.ucar.edu/cgi-bin/joey/era40sum.pl?ds=ds120.0]. This dataset contains monthly means of the ERA-40 2.5-degree latitude-longitude gridded surface and single-level analysis. Monthly means have been computed for each analysis hour as well as a daily mean. The ERA-Interim data from ECMWF is an update to the ERA-40 project. ERA-Interim starts in 1989 and has a higher horizontal resolution (T255, N128, nominally 0.703125 degrees) than ERA-40 (T159, N80, nominally 1.125 degrees). ERA-Interim is based on a more current model than ERA-40 and uses 4D-Var (as opposed to 3D-Var in ERA-40). ECMWF will continue to run the ERA-Interim model in near real time through at least 2010, and possibly longer. This data is available in ds627.0 [https://rda.ucar.edu/datasets/ds627.0/].
This data set provides mapped estimates of the stand age of young (less than 25 years old) larch forests across Siberia from 1989-2012 at 30-m resolution. The age estimates were derived from Landsat-based composites and tree cover for years 2000 and 2012 developed by the Global Forest Change (GFC) project and the stand-replacing fire mapping (SRFM) data set. This approach is based on the assumption that the relationship between the spectral signature of a burned or unburned forest stand acquired by Landsat ETM+ and TM sensors and stand age before and after the year 2000 is similar, thus allowing for training an algorithm on the data from the post-2000 era and applying the algorithm to infer stand age for the pre-2000 era. The output map combines the modeled forest disturbances before 2000 and direct observations of forest loss after 2000 to deliver a 24-year stand age distribution map.
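The per-pixel combination step described above can be sketched as follows. This is a simplification under stated assumptions: the 2012 reference year and the under-25-year scope come from the dataset description, but the function, its arguments, and the preference for observed post-2000 loss over modeled pre-2000 disturbance are illustrative, not the project's actual code.

```python
MAP_YEAR = 2012  # reference year of the output stand age map

def stand_age(post2000_loss_year=None, pre2000_disturbance_year=None):
    """Illustrative per-pixel stand age at MAP_YEAR.

    Prefers a directly observed post-2000 forest loss year (GFC) over a
    modeled pre-2000 disturbance year; returns None where no disturbance
    is known or the stand falls outside the map's under-25-year scope.
    """
    year = (post2000_loss_year if post2000_loss_year is not None
            else pre2000_disturbance_year)
    if year is None:
        return None
    age = MAP_YEAR - year
    return age if 0 <= age < 25 else None
```

A pixel with observed loss in 2005 would map to age 7, one with a modeled 1995 fire to age 17, and a stand disturbed before 1988 would fall outside the 24-year window.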