DESCRIPTION OF ORIGINAL PARCELS DATASET HOSTED BY NJ OGIS: The statewide composite of parcels (cadastral) data for New Jersey is made available here in the Web Mercator projection (EPSG 3857). It was developed during the Parcels Normalization Project in 2008-2014 by the NJ Office of Information Technology, Office of GIS (NJOGIS). The normalized parcels data are compatible with the New Jersey Department of Treasury MOD-IV system currently used by Tax Assessors, and selected attributes from that system have been joined with the parcels in this dataset. Please see the NJGIN parcel dataset page for additional resources, including a downloadable zip file of the statewide data: https://njgin.nj.gov/njgin/edata/parcels/index.html#!/

This composite of parcels data serves as one of New Jersey's framework GIS data sets. Stewardship and maintenance of the data will continue to be the purview of county and municipal governments, but the statewide composite will be maintained by NJOGIS.

Parcel attributes were normalized to a standard structure, specified in the NJ GIS Parcel Mapping Standard, to store parcel information and provide a PIN (parcel identification number) field that can be used to match records with suitably processed property tax data. The standard is available for viewing and download at https://njgin.state.nj.us/oit/gis/NJ_NJGINExplorer/docs/NJGIS_ParcelMappingStandardv3.2.pdf. The PIN can also be constructed from attributes available in the MOD-IV Tax List Search table (see below).

This dataset includes a large number of additional attributes from matched MOD-IV records; however, not all MOD-IV records match to a parcel, for reasons explained elsewhere in this metadata record. The statewide property tax table, including all MOD-IV records, is available as a separate download, "MOD-IV Tax List Search Plus Database of New Jersey." Users who need only the parcel boundaries with limited attributes may obtain those from a separate download, "Parcels Composite of New Jersey." Also available separately are countywide parcels and tables of property ownership and tax information extracted from the NJ Division of Taxation database.

The polygons delineated in this dataset do not represent legal boundaries and should not be used to provide a legal determination of land ownership. Parcels are not survey data and should not be used as such. Please note that these parcel datasets are not intended for use as tax maps. They are intended to provide reasonable representations of parcel boundaries for planning and other purposes. Please see Data Quality / Process Steps for details about updates to this composite since its first publication.
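For users matching the tax table to the polygons, a minimal sketch of the PIN join is below. The file names are hypothetical, and the PIN field is assumed to already follow the Parcel Mapping Standard in both sources; adjust to the actual downloads.

```python
import geopandas as gpd
import pandas as pd

# Hypothetical file names: adjust to the actual statewide downloads.
parcels = gpd.read_file("Parcels_Composite_of_New_Jersey.gdb", layer="parcels")
modiv = pd.read_csv("modiv_tax_list.csv", dtype=str)

# Left join keeps every parcel polygon; MOD-IV records that do not match
# a parcel (see the notes above) simply contribute no attributes.
joined = parcels.merge(modiv, on="PIN", how="left", suffixes=("", "_modiv"))

matched = parcels["PIN"].isin(modiv["PIN"]).sum()
print(f"{matched} of {len(parcels)} parcels matched a MOD-IV record")
```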
MRGP News

If you already have an ArcGIS named user account, join the MRGP Group. Doing so allows you to complete the permit requirements under your organization's umbrella. As a group member you get access to all the MRGP items without having to log in and out. If you don't have an ArcGIS member account, please contact Chad McGann (MRGP Program Lead) at 802-636-7239 or your Regional Planning Commission's Transportation Planner.

April 9, 2025. Conditional logic in the webform for the newly published Open Drainage Survey was not calculating properly, leading to some records with "Undetermined" status and priority. Records have been rescored and the survey was republished with corrected logic. The field app version was not impacted.

March 11, 2025. The Road Erosion Inventory Survey123 questions for Open Drainage Roads are being streamlined to make assessments faster. Coming April 1st, the survey will only ask about erosion if the corresponding practice type is failing. This aims to use erosion as an indicator to measure the success of each of the four Open Drainage road elements for handling stormwater: crown, berm, drainage, turnout.

March 29, 2023. For MRGP permitting, Lyndonville Village (GEOID 5041950) has merged with Lyndonville Town (GEOID 5000541725). 121 segments and 14 outlets have been updated to reflect the administrative change.

December 8, 2023. The Open Drainage Road Inventory survey has been updated for the 2024 field season. We added and modified a few notes for clarification and corrected an issue with users submitting incomplete surveys. See the FAQ section below for how to delete the old survey and download the new one. The app will notify you there's an update and apply it, but we've experienced select-one questions with duplicate entries.

November 29, 2023. The Closed Drainage Road Inventory survey has been updated for the 2024 field season. There's a new outlet status option called "Not accessible" and a conditional follow-up question. This has been added to support MS4 requirements. See the FAQ section below for how to delete the old survey and download the new one. The app will notify you there's an update and apply it for you, but we've experienced select-one questions with duplicate entries.

Reporter for MRGP

The Reporter for MRGP doesn't require you to download any apps to complete an inventory; all you need is an internet connection and a web browser. The Reporter includes culverts and bridges from VTCULVERTS, town highways from VTrans, current status for MRGP segments and outlets, and second-cycle progress. The Reporter is a great way to submit work completed to meet the MRGP standards.

MRGP Fieldworker Solution

Step 1: Download the free mobile apps

For fieldworkers to collect and submit data to VT DEC, two free apps are required: ArcGIS Field Maps and Survey123. ArcGIS Field Maps is used first to locate the segment or outlet for inventory, and Survey123 is used to complete the Road Erosion Inventory.

• You can download ArcGIS Field Maps and Survey123 from the Google Play Store.
• You can download ArcGIS Field Maps and Survey123 from the Apple App Store.

Step 2: Sign into the mobile apps

You will need appropriate credentials to access the fieldworker solution. Please contact your Regional Planning Commission's Transportation Planner or Chad McGann (MRGP Program Lead) at 802-636-7239.

Open Field Maps, select 'ArcGIS Online' as shown below, and enter the username and password. The credentials are saved unless you sign out.
Step 3: Open the MRGP Mobile Map

If you're working in an area that has a reliable data connection (e.g. LTE or 4G), open the map below by selecting it.

Step 4: Select a road segment or outlet for inventory

Using your location, highlighted in red below, select the segment or outlet you need to inventory, and select 'Update Road Segment Status' from the pop-up to launch Survey123.
Step 5: Complete the Road Erosion Inventory and submit it to DEC

Selecting 'Update Road Segment Status' opens Survey123, downloads the relevant survey, and pre-populates the REI with important information for reporting to DEC. You will have to enter the same username and password to access the REI forms. The credentials are saved unless you sign out of Survey123.

Complete the survey using the appropriate supplement below and submit the assessment directly to VT DEC.

• Paved Roads with Catch Basin Supplement
• Paved and Gravel Roads with Drainage Ditches Supplement
Step 6: Repeat!

Go back to ArcGIS Field Maps, select the next segment for inventory, and repeat steps 3-5.
If you have questions related to the inventory protocol, reach out to Chad McGann, MRGP Program Lead, at chad.mcgann@vermont.gov, 802-636-7396. If you have questions about implementing the mobile data collection piece, please contact Ryan Knox, ADS-ANR IT, at ryan.knox@vermont.gov, (802) 793-0297.
How do I update a survey when a new one is available?

While the Survey123 app will notify you and update it for you, we've experienced some select-one questions having duplicate choices. It's a best practice to delete the old survey and download the new one. See this document for step-by-step instructions.

I already have an ArcGIS member account with my organization; can I use it to complete MRGP inventories?

Yes! The MRGP solution is shared within an ArcGIS Group that allows outside organizations. Click "join this group" and send a request to the ANR GIS team. This will allow you to complete MRGP requirements for the REI and stay logged into your organization. A win-win for us both!

AGOL Group: https://www.arcgis.com/home/group.html?id=027e1696b97a48c4bc50cbb931de992d#overview

The location where I'm doing inventory does not have data coverage (LTE or 4G). What can I do?

ArcGIS Field Maps allows you to take map areas offline when you expect spotty or no data coverage. I made a video to demonstrate the steps for taking map areas offline: https://youtu.be/ScpQnenDp7w. Survey123 operates offline by default, but you need to download the survey. My recommendation is to test the fieldworker solution (steps 1-5) before you go into the field, but don't submit the test survey.

How do I remove an offline area and create a new one?

Check out this how-to document for instructions: Delete and Download Offline Area.

Where can I download the Road Erosion Scoring shown on the Atlas?

You can download the scoring for both outlets and road segments through the VT Open Geodata Portal: https://geodata.vermont.gov/search?q=mrgp

How do I use my own map for launching the official MRGP REI survey form?

You can use the following custom URL to launch Survey123, open the REI, and prepopulate answers in the form. More information is here. TIP: add what's below directly in the HTML view of the popup, not the link, as described in the post I provided.
Segments (lines): Update Road Segment Status
Outlets (points): Update Outlet Status
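As an illustration of what such a popup link can look like: the snippet below uses the arcgis-survey123:// URL scheme, but the item ID, the field name segment_id, and the {SegmentID} popup placeholder are hypothetical stand-ins; substitute the values for the official MRGP REI form.

```html
<!-- Hypothetical example only: replace itemID and the field/placeholder
     names with the official MRGP REI values. -->
<a href="arcgis-survey123://?itemID=0123456789abcdef0123456789abcdef&field:segment_id={SegmentID}">
  Update Road Segment Status
</a>
```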
How do I save my name and organization information for use in subsequent surveys? Watch this short video or follow the steps below:
1. Open Survey123 and open a blank REI form (Collect button). Note: it's important to open a blank form so you don't save the same segment ID for all your surveys.
2. Fill in your 'Name' and 'Organization' and clear the 'Date of Assessment' field (x button).
3. Using the favorites menu in the top-right corner, use the current state of your survey to 'Set as favorite answers.'
4. Close the survey and 'Save this survey in Drafts.'
5. Use Collector to launch the survey from a selected feature (segment or outlet).
6. Using the favorites menu again, 'Paste answers from favorite.'
What if the map doesn't have the outlet or road segment I need to inventory for the MRGP? Go directly to Survey123, complete the appropriate Road Erosion Inventory, and submit the data to DEC. The survey includes a geopoint (location) that we can use to determine where you completed the inventory.
Where can I view the Road Erosion Inventories completed with Survey123? Use the web map below to view second-cycle inventories completed with Survey123. The first-cycle inventories, collected 2018-2022, can be downloaded below.

Web map - Completed Road Erosion Inventories for MRGP

Where can I download the 2020-2022 data collected with Survey123?
Road Segments (lines) - https://anrmaps.vermont.gov/websites/MRGP/MRGP2020_segments.zip
Outlets (points) - https://anrmaps.vermont.gov/websites/MRGP/MRGP2020_outlets.zip

Where can I download the 2019 data collected with Survey123?
Road Segments (lines) -
AT_2004_HOWA File Geodatabase Feature Class. Thumbnail Not Available. Tags: Socio-economic resources, Information, Social Institutions, Hierarchy, Territory, BES, Parcel, Property, Property View, A&T, Database, Assessors, Taxation. Summary: Serves as a basis for performing various analyses based on parcel data. Description: Assessments & Taxation (A&T) Database from MD Property View 2004 for Howard County. The A&T Database contains parcel data from the State Department of Assessments and Taxation; it incorporates parcel ownership and address information, parcel valuation information, and basic information about the land and structure(s) associated with a given parcel. These data form the basis for the 2004 Database, which also includes selected Computer Assisted Mass Appraisal (CAMA) characteristics, text descriptions that make parcel code field data more readily accessible, and logical True/False fields that identify parcels with certain characteristics. Documentation for A&T, including a thorough definition of all attributes, is enclosed. Complete Property View documentation can be found at http://www.mdp.state.md.us/data/index.htm under the "Technical Background" tab. It should be noted that the A&T Database consists of points and not parcel boundaries. For those areas where parcel polygon data exist, the A&T Database can be joined using the ACCTID or a concatenation of the BLOCK and LOT fields, whichever is appropriate. (Spaces may have to be excluded when concatenating the BLOCK and LOT fields.) A cursory review of the 2004 version of the A&T Database indicates that it has more accurate data than the 2003 version, particularly with respect to dwelling types. However, for a given record it is not uncommon for numerous fields to be missing attributes. Based on previous versions of the A&T Database, it is also not unlikely that some of the information is inaccurate. This layer was edited to remove points that did not have a valid location because they failed to geocode. There were 1160 such points. A listing of the deleted points is in the table with the suffix "DeletedRecords." Credits: Maryland Department of Planning. Use limitations: BES use only. Extent: West -77.186932, East -76.699458, North 39.373967, South 39.099693. Scale Range: There is no scale range for this item.
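Where parcel polygons exist, the join described above can be scripted. A minimal sketch follows; the input file names are hypothetical and BLOCK/LOT are assumed to be stored as text, while ACCTID, BLOCK, and LOT are the fields named in the description.

```python
import geopandas as gpd

# Hypothetical inputs: the A&T point layer and a county parcel polygon layer.
at_points = gpd.read_file("AT_2004_HOWA.gdb", layer="AT_2004_HOWA")
parcels = gpd.read_file("howard_parcels.shp")

# Option 1: join on ACCTID where both sides carry it.
by_acctid = parcels.merge(at_points.drop(columns="geometry"),
                          on="ACCTID", how="left")

# Option 2: join on a BLOCK+LOT concatenation, stripping spaces as noted above.
for df in (at_points, parcels):
    df["BLOCKLOT"] = (df["BLOCK"].str.replace(" ", "", regex=False)
                      + df["LOT"].str.replace(" ", "", regex=False))
by_blocklot = parcels.merge(at_points.drop(columns="geometry"),
                            on="BLOCKLOT", how="left")
```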
Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We've been asked to create measures of communities that are "walkable" for several projects. While there is no standard definition of what makes a community "walkable", and the definition of "walkability" can differ from person to person, we thought an indicator that explores the total length of available sidewalks relative to the total length of streets in a community could be a good place to start. In this blog post, we describe how we used open data from SPC and Allegheny County to create a new measure of how "walkable" a community is. We wanted to create a ratio of the length of a community's sidewalks to the length of a community's streets as a measure of pedestrian infrastructure. A ratio of 1 would mean that a community has an equal number of linear feet of sidewalks and streets. A ratio of 2 would mean that a community has two linear feet of sidewalk for every linear foot of street; in other words, every street has a sidewalk on each side of it. In creating the sidewalk-to-street ratio, we had to do a little bit of data cleanup. Much of this was by trial and error, ground-truthing the data based on our personal experiences walking in different neighborhoods. Since street data was not shared as open data by many counties in our region, either on PASDA or through the SPC open data portal, we limited our analysis of "walkability" to Allegheny County.
In looking at the sidewalk data table and map, we noticed that trails were included. While nice to have in the data, we wanted to exclude trails from the ratio. We did this to avoid a situation where a community that had few sidewalks but was in the same blockgroup as a park with trails would get "credit" for being more "walkable" than it actually is according to our definition. We did this by removing all segments where "Trail" was in the "Type_Name" field.
We also used a similar tabular selection method to remove crosswalks from the sidewalk data ("Type_Name" = "Crosswalk"). We kept the steps in the dataset along with the sidewalks.
In the street data obtained from Allegheny County’s GIS department, we felt like we should try to exclude limited-access highway segments from the analysis, since pedestrians are prohibited from using them, and their presence would have reduced the sidewalk/street ratio in communities where they are located. We did this by excluding street segments whose values in the “FCC” field (designating type of street) equaled “A11” or “A63.” We also removed trails from this dataset by excluding those classified as “H10.” Since documentation was sparse, we looked to see how these features were classified in the data to determine which codes to exclude.
After running the data initially, we realized that excluding alleyways from the calculations could also improve the accuracy of our results. Some of the communities with substantial pedestrian infrastructure have alleyways, and including them would make those communities appear less "walkable" in our indicator. We removed these from the dataset by removing records with a value of "Aly" or "Way" in the "St_Type" field. We also excluded streets where the word "Alley" appeared in the street name, or "St_Name" field.
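To make the cleanup steps concrete, here is a minimal geopandas sketch of the whole pipeline. The file names, the "community" grouping field, and the projected CRS are assumptions; the Type_Name, FCC, St_Type, and St_Name values are the ones described above.

```python
import geopandas as gpd

# Hypothetical inputs, reprojected to a foot-based CRS so lengths are in feet.
sidewalks = gpd.read_file("allegheny_sidewalks.shp").to_crs(epsg=2272)
streets = gpd.read_file("allegheny_streets.shp").to_crs(epsg=2272)

# Sidewalk cleanup: drop trails and crosswalks; steps stay in.
sidewalks = sidewalks[~sidewalks["Type_Name"].isin(["Trail", "Crosswalk"])]

# Street cleanup: drop limited-access highways (A11, A63), trails (H10),
# and alleyways by street type or name.
streets = streets[~streets["FCC"].isin(["A11", "A63", "H10"])]
streets = streets[~streets["St_Type"].isin(["Aly", "Way"])]
streets = streets[~streets["St_Name"].str.contains("Alley", case=False, na=False)]

# Sum linear feet per community and take the ratio.
sw_len = sidewalks.assign(len_ft=sidewalks.length).groupby("community")["len_ft"].sum()
st_len = streets.assign(len_ft=streets.length).groupby("community")["len_ft"].sum()
ratio = (sw_len / st_len).rename("sidewalk_street_ratio")
print(ratio.sort_values(ascending=False).head())
```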
The full methodology used for this dataset is captured in our blog post, and we have included the sidewalk and street data used to create the ratio here as well.
The Layer Showcase allows you to display a gallery of layers within a group. This app is handy for allowing users to explore layer-based content, view it on a map or globe, and optionally create a new map based on the layers that have been added to the view. The vast majority of your work should go into styling and documenting the layer items; you can then share the layers with a group to deliver content via the Layer Showcase. This is a simple and interactive way to allow users to browse a group of content and then transition from layers to maps. The side panel of the Layer Showcase opens with the group description, including graphics, text, and links. When you click a layer info button, or add a layer, the side panel populates with the description from the layer item. The Layer Showcase includes a Table of Contents, visible minimized in the right corner of the map, which can be used to view the layer legend, change layer order, or remove all or selected layers. The side panel and the ribbon are retractable using the panel handles.

Use Cases
- Simple kiosk application to allow users to explore geographic content together or individually.
- Provide basic self-service mapping capabilities to your audience by making it very easy to discover and add high-quality layers to a map.
- Show off your work to the public and let them access your most relevant layers in the context of a simple viewer.

Configurable Options
- Provide a title and description for the application. If the group used in the app has a description, this will be used by default.
- Choose whether to include or hide the 2D/3D toggle button and the Create Map button.
- Hide the side panel and/or the layer carousel when the app is opened.
- Set a default basemap and choose whether to include a basemap gallery to let users of the app explore different options.
- Search capabilities to quickly navigate within the map.
- Choose a color theme or leverage the Shared Theme settings defined by your organization.
- Browse layers in both 2D and 3D on a globe.

Supported Devices
This application is responsively designed to support use in browsers on desktops, mobile phones, and tablets.

Data Requirements
Layer Showcase requires an ArcGIS group that contains layer item types.

Get Started
This application can be created in the following ways:
- Click the Create a Web App button on this page.
- Share a group and choose to Create a Web App.
- On the Content page, click Create - App - From Template.
Click the Download button to access the source code. Do this if you want to host the app on your own server and optionally customize it to add features or change styling.
Feature class that compares the elevations of seawall crests (extracted from available LiDAR datasets from 2010 and 2013) with published FEMA Base Flood Elevations (BFEs) from preliminary FEMA DFIRMs (panels issued in 2018 and 2019) in coastal York and Cumberland counties (up through Willard Beach in South Portland). The dataset included the development of an inventory of coastal armor structures from a range of different datasets. Steps to create the dataset included:

1. Shoreline structures from the most recent NOAA EVI LANDWARD_SHORETYPE feature class were extracted using the boundaries of York and Cumberland counties. This included 1B: Exposed, Solid Man-Made Structures; 8B: Sheltered, Solid Man-Made Structures; 6B: Riprap; and 8C: Sheltered Riprap. This resulted in the creation of Cumberland_ESIL_Structures and York_ESIL_Structures. Note that ESIL uses the MHW line as the feature base.
2. Shoreline structures from the work by Rice (2015) were extracted using the York and Cumberland county boundaries. This resulted in the creation of Cumberland_Rice_Structures and York_Rice_Structures.
3. Additional feature classes were created for York and Cumberland county structures that were missed: Slovinsky_York_Structures and Slovinsky_Cumberland_Structures. Google Earth imagery was inspected while additional structures were being added to the GIS. 2012 York and Cumberland County imagery was used as the basemap, and structures were classified as bulkheads, riprap, or dunes (if known). Whether or not the structure was in contact with the 2015 HAT was also noted.
4. MEDEP was consulted to determine which permit data (both PBR and Individual Permit, IP, data) could be used to help determine where shoreline stabilization projects may have been conducted adjacent to or on coastal bluffs. A file was received for IP data and brought into GIS (DEP_Licensing_Points). This is a point file for shoreline stabilization permits under NRPA.
5. Clip GISVIEW.MEDEP.Permit_By_Rule_Locations to the boundaries of the study area and output DEP_PBR_Points.
6. Join GISVIEW.sde > GISVIEW.MEDEP.PBR_ACTIVITY to DEP_PBR_Points using the PBR_ID field. Then export this file as DEP_PBR_Points2. Using the new ACTIVITY_DESC field, select only those activities that relate to shoreline stabilization projects:
   - PBR_ACTIVITY 02: Act. Adjacent to a Protected Natural Resource
   - PBR_ACTIVITY 04: Maint Repair & Replacement of Structure
   - PBR_ACTIVITY 08: Shoreline Stabilization
   Select by Attributes > PBR_ACTIVITY IN ('02', '04', '08') to select only those activities likely to be related to shoreline stabilization, and export the selected data as DEP_PBR_Points3. Then delete 1 and 2, and rename this final product DEP_PBR_Points.
7. Next, visually inspect the Licensing and PBR files using ArcMap 2012 and 2013 imagery, along with Google Earth imagery, to determine the extents of armoring along the shoreline.
8. Using EVI and Rice data as indicators, manually inspect and digitize sections of the coastline that are armored. Classify the seaward shoreline type (beach, mudflat, channel, dune, etc.) and the armor type (wall or bulkhead). Bring in the HAT line and, using that and visual indicators, identify whether or not the armored sections are in contact with HAT. Use Google Earth at the same time as digitizing in order to help constrain areas.
9. Merge the digitized armoring into Cumberland_York_Merged.
10. Bring in the preliminary FEMA DFIRM data and use "intersect" to assign the different flood zones and elevations to the digitized armored sections. This was done first for Cumberland, then for York County. Delete ancillary attributes as needed. The resulting layers are Cumberland_Structure_FloodZones and York_Structure_FloodZones.
11. Go to NOAA Digital Coast Data Layers and download the newest LiDAR data for York and Cumberland county beach, dune, and just-inland areas. This includes 2006 and newer topobathy data available from 2010 (entire coast), and selected areas from 2013 and 2014 (Wells, Scarborough, Kennebunk).
12. Mosaic the 2006, 2010, 2013, and 2014 data (with the 2013 and 2014 data laying on top of the 2010 data). Mosaic this dataset into the sacobaydem_ftNAVD raster (from the MEGIS bare-earth model). This covers almost all of the study area except for armor along several areas in York, resulting in LidAR206_2010_2013_Mosaic.tif.
13. Using the LiDAR data as a proxy, create a "seaward crest" line feature class that follows along the coast and extracts the approximate highest point (cliff, bank, dune) along the shoreline. This will be used to extract LiDAR data and compare with preliminary flood zone information. The line is called Dune_Crest.
14. Using an added tool, Points Along Line, create points at 5 m spacing along each of the armored shoreline feature lines and the dune crest lines. Call the outputs PointsonLines and PointsonDunes.
15. Using Spatial Analyst, extract LiDAR elevations to the points using the 2006_2010_2013 mosaic first. Call this LidarPointsonLines1. Select those points which have NULL values and export them as LiDARPointsonLines2. Then rerun Extract Values to Points using just the selected data and the state MEGIS DEM. Convert RASTERVALU to feet by multiplying by 3.2808 (and rename it Elev_ft). Select by Attributes, find all NULL values, and, in an edit session, delete them from LiDARPointsonLines. Then merge the two datasets and call the result LidarPointsonLines. Do the same as above with the dune lines and create LidarPointsonDunes.
16. Next, use the Cumberland and York flood zone layers to intersect the points with the appropriate flood zone data. Create ...CumbFIRM and ...YorkFIRM files for the dunes and lines.
17. Select those points from the Dunes feature class that are within the X zone; these will NOT have an associated BFE for comparison with the LiDAR data. Export the dune points as Cumberland_York_Dunes_XZone. Run NEAR using the merged flood zone feature class (with only V, AE, and AO zones selected). Then join the flood zone data to the feature class using FID (from the feature class) and OBJECTID (from the flood zone feature class). Export as Cumberland_York_Dunes_XZone_Flood. Delete ancillary columns of data, leaving the original FLD_ZONE (X), Elev_ft, NEAR_DIST (distance, in m, to the nearest flood zone), FLD_ZONE_1 (the near flood zone), and STATIC_BFE_1 (the nearest static BFE).
18. Do the same as above with the Structures file (Cumberland_York_Structures_Lidar_DFIRM_Merged), but also select those features that are within the X zone and OPEN WATER. Export the points as Cumberland_York_Structures_XZone. Again, run NEAR using the merged flood zone with only AE, VE, and AO zones selected. Export the file as Cumberland_York_Structures_XZone_Flood.
19. Merge the above feature classes with the original feature classes. Add a field BFE_ELEV_COMPARE.
20. Select all those features whose attributes have a VE or AE flood zone and use the field calculator to calculate the difference between Elev_ft and the BFE (subtracting STATIC_BFE from Elev_ft). Positive values mean the maximum wall elevation is higher than the BFE, while negative values mean the maximum is below the BFE. Then select the remaining values with Switch Selection and calculate the same value using NEAR_STATIC_BFE instead. Select by Attributes > FLD_ZONE = AO, and enter the DEPTH value into the field created above as a negative value. Delete ancillary attribute fields, leaving those listed in the _FINAL feature classes described above the process steps section.
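As a cross-check of the field calculations in the last step, here is a minimal pandas sketch of the BFE_ELEV_COMPARE logic, assuming the attribute table has been exported to CSV; the file name is hypothetical, and the field names are the ones listed above.

```python
import pandas as pd

pts = pd.read_csv("lidar_points_on_lines.csv")  # hypothetical export

def bfe_compare(row):
    if row["FLD_ZONE"] in ("VE", "AE") and pd.notna(row["STATIC_BFE"]):
        # Points inside a VE/AE zone: compare against the zone's static BFE.
        return row["Elev_ft"] - row["STATIC_BFE"]
    if row["FLD_ZONE"] == "AO":
        # AO zones carry a flow depth instead of a BFE; entered as negative.
        return -row["DEPTH"]
    # Remaining points (e.g. X zone): use the nearest static BFE from NEAR.
    return row["Elev_ft"] - row["NEAR_STATIC_BFE"]

pts["BFE_ELEV_COMPARE"] = pts.apply(bfe_compare, axis=1)
# Positive = crest/wall above the BFE; negative = below.
```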
This dataset is a modified version of the FWS-developed data depicting "Highly Important Landscapes", as outlined in Memorandum FWS/AES/058711 and provided to the Wildlife Habitat Spatial Analysis Lab on October 29, 2014. Other names and acronyms used to refer to this dataset have included: Areas of Significance (AoSs, the name of the GIS data set provided by FWS), Strongholds (FWS), and Sagebrush Focal Areas (SFAs, BLM). The BLM will refer to these data as Sagebrush Focal Areas (SFAs). Data were provided as a series of ArcGIS map packages which, when extracted, contained several datasets each. Based on the recommendation of the FWS Geographer/Ecologist (email communication, see data originator for contact information), the dataset called "Outiline_AreasofSignificance" was utilized as the source for subsequent analysis and refinement. Metadata was not provided by the FWS for this dataset. For detailed information regarding the dataset's creation, refer to Memorandum FWS/AES/058711 or contact the FWS directly. Several operations and modifications were made to this source data, as outlined in the "Description" and "Process Step" sections of this metadata file. Generally: the source data was named by the Wildlife Habitat Spatial Analysis Lab to identify polygons as described (but not identified in the GIS) in the FWS memorandum. The Nevada/California EIS modified portions within their decision space in concert with local FWS personnel and provided the modified data back to the Wildlife Habitat Spatial Analysis Lab. Gaps around Nevada State borders, introduced by the NVCA edits, were then closed, as was a large gap between the southern Idaho and southeast Oregon SFAs present in the original dataset. Features with an area below 40 acres were then identified and, based on FWS guidance, either removed or retained. Finally, guidance from BLM WO resulted in the removal of additional areas, primarily non-habitat with BLM surface or subsurface management authority. Data were then provided to each EIS for use in FEIS development. Based on guidance from WO, SFAs were to be limited to BLM decision space (surface/sub-surface management areas) within PHMA. Each EIS was asked to provide the limited SFA dataset back to the National Operations Center to ensure consistent representation and analysis. Returned SFA data, modified by each individual EIS, was then consolidated at the BLM's National Operations Center, retaining the three standardized fields contained in this dataset.

Several modifications from the original FWS dataset have been made. Below is a summary of each modification.

1. The data as received from FWS: 16,514,163 acres & 1 record.
2. Edited to name SFAs by the Wildlife Habitat Spatial Analysis Lab: upon receipt of the "Outiline_AreasofSignificance" dataset from the FWS, a copy was made and the one existing, unnamed record was exploded in an edit session within ArcMap. A text field, "AoS_Name", was added. Using the maps provided with Memorandum FWS/AES/058711, polygons were manually selected and the "AoS_Name" field was calculated to match the names as illustrated. Once all polygons in the exploded dataset were appropriately named, the dataset was dissolved, resulting in one record representing each of the seven SFAs identified in the memorandum.
3. The NVCA EIS made modifications in concert with local FWS staff. Metadata and detailed change descriptions were not returned with the modified data. Contact Leisa Wesch, GIS Specialist, BLM Nevada State Office, 775-861-6421, lwesch@blm.gov, for details.
4. Once the data was returned to the Wildlife Habitat Spatial Analysis Lab from the NVCA EIS, gaps surrounding the State of NV were closed. These gaps were introduced by the NVCA edits, exacerbated by them, or existed in the data as provided by the FWS. The gap closing was performed in an edit session by either extending each polygon towards the other or by creating a new polygon that covered the gap and merging it with the existing features. In addition to the gaps around state boundaries, a large area between the S. Idaho and S.E. Oregon SFAs was filled in. To accomplish this, ADPP habitat (current as of January 2015) and BLM GSSP SMA data were used to create a new polygon representing PHMA and BLM management that connected the two existing SFAs.
5. In an effort to simplify the FWS dataset, features whose areas were less than 40 acres were identified, and FWS was consulted for guidance on possible removal. To do so, features from #4 above were exploded once again in an ArcMap edit session. Features whose areas were less than forty acres were selected and exported (770 total features). This dataset was provided to the FWS and then returned with specific guidance on inclusion/exclusion via email by Lara Juliusson (lara_juliusson@fws.gov). The specific guidance was:
   a. Remove all features whose area is less than 10 acres.
   b. Remove features identified as slivers (the thinness ratio was calculated and slivers identified by Lara Juliusson according to https://tereshenkov.wordpress.com/2014/04/08/fighting-sliver-polygons-in-arcgis-thinness-ratio/) and whose area was less than 20 acres. (A sketch of the thinness ratio appears after this list.)
   c. Remove features with areas less than 20 acres NOT identified as slivers and NOT adjacent to other features.
   d. Keep the remainder of features identified as less than 40 acres.
   To accomplish "a" and "b" above, a simple selection was applied to the dataset representing features less than 40 acres. The Select By Location tool was used, set to select identical, to select these features from the dataset created in step 4 above. The record count was confirmed as matching between the two data sets, and then these features were deleted. To accomplish "c" above, a field ("AdjacentSH", added by FWS but not calculated) was calculated to identify features touching or intersecting other features. A series of selections was used: first to select records
6. Based on direction from the BLM Washington Office, the portion of the Upper Missouri River Breaks National Monument (UMRBNM) that was included in the FWS SFA dataset was removed. The BLM NOC GSSP NLCS dataset was used to erase these areas from #5 above. Resulting sliver polygons were also removed and geometry was repaired.
7. In addition to removing UMRBNM, the BLM Washington Office also directed the removal of non-ADPP habitat within the SFAs, on BLM-managed lands, falling outside of Designated Wilderness and Wilderness Study Areas. An exception was the retention of the Donkey Hills ACEC and adjacent BLM lands. The BLM NOC GSSP NLCS datasets were used in conjunction with a dataset containing all ADPP habitat, BLM SMA, and BLM sub-surface management unioned into one file to identify and delete these areas.
8. The resulting dataset, after the steps above were completed, was dissolved to the SFA name field, yielding this feature class with one record per SFA area.
9. Data were provided to each EIS for use in FEIS allocation decision data development.
10. Data were subset to BLM decision space (surface/sub-surface) within PHMA by each EIS and returned to the NOC.
11. Due to variations in field names and values, three standardized fields were created and calculated by the NOC:
    a. SFA Name – the name of the SFA.
    b. Subsurface – binary "Yes" or "No" to indicate federal subsurface estate.
    c. SMA – represents BLM, USFS, other federal, and non-federal surface management.
12. The consolidated data (with standardized field names and values) were dissolved on the three fields illustrated above and geometry was repaired, resulting in this dataset.
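For reference, the thinness ratio used in step 5b compares a polygon's area to that of a circle with the same perimeter; the formulation below follows the linked post to our understanding, so treat it as an illustrative sketch rather than the exact screening script.

```python
import math

def thinness_ratio(area: float, perimeter: float) -> float:
    """4*pi*A / P^2: 1.0 for a circle, approaching 0 for sliver polygons."""
    return 4 * math.pi * area / perimeter ** 2

# A 400 m x 1 m sliver and a 20 m x 20 m square have equal areas,
# but very different thinness ratios.
print(thinness_ratio(400.0, 802.0))  # ~0.008 -> sliver
print(thinness_ratio(400.0, 80.0))   # ~0.785 -> compact
```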
Web App. Use the tabs provided to discover information about map features and capabilities. Link to Metadata.

A variety of searches can be performed to find the parcel of interest. Use the Query Tool to build searches, then click the Apply button at the bottom of the tool.
- Query by Name (Last First) (e.g. Bond James)
- Query by Address (e.g. 41 S Central)
- Query by Locator number (e.g. 21J411046)
Search results will be listed under the Results tab. Click on a parcel in the list to zoom to that parcel. Click on the parcel in the map and scroll through the pop-up to see more information about the parcel. Click the ellipsis in the Results tab or in the pop-up to view information in a table. Attribute information can be exported to a CSV file.

Build a custom Filter to select and map properties by opening the Parcels attribute table:
1. Click the arrow tab at the bottom middle of the map to expand the attribute table window
2. Click on the Parcels tab
3. Check off Filter by map extent
4. Open Options > Filter
5. Build expressions as needed to filter by owner name or other variables
6. Select the needed records from the returned list
7. Click Zoom to, which will zoom to the selected records
Please note that as the map zooms out, detailed layers such as the parcel boundaries will not display.

In addition to search capabilities, the following tools are provided:
- Measure: provides the capability to draw a point, line, or polygon on the map and specify the unit of measurement.
- Draw: provides the capability to draw a point, line, or polygon on the map as graphics.
- Print: exports the map to either a PDF or image file. Click the Settings button to configure the map or remove the legend.

Map navigation using mouse and keyboard:
- Drag to pan
- SHIFT + CTRL + Drag to zoom out
- Mouse Scroll Forward to zoom in
- Mouse Scroll Backward to zoom out
- Use Arrow keys to pan
- + key to zoom in a level
- - key to zoom out a level
- Double Click to zoom in

FAQs
How do I select a parcel? Click on a parcel in the map, or use the Query Tool to search for a parcel by owner, address, or parcel ID.
How do I select more than one parcel? Go to the Select Tool and choose options on the Select button.
How do I clear selected parcel(s)? Go to the Select Tool and click Clear.
AT_2004_BACO File Geodatabase Feature Class. Thumbnail Not Available. Tags: Socio-economic resources, Information, Social Institutions, Hierarchy, Territory, BES, Parcel, Property, Property View, A&T, Database, Assessors, Taxation. Summary: Serves as a basis for performing various analyses based on parcel data. Description: Assessments & Taxation (A&T) Database from MD Property View 2004 for Baltimore County. The A&T Database contains parcel data from the State Department of Assessments and Taxation; it incorporates parcel ownership and address information, parcel valuation information, and basic information about the land and structure(s) associated with a given parcel. These data form the basis for the 2004 Database, which also includes selected Computer Assisted Mass Appraisal (CAMA) characteristics, text descriptions that make parcel code field data more readily accessible, and logical True/False fields that identify parcels with certain characteristics. Documentation for A&T, including a thorough definition of all attributes, is enclosed. Complete Property View documentation can be found at http://www.mdp.state.md.us/data/index.htm under the "Technical Background" tab. It should be noted that the A&T Database consists of points and not parcel boundaries. For those areas where parcel polygon data exist, the A&T Database can be joined using the ACCTID or a concatenation of the BLOCK and LOT fields, whichever is appropriate. (Spaces may have to be excluded when concatenating the BLOCK and LOT fields.) A cursory review of the 2004 version of the A&T Database indicates that it has more accurate data than the 2003 version, particularly with respect to dwelling types. However, for a given record it is not uncommon for numerous fields to be missing attributes. Based on previous versions of the A&T Database, it is also not unlikely that some of the information is inaccurate. This layer was edited to remove points that did not have a valid location because they failed to geocode. There were 5870 such points. A listing of the deleted points is in the table with the suffix "DeletedRecords." Credits: Maryland Department of Planning. Use limitations: BES use only. Extent: West -76.897802, East -76.335214, North 39.726520, South 39.192552. Scale Range: There is no scale range for this item.
AT_2004_ANNE File Geodatabase Feature Class. Thumbnail Not Available. Tags: Socio-economic resources, Information, Social Institutions, Hierarchy, Territory, BES, Parcel, Property, Property View, A&T, Database, Assessors, Taxation. Summary: Serves as a basis for performing various analyses based on parcel data. Description: Assessments & Taxation (A&T) Database from MD Property View 2004 for Anne Arundel County. The A&T Database contains parcel data from the State Department of Assessments and Taxation; it incorporates parcel ownership and address information, parcel valuation information, and basic information about the land and structure(s) associated with a given parcel. These data form the basis for the 2004 Database, which also includes selected Computer Assisted Mass Appraisal (CAMA) characteristics, text descriptions that make parcel code field data more readily accessible, and logical True/False fields that identify parcels with certain characteristics. Documentation for A&T, including a thorough definition of all attributes, is enclosed. Complete Property View documentation can be found at http://www.mdp.state.md.us/data/index.htm under the "Technical Background" tab. It should be noted that the A&T Database consists of points and not parcel boundaries. For those areas where parcel polygon data exist, the A&T Database can be joined using the ACCTID or a concatenation of the BLOCK and LOT fields, whichever is appropriate. (Spaces may have to be excluded when concatenating the BLOCK and LOT fields.) A cursory review of the 2004 version of the A&T Database indicates that it has more accurate data than the 2003 version, particularly with respect to dwelling types. However, for a given record it is not uncommon for numerous fields to be missing attributes. Based on previous versions of the A&T Database, it is also not unlikely that some of the information is inaccurate. This layer was edited to remove points that did not have a valid location because they failed to geocode. There were 897 such points. A listing of the deleted points is in the table with the suffix "DeletedRecords." Credits: Maryland Department of Planning. Use limitations: BES use only. Extent: West -76.838738, East -76.395283, North 39.238726, South 38.708588. Scale Range: There is no scale range for this item.
Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Riparian corridors are important areas that maintain connectivity throughout the state of California. The riparian corridors complement the northern Sierra Nevada foothills wildlife connectivity project linkages to further achieve connectivity in the study area. We identified 280 riparian corridors represented by 232 named creeks, 43 named rivers, and 5 sloughs, forks, or runs. The major corridors are the Sacramento, San Joaquin, Pit, Tuolumne, Merced, Feather, and Stanislaus rivers. The 280 riparian corridors connect 201 landscape blocks. The riparian corridors complement the focal species linkages by providing many east-west corridors, while the majority of linkages have a north-south orientation. Also, by following the entire passage of the riparian area, these corridors run through many of the landscape blocks across the study area, helping to provide connectivity outside of habitat patch areas.

We identified riparian corridors by selecting streams, rivers, and creeks from the NHD (National Hydrography Dataset) for the state of California. From the NHD dataset, features named 'StreamRiver' were extracted from the 'NHDFlowline' vector dataset. The code 46006 was then used to extract perennial rivers and streams from the 'StreamRiver' dataset. However, this step resulted in a stream and river layer with many small segments. In order to reduce the number of segments and identify complete stream/river lines, we intersected the perennial rivers and streams layer with the CDFW statewide streams layer ('CA_Streams_Statewide') using the 'Select by Location' tool in ArcMap (with the 'CA_Streams_Statewide' layer as the target layer and the streams and rivers layer we extracted from NHD as the source layer). Second, we extracted features named 'ArtificialPath' from the 'NHDFlowline' vector dataset. Artificial paths represent the flow of water into, through, and out of features delineated using area; for example, rivers wide enough to be delineated as polygons are represented by an artificial path flowline at their centerline. Therefore, large rivers are often coded as "artificial path" in the NHD dataset. We then selected only those artificial paths with Geographic Names Information System (GNIS) names, with the assumption that artificial path features without names are "very minor streams, only of use to hydrologist" (http://nhd.usgs.gov). Next, we used the same method we implemented for streams and rivers in order to remove small segments and obtain complete lines. Unlike the stream and river features, the artificial path dataset is not coded to discriminate between perennial and intermittent flow. As a result, artificial paths that intersected with perennial streams and rivers were selected to represent permanent waterways. Then the perennial stream and river layer and the artificial paths layer were merged into one dataset. After the merge, we added a 500 m buffer to each side of the riparian area.

We compared this merged stream/river layer with riparian vegetation classification data as a cross-check. The riparian vegetation classification data are from the 2011 Northern Sierra Nevada Foothills and 2013 Eastern Central Valley fine-scale vegetation maps developed by the Vegetation Classification and Mapping Program (VegCAMP) at the California Department of Fish and Wildlife.
For areas outside the foothills and Eastern Central Valley, we used land cover data compiled by the California Department of Forestry and Fire Protection (CDF) Fire and Resource Assessment Program (FRAP) in 2006, representing data for the period between 1997 and 2002. The resulting perennial dataset was then merged with the wetland and riparian datasets to represent perennial water sources in California. For more information see the project report at [https://nrm.dfg.ca.gov/FileHandler.ashx?DocumentID=85358].
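A minimal sketch of the core NHD selection and buffering logic is below. The layer path is hypothetical, and NHD field names and encodings vary by distribution (FType may be stored as text like 'StreamRiver' or as a numeric code); the segment-consolidation steps against CA_Streams_Statewide and the perennial-intersection filter for artificial paths are omitted here.

```python
import pandas as pd
import geopandas as gpd

# Hypothetical layer path; adjust field names to the NHD distribution in hand.
flowlines = gpd.read_file("NHD_California.gdb", layer="NHDFlowline")

# Perennial streams/rivers: the StreamRiver feature type with code 46006.
perennial = flowlines[(flowlines["FType"] == "StreamRiver")
                      & (flowlines["FCode"] == 46006)]

# Artificial paths (centerlines of large rivers) that carry a GNIS name.
art_paths = flowlines[(flowlines["FType"] == "ArtificialPath")
                      & flowlines["GNIS_Name"].notna()]

# Merge, then buffer 500 m to each side in a meter-based equal-area CRS.
corridors = gpd.GeoDataFrame(
    pd.concat([perennial, art_paths], ignore_index=True), crs=flowlines.crs)
corridors = corridors.to_crs(epsg=3310)  # California Albers
corridors["geometry"] = corridors.buffer(500)
```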
AT_2004_CARR File Geodatabase Feature Class. Thumbnail Not Available. Tags: Socio-economic resources, Information, Social Institutions, Hierarchy, Territory, BES, Parcel, Property, Property View, A&T, Database, Assessors, Taxation. Summary: Serves as a basis for performing various analyses based on parcel data. Description: Assessments & Taxation (A&T) Database from MD Property View 2004 for Carroll County. The A&T Database contains parcel data from the State Department of Assessments and Taxation; it incorporates parcel ownership and address information, parcel valuation information, and basic information about the land and structure(s) associated with a given parcel. These data form the basis for the 2004 Database, which also includes selected Computer Assisted Mass Appraisal (CAMA) characteristics, text descriptions that make parcel code field data more readily accessible, and logical True/False fields that identify parcels with certain characteristics. Documentation for A&T, including a thorough definition of all attributes, is enclosed. Complete Property View documentation can be found at http://www.mdp.state.md.us/data/index.htm under the "Technical Background" tab. It should be noted that the A&T Database consists of points and not parcel boundaries. For those areas where parcel polygon data exist, the A&T Database can be joined using the ACCTID or a concatenation of the BLOCK and LOT fields, whichever is appropriate. (Spaces may have to be excluded when concatenating the BLOCK and LOT fields.) A cursory review of the 2004 version of the A&T Database indicates that it has more accurate data than the 2003 version, particularly with respect to dwelling types. However, for a given record it is not uncommon for numerous fields to be missing attributes. Based on previous versions of the A&T Database, it is also not unlikely that some of the information is inaccurate. This layer was edited to remove points that did not have a valid location because they failed to geocode. There were 848 such points. A listing of the deleted points is in the table with the suffix "DeletedRecords." Credits: Maryland Department of Planning. Use limitations: BES use only. Extent: West -77.306843, East -76.779275, North 39.727017, South 39.342858. Scale Range: There is no scale range for this item.
Location of plots of stands classified for the production of seeds and seedlings in Aquitaine. The parcels identified concern only maritime pine (PPA: Pinus pinaster Ait.) in the Gironde and Landes departments. These plots are identified by field officers and are classified after review by one of the two CTPS (Permanent Technical Committee on the Selection of Cultivated Plants) committees.
WMS and WFS addresses. Warnings:
- Please delete spaces that might appear when copying/pasting the address into the GIS software.
- There are problems displaying multi-polygons via WFS (under resolution); favour downloading the data if there are multi-polygons.
- WFS display of more than 500 objects is impossible at the moment.
WMS address for integration into a GIS from Geoide_Carto: http://data.geo-ide.application.developpement-durable.gouv.fr/WMS/228/PeuplClassesGrainesPlantsSHP_2015?
WFS address for integration into a GIS: http://ogc.geo-ide.developpement-durable.gouv.fr/cartes/mapserv?map=/opt/data/carto/geoide-catalogue/REG072A/JDD.www.map
Feature class that compares the elevations of sand dune crests (extracted from available LiDAR datasets from 2010 and 2013) with published FEMA Base Flood Elevations (BFEs) from preliminary FEMA DFIRMs (panels issued in 2018 and 2019) in coastal York and Cumberland counties (up through Willard Beach in South Portland). Steps to create the dataset included:

1. Shoreline structures from the most recent NOAA EVI LANDWARD_SHORETYPE feature class were extracted using the boundaries of York and Cumberland counties. This included 1B: Exposed, Solid Man-Made Structures; 8B: Sheltered, Solid Man-Made Structures; 6B: Riprap; and 8C: Sheltered Riprap. This resulted in the creation of Cumberland_ESIL_Structures and York_ESIL_Structures. Note that ESIL uses the MHW line as the feature base.
2. Shoreline structures from the work by Rice (2015) were extracted using the York and Cumberland county boundaries. This resulted in the creation of Cumberland_Rice_Structures and York_Rice_Structures.
3. Additional feature classes were created for York and Cumberland county structures that were missed: Slovinsky_York_Structures and Slovinsky_Cumberland_Structures. Google Earth imagery was inspected while additional structures were being added to the GIS. 2012 York and Cumberland County imagery was used as the basemap, and structures were classified as bulkheads, riprap, or dunes (if known). Whether or not the structure was in contact with the 2015 HAT was also noted.
4. MEDEP was consulted to determine which permit data (both PBR and Individual Permit, IP, data) could be used to help determine where shoreline stabilization projects may have been conducted adjacent to or on coastal bluffs. A file was received for IP data and brought into GIS (DEP_Licensing_Points). This is a point file for shoreline stabilization permits under NRPA.
5. Clip GISVIEW.MEDEP.Permit_By_Rule_Locations to the boundaries of the study area and output DEP_PBR_Points.
6. Join GISVIEW.sde > GISVIEW.MEDEP.PBR_ACTIVITY to DEP_PBR_Points using the PBR_ID field. Then export this file as DEP_PBR_Points2. Using the new ACTIVITY_DESC field, select only those activities that relate to shoreline stabilization projects:
   - PBR_ACTIVITY 02: Act. Adjacent to a Protected Natural Resource
   - PBR_ACTIVITY 04: Maint Repair & Replacement of Structure
   - PBR_ACTIVITY 08: Shoreline Stabilization
   Select by Attributes > PBR_ACTIVITY IN ('02', '04', '08') to select only those activities likely to be related to shoreline stabilization, and export the selected data as DEP_PBR_Points3. Then delete 1 and 2, and rename this final product DEP_PBR_Points.
7. Next, visually inspect the Licensing and PBR files using ArcMap 2012 and 2013 imagery, along with Google Earth imagery, to determine the extents of armoring along the shoreline.
8. Using EVI and Rice data as indicators, manually inspect and digitize sections of the coastline that are armored. Classify the seaward shoreline type (beach, mudflat, channel, dune, etc.) and the armor type (wall or bulkhead). Bring in the HAT line and, using that and visual indicators, identify whether or not the armored sections are in contact with HAT. Use Google Earth at the same time as digitizing in order to help constrain areas.
9. Merge the digitized armoring into Cumberland_York_Merged.
10. Bring in the preliminary FEMA DFIRM data and use "intersect" to assign the different flood zones and elevations to the digitized armored sections. This was done first for Cumberland, then for York County. Delete ancillary attributes as needed. The resulting layers are Cumberland_Structure_FloodZones and York_Structure_FloodZones.
11. Go to NOAA Digital Coast Data Layers and download the newest LiDAR data for York and Cumberland county beach, dune, and just-inland areas. This includes 2006 and newer topobathy data available from 2010 (entire coast), and selected areas from 2013 and 2014 (Wells, Scarborough, Kennebunk).
12. Mosaic the 2006, 2010, 2013, and 2014 data (with the 2013 and 2014 data laying on top of the 2010 data). Mosaic this dataset into the sacobaydem_ftNAVD raster (from the MEGIS bare-earth model). This covers almost all of the study area except for armor along several areas in York, resulting in LidAR206_2010_2013_Mosaic.tif.
13. Using the LiDAR data as a proxy, create a "seaward crest" line feature class that follows along the coast and extracts the approximate highest point (cliff, bank, dune) along the shoreline. This will be used to extract LiDAR data and compare with preliminary flood zone information. The line is called Dune_Crest.
14. Using an added tool, Points Along Line, create points at 5 m spacing along each of the armored shoreline feature lines and the dune crest lines. Call the outputs PointsonLines and PointsonDunes.
15. Using Spatial Analyst, extract LiDAR elevations to the points using the 2006_2010_2013 mosaic first. Call this LidarPointsonLines1. Select those points which have NULL values and export them as LiDARPointsonLines2. Then rerun Extract Values to Points using just the selected data and the state MEGIS DEM. Convert RASTERVALU to feet by multiplying by 3.2808 (and rename it Elev_ft). Select by Attributes, find all NULL values, and, in an edit session, delete them from LiDARPointsonLines. Then merge the two datasets and call the result LidarPointsonLines. Do the same as above with the dune lines and create LidarPointsonDunes.
16. Next, use the Cumberland and York flood zone layers to intersect the points with the appropriate flood zone data. Create ...CumbFIRM and ...YorkFIRM files for the dunes and lines.
17. Select those points from the Dunes feature class that are within the X zone; these will NOT have an associated BFE for comparison with the LiDAR data. Export the dune points as Cumberland_York_Dunes_XZone. Run NEAR using the merged flood zone feature class (with only V, AE, and AO zones selected). Then join the flood zone data to the feature class using FID (from the feature class) and OBJECTID (from the flood zone feature class). Export as Cumberland_York_Dunes_XZone_Flood. Delete ancillary columns of data, leaving the original FLD_ZONE (X), Elev_ft, NEAR_DIST (distance, in m, to the nearest flood zone), FLD_ZONE_1 (the near flood zone), and STATIC_BFE_1 (the nearest static BFE).
18. Do the same as above with the Structures file (Cumberland_York_Structures_Lidar_DFIRM_Merged), but also select those features that are within the X zone and OPEN WATER. Export the points as Cumberland_York_Structures_XZone. Again, run NEAR using the merged flood zone with only AE, VE, and AO zones selected. Export the file as Cumberland_York_Structures_XZone_Flood.
19. Merge the above feature classes with the original feature classes. Add a field BFE_ELEV_COMPARE.
20. Select all those features whose attributes have a VE or AE flood zone and use the field calculator to calculate the difference between Elev_ft and the BFE (subtracting STATIC_BFE from Elev_ft). Positive values mean the maximum crest elevation is higher than the BFE, while negative values mean the maximum is below the BFE. Then select the remaining values with Switch Selection and calculate the same value using NEAR_STATIC_BFE instead. Select by Attributes > FLD_ZONE = AO, and enter the DEPTH value into the field created above as a negative value. Delete ancillary attribute fields, leaving those listed in the _FINAL feature classes described above the process steps section.
Do not share this map publicly! This template is for ACTIVE INCIDENTS only. For training, please use the Training template (found here). This workflow uses one template web map and contains all layers of the National Incident Feature Service in a single service (unlike the standard template, which splits features into Edit, View, and Repair services). It is for teams looking for a simple approach to ArcGIS Online implementation. All features are visible; editing is enabled for points, lines, and polygons and disabled for the IR layers [Workflow LINK]. The map contains the National Incident Feature Service layers in the NWCG-approved Event schema.

This template web map is provided for quick deployment. Listed next are the steps to implement this Standard Workflow:
1) Open this web map template in Map Viewer
2) Do a Save As (click Save and select Save As)
3) Zoom to your fire area and add bookmarks
4) Look for a red triangle polygon with your fire's attributes and do either of these:
   a. Use this polygon as a start for your incident and modify as needed
   b. Copy the attributes (most importantly, the IRWIN ID) into a new polygon and delete the triangle (delete in ArcMap or Pro)
5) Create a display filter on features to only show features related to your incident (optional)
6) Create a new Photo Point layer (Content > Create > Feature Layer > From Existing > #TEMPLATE - PhotoPoint). Add this to your web map and remove the default PhotoPoint layer
7) Share with your Mobile Editing group
8) Add necessary incident personnel to the Mobile Editing group
9) Make available for Viewers:
   a. Save out a second version of this map and disable editing on all the layers except Photo Points
   b. Share this version with the Viewing group
10) To track and manage suppression repair needs, use the Suppression Repair Add-on
Purpose: The purpose of these data was to provide an inventory of trails for analysis of the supply of trail-related recreational resources to develop the 2022 Statewide Comprehensive Outdoor Recreation Plan (SCORP). These data are of varying quality and completeness. Organizations that maintain trails data were contacted between February and November of 2022 to gather these data from open data sites or internal data sources.

Methodology: Each trail dataset was gathered, inventoried, and reviewed, then processed using a geoprocessing model that ran through the following steps (a hedged sketch of the model follows the Notes below):
1) Data was transformed into the Alaska Albers projection.
2) Data was queried to remove trails outside of the State of Alaska.
3) Each layer was reviewed for duplication across datasets, and where possible duplicates were queried out of the organization that was the secondary source. Within the limitations of this project, not all duplicates could be found and removed; a more thorough review outside the scope of the SCORP project would be appropriate for improving these data.
4) Attribute fields were standardized to the SCORP data model. A SCORP legend field assigns each trail to one of the following types: Water, Winter, Terra, or RS 2477. The source data originally had 33 trail types. The source organization and a URL link were added to the data to give easy access to the original dataset. Regional and organization fields were also added to support developing reports and dashboards.
5) Data was appended to the SCORP trails layer for mapping and analysis.

Notes: Selected trail systems were buffered and overlaid with park information, then compared to cell phone mobility data to analyze use of individual trails. These trail systems included the Caribou Hills Snowmobile Trails, Petersville Area Snowmobile Trails, Chugach State Park, Denali State Park, and a few other areas.
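A minimal arcpy sketch of steps 1 through 5 of the model, under stated assumptions: paths, layer names, and the simplified SCORP fields are hypothetical, since the record does not give exact schema details (the duplicate review in step 3 was a manual, per-source query and is only noted here).

import arcpy

alaska_albers = arcpy.SpatialReference(3338)  # NAD83 / Alaska Albers

source_fc = r"C:\data\source_trails.shp"      # one contributed trails dataset
scorp_fc = r"C:\data\scorp.gdb\SCORP_Trails"  # statewide SCORP trails layer

# 1) Transform the source data into the Alaska Albers projection
projected = arcpy.management.Project(source_fc, r"in_memory\trails_aa", alaska_albers)

# 2) Query out trails that fall outside the State of Alaska
lyr = arcpy.management.MakeFeatureLayer(projected, "trails_lyr")
arcpy.management.SelectLayerByLocation(lyr, "INTERSECT", r"C:\data\AK_boundary.shp")

# 4) Standardize attribute fields to the SCORP data model
for name in ("SCORP_Type", "Source_Org", "Source_URL", "Region"):
    arcpy.management.AddField(lyr, name, "TEXT")
arcpy.management.CalculateField(lyr, "SCORP_Type", "'Terra'", "PYTHON3")  # example type

# 5) Append the standardized trails to the SCORP trails layer
arcpy.management.Append(lyr, scorp_fc, "NO_TEST")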
These data include the individual responses for the City of Tempe Annual Business Survey conducted by ETC Institute. These data help determine priorities for the community as part of the City's ongoing strategic planning process. Averaged Business Survey results are used as indicators for city performance measures. The performance measures with indicators from the Business Survey include the following (as of 2023):
1. Financial Stability and Vitality: 5.01 Quality of Business Services

Additional Information
Source: Business Survey
Contact (author): Adam Samuels
Contact E-Mail (author): Adam_Samuels@tempe.gov
Contact (maintainer):
Contact E-Mail (maintainer):
Data Source Type: Excel table
Preparation Method: Data received from vendor after report is completed
Publish Frequency: Annual
Publish Method: Manual
Data Dictionary

Methods: The survey is mailed to a random sample of businesses in the City of Tempe. Follow-up emails and texts are also sent to encourage participation. A link to the survey is provided with each communication. To prevent people who do not live in Tempe, or who were not selected as part of the random sample, from completing the survey, everyone who completed the survey was required to provide their address. These addresses were then matched against those used for the random representative sample; if a respondent's address did not match, the response was not used. To better understand how services are being delivered across the city, individual results were mapped to determine overall distribution across the city.

Processing and Limitations: The location data in this dataset are generalized to the block level to protect privacy, meaning that only the first two digits of an address are used to map the location. When the data are shared with the city, only the latitude/longitude of the block-level address points are provided, which results in points that overlap. To better visualize the data, overlapping points were randomly dispersed to remove the overlap (see the sketch below). Together, these two adjustments ensure that points are not tied to a specific address but are still close enough to allow insights about service delivery in different areas of the city. The data are used by the ETC Institute in the final published PDF report.
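A minimal sketch of one way such overlapping points could be dispersed, assuming a simple random offset; the record does not specify the dispersal method actually used, and the jitter radius below is an assumption.

import random

def disperse(points, radius=0.0005):
    """Nudge duplicate (lat, lon) pairs apart by a small random offset.

    The ~0.0005 degree radius is an assumption; the record does not
    state the actual offset used.
    """
    seen = set()
    out = []
    for lat, lon in points:
        while (lat, lon) in seen:
            lat += random.uniform(-radius, radius)
            lon += random.uniform(-radius, radius)
        seen.add((lat, lon))
        out.append((lat, lon))
    return out

# Example: three responses generalized to the same block-level point
print(disperse([(33.4255, -111.94), (33.4255, -111.94), (33.4255, -111.94)]))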
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset was derived by the Bioregional Assessment Programme from multiple source datasets. The source datasets are identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement.
This resource contains raster datasets created using ArcGIS to analyse groundwater levels in the Namoi subregion.
These data layers were created in ArcGIS as part of the analysis to investigate surface water - groundwater connectivity in the Namoi subregion. The data layers provide several of the figures presented in the Namoi 2.1.5 Surface water - groundwater interactions report.
Extracted points inside the Namoi subregion boundary. Converted bore and pipe values to Hydrocode format, changed the heading of the 'Value' column to 'Waterlevel', and removed unnecessary columns, then joined to Updated_NSW_GroundWaterLevel_data_analysis_v01\NGIS_NSW_Bore_Join_Hydmeas_unique_bores.shp, clipped to include only those bores within the Namoi subregion.
Selected only those bores with sample dates >= 26/4/2012 and < 31/7/2012. Then removed 4 gauges due to anomalous ref_pt_height values or WaterElev values higher than Land_Elev values.
Then added new calculated columns:
WaterElev = TsRefElev - Water_Leve
DepthWater = WaterElev - Ref_pt_height
Ref_pt_height = TsRefElev - LandElev
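A minimal arcpy sketch of these column calculations, assuming the field names above exist on the joined bore layer (the path is a hypothetical placeholder). Note that Ref_pt_height is listed last above but feeds the DepthWater expression, so the sketch computes it first.

import arcpy

bores = r"C:\data\namoi.gdb\Hydmeas_bores"  # hypothetical path

calcs = [
    ("Ref_pt_height", "!TsRefElev! - !LandElev!"),   # computed first: DepthWater depends on it
    ("WaterElev", "!TsRefElev! - !Water_Leve!"),
    ("DepthWater", "!WaterElev! - !Ref_pt_height!"),
]
existing = {f.name for f in arcpy.ListFields(bores)}
for field, expr in calcs:
    if field not in existing:
        arcpy.management.AddField(bores, field, "DOUBLE")
    arcpy.management.CalculateField(bores, field, expr, "PYTHON3")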
Alternatively, for the 2006 dataset: selected only those bores with sample dates >= 1/5/2006 and < 1/7/2006.
2012_Wat_Elev - This raster was created by interpolating Water_Elev field points from HydmeasJune2012_only.shp using the Spatial Analyst Topo to Raster tool, with the alluvium boundary (NAM_113_Aquifer1_NamoiAlluviums.shp) as a boundary input source.
12_dw_olp_enf - Selected out only those bores that are in both source files.
Then ran DepthWater through Topo to Raster, with the alluvium as the boundary and the ENFORCE option chosen, using only those bores present in both the 2012 and 2006 datasets.
2012dw1km_alu - Clipped the WatercourseLines layer to the Namoi subregion, then selected 'Major' watercourses only. Then used the Buffer geoprocessing tool to create a polygon delineating an area 1 km around all the major streams in the Namoi subregion.
Selected points from HydmeasJune2012_only.shp that were within 1 km of features in the WatercourseLines layer, then used the selected points, the 1 km buffer around the major watercourses, and the Topo to Raster tool in Spatial Analyst to create the raster.
Then used the alluvium boundary to truncate the raster, limiting it to the area of interest.
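A minimal arcpy sketch of the buffer and point-selection steps, assuming a hypothetical layer name (WatercourseLines_Major) for the clipped major watercourses.

import arcpy

arcpy.env.workspace = r"C:\data\namoi"  # hypothetical folder

# 1 km buffer around the clipped major watercourses, dissolved to one polygon
arcpy.analysis.Buffer("WatercourseLines_Major.shp", "major_streams_1km.shp",
                      "1 Kilometer", dissolve_option="ALL")

# Select the June 2012 bores within 1 km of the major watercourses
lyr = arcpy.management.MakeFeatureLayer("HydmeasJune2012_only.shp", "bores_lyr")
arcpy.management.SelectLayerByLocation(lyr, "WITHIN_A_DISTANCE",
                                       "WatercourseLines_Major.shp", "1 Kilometer")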
12_minus_06 - Selected out bores from the 2006 dataset that are also in the 2012 dataset. Then created a raster using DepthWater in Topo to Raster, with the ENFORCE option chosen to remove sinks and the alluvium as the boundary. Then, using the Raster Calculator (Map Algebra), subtracted the raster just created from 12_dw_olp_enf.
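A minimal arcpy sketch of the Topo to Raster interpolation and differencing, under stated assumptions: the workspace and bore feature class names are hypothetical placeholders, and the cell size is assumed since the record does not state one.

import arcpy
from arcpy.sa import Raster, TopoToRaster, TopoPointElevation, TopoBoundary

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\namoi.gdb"  # hypothetical workspace

# Interpolate DepthWater from the bores common to 2006 and 2012, with
# drainage enforcement on and the alluvium polygon as the boundary
inputs = [TopoPointElevation([["bores_2006_in_2012", "DepthWater"]]),
          TopoBoundary(["NAM_113_Aquifer1_NamoiAlluviums"])]
dw_2006 = TopoToRaster(inputs, cell_size=100, enforce="ENFORCE")  # cell size assumed
dw_2006.save("dw_2006_enf")

# 12_minus_06: subtract the 2006 surface from the 2012 surface
diff = Raster("12_dw_olp_enf") - dw_2006
diff.save("12_minus_06")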
Bioregional Assessment Programme (2017) Namoi bore analysis rasters. Bioregional Assessment Derived Dataset. Viewed 10 December 2018, http://data.bioregionalassessments.gov.au/dataset/7604087e-859c-4a92-8548-0aa274e8a226.
Derived From Bioregional Assessment areas v02
Derived From Gippsland Project boundary
Derived From Bioregional Assessment areas v04
Derived From Upper Namoi groundwater management zones
Derived From Natural Resource Management (NRM) Regions 2010
Derived From Bioregional Assessment areas v03
Derived From Victoria - Seamless Geology 2014
Derived From GIS analysis of HYDMEAS - Hydstra Groundwater Measurement Update: NSW Office of Water - Nov2013
Derived From Bioregional Assessment areas v01
Derived From GEODATA TOPO 250K Series 3, File Geodatabase format (.gdb)
Derived From GEODATA TOPO 250K Series 3
Derived From NSW Catchment Management Authority Boundaries 20130917
Derived From Geological Provinces - Full Extent
Derived From Hydstra Groundwater Measurement Update - NSW Office of Water, Nov2013
The geospatial data presented here as ArcGIS layers denote landcover/landuse classifications to support field sampling efforts that occurred within the Cache Creek Settling Basin (CCSB) from 2010-2017. Manual photointerpretation of a National Agriculture Imagery Program (NAIP) dataset collected in 2012 was used to characterize landcover/landuse categories (hereafter habitat classes). Initially, 9 categories were assigned based on vegetation structure (Vegtype1). These were then parsed into two levels of habitat classes chosen for their representativeness and use in statistical analyses of field sampling. At the coarsest level (Landcover 1), five habitat classes were assigned: Agriculture, Riparian, Floodplain, Open Water, and Road. At the more refined level (Landcover 2), ten habitat classes were nested within these five categories.

Agriculture was not further refined within Landcover 2, as little consistency was expected between years as fields rotated between corn, pumpkin, tomatoes, and other row crops. Riparian habitat, marked by large canopy trees (such as Populus fremontii (cottonwood)) neighboring stream channels, also was not further refined. Floodplain habitat was separated into two categories: Mixed NonWoody (which included both Mowed and Barren habitats) and Mixed Woody. This separation of the Floodplain habitat class (Landcover 1) into Woody and NonWoody was performed with a 100 m2 moving-window analysis in ArcGIS, where habitats were designated as either ≥50% shrub or tree cover (Woody) or <50%, and thus dominated by herbaceous vegetation cover (NonWoody). Open Water habitat was refined to consider both agricultural Canal (created) and Stream (natural) habitats. Road habitat was refined to separate Levee Roads (which included both the drivable portion and the apron on either side) from Interior Roads, which were less managed.

The map was tested for errors of omission and commission on the initial 9 categories during November 2014. Random points (n=100) were predetermined, and a total of 80 were selected for field verification. Type 1 (false positive) and Type 2 (false negative) errors were assessed. The survey indicated several corrections necessary in the final version of the map: 1) We noted the presence of woody species in "NonWoody" habitats, especially Baccharis salicifolia (mulefat); habitats were thus classified as "Woody" only with ≥50% presence of canopy species (e.g. tamarisk, black willow). 2) Riparian sites were over-characterized and thus were constrained back to near stream channels only; walnut (Juglans spp.) and willow stands alongside fields and irrigation canals were changed to Mixed Woody Floodplain. Fine-tuning the final habitat distributions was thus based on field reconnaissance, scalar needs for classifying field data (sediment, water, bird, and fish collections), and validation of data categories using species observations from scientist field notes. Calibration was made using point data from the random survey and scientist field notes, to remove all identified sources of error and reach an accuracy of 100%.

The coverage "CCSB_Habitat_2012" is provided as an ArcGIS shapefile based on a suite of 7 interconnected files coded with the suffixes cpg, dbf, sbn, sbx, shp, shx, and prj. Each file provides a component of the coverage (such as the database or projection), and all files are necessary to open the "CCSB_Habitat_2012.shp" file with full functionality.
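A minimal sketch of how the ≥50% moving-window Woody/NonWoody separation could be reproduced, under stated assumptions: the record does not describe the exact implementation, so the sketch assumes a hypothetical 1 m binary woody-cover raster derived from the photointerpreted polygons and approximates the 100 m2 window with a 10 m x 10 m rectangle.

import arcpy
from arcpy.sa import Con, FocalStatistics, NbrRectangle, Raster

arcpy.CheckOutExtension("Spatial")

# Hypothetical 1 m binary raster: 1 = shrub/tree cover, 0 = herbaceous
woody = Raster(r"C:\data\ccsb\woody_cover.tif")

# Mean of a binary raster over a 10 m x 10 m (100 m2) window gives the
# fraction of woody cover around each cell
frac = FocalStatistics(woody, NbrRectangle(10, 10, "CELL"), "MEAN")

# >= 50% woody cover -> Mixed Woody (1); otherwise Mixed NonWoody (0)
woody_class = Con(frac >= 0.5, 1, 0)
woody_class.save(r"C:\data\ccsb\floodplain_woody.tif")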
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset was derived by the Bioregional Assessment Programme from multiple source datasets. The source datasets are identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement.
This resource contains raster datasets created using ArcGIS to analyse groundwater levels in the Namoi subregion.
This is an update to some of the data that is registered here: http://data.bioregionalassessments.gov.au/dataset/7604087e-859c-4a92-8548-0aa274e8a226
These data layers were created in ArcGIS as part of the analysis to investigate surface water - groundwater connectivity in the Namoi subregion. The data layers provide several of the figures presented in the Namoi 2.1.5 Surface water - groundwater interactions report.
Extracted points inside the Namoi subregion boundary. Converted bore and pipe values to Hydrocode format, changed the heading of the 'Value' column to 'Waterlevel', and removed unnecessary columns, then joined to Updated_NSW_GroundWaterLevel_data_analysis_v01\NGIS_NSW_Bore_Join_Hydmeas_unique_bores.shp, clipped to include only those bores within the Namoi subregion.
Selected only those bores with sample dates >= 26/4/2012 and < 31/7/2012. Then removed 4 gauges due to anomalous ref_pt_height values or WaterElev values higher than Land_Elev values.
Then added new calculated columns:
WaterElev = TsRefElev - Water_Leve
DepthWater = WaterElev - Ref_pt_height
Ref_pt_height = TsRefElev - LandElev
Alternatively, for the 2006 dataset: selected only those bores with sample dates >= 1/5/2006 and < 1/7/2006.
2012_Wat_Elev - This raster was created by interpolating Water_Elev field points from HydmeasJune2012_only.shp using the Spatial Analyst Topo to Raster tool, with the alluvium boundary (NAM_113_Aquifer1_NamoiAlluviums.shp) as a boundary input source.
12_dw_olp_enf - Selected out only those bores that are in both source files.
Then ran DepthWater through Topo to Raster, with the alluvium as the boundary and the ENFORCE option chosen, using only those bores present in both the 2012 and 2006 datasets.
2012dw1km_alu - Clipped the WatercourseLines layer to the Namoi subregion, then selected 'Major' watercourses only. Then used the Buffer geoprocessing tool to create a polygon delineating an area 1 km around all the major streams in the Namoi subregion.
Selected points from HydmeasJune2012_only.shp that were within 1 km of features in the WatercourseLines layer, then used the selected points, the 1 km buffer around the major watercourses, and the Topo to Raster tool in Spatial Analyst to create the raster.
Then used the alluvium boundary to truncate the raster, limiting it to the area of interest.
12_minus_06 - Selected out bores from the 2006 dataset that are also in the 2012 dataset. Then created a raster using DepthWater in Topo to Raster, with the ENFORCE option chosen to remove sinks and the alluvium as the boundary. Then, using the Raster Calculator (Map Algebra), subtracted the raster just created from 12_dw_olp_enf.
Bioregional Assessment Programme (2017) Namoi bore analysis rasters - updated. Bioregional Assessment Derived Dataset. Viewed 10 December 2018, http://data.bioregionalassessments.gov.au/dataset/effa0039-ba15-459e-9211-232640609d44.
Derived From Bioregional Assessment areas v02
Derived From Gippsland Project boundary
Derived From Bioregional Assessment areas v04
Derived From Upper Namoi groundwater management zones
Derived From Natural Resource Management (NRM) Regions 2010
Derived From Bioregional Assessment areas v03
Derived From Victoria - Seamless Geology 2014
Derived From GIS analysis of HYDMEAS - Hydstra Groundwater Measurement Update: NSW Office of Water - Nov2013
Derived From Bioregional Assessment areas v01
Derived From GEODATA TOPO 250K Series 3, File Geodatabase format (.gdb)
Derived From GEODATA TOPO 250K Series 3
Derived From NSW Catchment Management Authority Boundaries 20130917
Derived From Geological Provinces - Full Extent
Derived From Hydstra Groundwater Measurement Update - NSW Office of Water, Nov2013