The data represent web-scraping of hyperlinks from a selection of environmental stewardship organizations that were identified in the 2017 NYC Stewardship Mapping and Assessment Project (STEW-MAP) (USDA 2017). There are two data sets: 1) the original scrape containing all hyperlinks within the websites and associated attribute values (see "README" file); 2) a cleaned and reduced dataset formatted for network analysis. For dataset 1: Organizations were selected from the 2017 NYC Stewardship Mapping and Assessment Project (STEW-MAP) (USDA 2017), a publicly available spatial data set about environmental stewardship organizations working in New York City, USA (N = 719). To create a smaller and more manageable sample to analyze, all organizations that intersected (i.e., worked entirely within or overlapped) the NYC borough of Staten Island were selected for a geographically bounded sample. Only organizations with working websites that the web scraper could access were retained for the study (n = 78). The websites were scraped between 09 and 17 June 2020 to a maximum search depth of ten using the snaWeb package (version 1.0.1, Stockton 2020) in the R computational language environment (R Core Team 2020). For dataset 2: The complete scrape results were cleaned, reduced, and formatted as a standard edge-array (node1, node2, edge attribute) for network analysis. See the "README" file for further details. References: R Core Team. (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/. Version 4.0.3. Stockton, T. (2020). snaWeb Package: An R package for finding and building social networks for a website, version 1.0.1. USDA Forest Service. (2017). Stewardship Mapping and Assessment Project (STEW-MAP). New York City Data Set. Available online at https://www.nrs.fs.fed.us/STEW-MAP/data/. This dataset is associated with the following publication: Sayles, J., R. Furey, and M. Ten Brink. How deep to dig: effects of web-scraping search depth on hyperlink network analysis of environmental stewardship organizations. Applied Network Science. Springer Nature, New York, NY, 7: 36, (2022).
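For readers who want to work with dataset 2 directly, here is a minimal sketch of loading the edge-array into Python for network analysis. The file name and exact column headers are assumptions (check the README for the actual names), and networkx stands in for whatever network software you prefer.

```python
# Minimal sketch: load the cleaned edge-array (dataset 2) and build a
# directed hyperlink network. The file name "stewmap_edges.csv" and the
# column headers are assumptions; consult the README for the actual names.
import pandas as pd
import networkx as nx

edges = pd.read_csv("stewmap_edges.csv")  # columns: node1, node2, edge attribute

# Hyperlinks point from one site to another, so use a directed graph.
G = nx.from_pandas_edgelist(
    edges,
    source="node1",
    target="node2",
    edge_attr=True,
    create_using=nx.DiGraph,
)

print(G.number_of_nodes(), "nodes;", G.number_of_edges(), "hyperlink edges")
# In-degree highlights the most linked-to websites in the sample.
print(sorted(G.in_degree, key=lambda kv: kv[1], reverse=True)[:5])
```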
Are you looking to identify B2B leads to promote your business, product, or service? Outscraper Google Maps Scraper might just be the tool you've been searching for. This powerful software enables you to extract business data directly from Google's extensive database, which spans millions of businesses across countless industries worldwide.
Outscraper Google Maps Scraper is a tool built with advanced technology that lets you scrape a wide range of valuable information about businesses from Google's database. This information includes, but is not limited to, business names, addresses, contact information, website URLs, reviews, ratings, and operational hours.
Whether you are a small business trying to make a mark or a large enterprise exploring new territories, the data obtained from the Outscraper Google Maps Scraper can be a treasure trove. This tool provides a cost-effective, efficient, and accurate method to generate leads and gather market insights.
By using Outscraper, you'll gain a significant competitive edge as it allows you to analyze your market and find potential B2B leads with precision. You can use this data to understand your competitors' landscape, discover new markets, or enhance your customer database. The tool offers the flexibility to extract data based on specific parameters like business category or geographic location, helping you to target the most relevant leads for your business.
In a world that's growing increasingly data-driven, utilizing a tool like Outscraper Google Maps Scraper could be instrumental to your business's success. If you're looking to get ahead in your market and find B2B leads in a more efficient and precise manner, Outscraper is worth considering. It streamlines the data collection process, allowing you to focus on what truly matters: using the data to grow your business.
https://outscraper.com/google-maps-scraper/
As a result of the Google Maps scraping, your data file will contain the following details:
Query, Name, Site, Type, Subtypes, Category, Phone, Full Address, Borough, Street, City, Postal Code, State, US State, Country, Country Code, Latitude, Longitude, Time Zone, Plus Code, Rating, Reviews, Reviews Link, Reviews Per Scores, Photos Count, Photo, Street View, Working Hours, Working Hours Old Format, Popular Times, Business Status, About, Range, Posts, Verified, Owner ID, Owner Title, Owner Link, Reservation Links, Booking Appointment Link, Menu Link, Order Links, Location Link, Place ID, Google ID, Reviews ID
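Once exported, the file can be post-processed with ordinary data tooling. The snippet below is a hedged sketch that assumes a CSV export with lowercase column names such as name, site, rating, and reviews; verify these against the header row of your own export.

```python
# Sketch: filter an Outscraper Google Maps export down to promising B2B leads.
# The file name and lowercase column names are assumptions; verify them
# against the header row of your own export.
import pandas as pd

df = pd.read_csv("outscraper_export.csv")

# Keep established, well-rated businesses that list a website to contact.
leads = df[(df["rating"] >= 4.0) & (df["reviews"] >= 20) & df["site"].notna()]
leads[["name", "site", "phone", "full_address", "category"]].to_csv(
    "b2b_leads.csv", index=False
)
```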
If you want to enrich your datasets with social media accounts and many more details, you can combine the Google Maps Scraper with the Domain Contact Scraper, as sketched below.
Domain Contact Scraper can scrape these details:
Email, Facebook, GitHub, Instagram, LinkedIn, Phone, Twitter, YouTube
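A hedged sketch of that combination: derive a domain from each Google Maps result and join it to the Domain Contact Scraper output. File and column names here are illustrative assumptions.

```python
# Sketch: enrich Google Maps results with Domain Contact Scraper details by
# joining on website domain. File and column names are illustrative.
from urllib.parse import urlparse

import pandas as pd

maps = pd.read_csv("outscraper_export.csv")
contacts = pd.read_csv("domain_contacts.csv")  # assumed: domain, email, linkedin, ...

def to_domain(url):
    """Reduce a URL like https://www.example.com/about to example.com."""
    if pd.isna(url):
        return None
    netloc = urlparse(str(url)).netloc or str(url)
    return netloc.lower().removeprefix("www.")

maps["domain"] = maps["site"].map(to_domain)
enriched = maps.merge(contacts, on="domain", how="left")
enriched.to_csv("enriched_leads.csv", index=False)
```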
In 2007, the California Ocean Protection Council initiated the California Seafloor Mapping Program (CSMP), designed to create a comprehensive seafloor map of high-resolution bathymetry, marine benthic habitats, and geology within California’s State Waters. The program supports a large number of coastal-zone- and ocean-management issues, including the California Marine Life Protection Act (MLPA) (California Department of Fish and Wildlife, 2008), which requires information about the distribution of ecosystems as part of the design and proposal process for the establishment of Marine Protected Areas. A focus of CSMP is to map California’s State Waters with consistent methods at a consistent scale. The CSMP approach is to create highly detailed seafloor maps through collection, integration, interpretation, and visualization of swath sonar data (the undersea equivalent of satellite remote-sensing data in terrestrial mapping), acoustic backscatter, seafloor video, seafloor photography, high-resolution seismic-reflection profiles, and bottom-sediment sampling data. The map products display seafloor morphology and character, identify potential marine benthic habitats, and illustrate both the surficial seafloor geology and shallow (to about 100 m) subsurface geology. It is emphasized that the more interpretive habitat and geology data rely on the integration of multiple, new high-resolution datasets and that mapping at small scales would not be possible without such data. This approach and CSMP planning are based in part on recommendations of the Marine Mapping Planning Workshop (Kvitek and others, 2006), attended by coastal and marine managers and scientists from around the state. That workshop established geographic priorities for a coastal mapping project and identified the need for coverage of “lands” from the shore strand line (defined as Mean Higher High Water; MHHW) out to the 3-nautical-mile (5.6-km) limit of California’s State Waters. Unfortunately, surveying the zone from MHHW out to 10-m water depth is not consistently possible using ship-based surveying methods, owing to sea state (for example, waves, wind, or currents), kelp coverage, and shallow rock outcrops. Accordingly, some of the data presented in this series commonly do not cover the zone from the shore out to 10-m depth. These data are part of a series of online U.S. Geological Survey (USGS) publications, each of which includes several map sheets, some explanatory text, and a descriptive pamphlet. Each map sheet is published as a PDF file. Geographic information system (GIS) files that contain both ESRI ArcGIS raster grids (for example, bathymetry, seafloor character) and geotiffs (for example, shaded relief) are also included for each publication. For those who do not own the full suite of ESRI GIS and mapping software, the data can be read using ESRI ArcReader, a free viewer that is available at http://www.esri.com/software/arcgis/arcreader/index.html (last accessed September 20, 2013). The California Seafloor Mapping Program is a collaborative venture among numerous federal and state agencies, academia, and the private sector.
CSMP partners include the California Coastal Conservancy, the California Ocean Protection Council, the California Department of Fish and Wildlife, the California Geological Survey, California State University at Monterey Bay’s Seafloor Mapping Lab, Moss Landing Marine Laboratories Center for Habitat Studies, Fugro Pelagos, Pacific Gas and Electric Company, National Oceanic and Atmospheric Administration (NOAA, including National Ocean Service–Office of Coast Surveys, National Marine Sanctuaries, and National Marine Fisheries Service), U.S. Army Corps of Engineers, the Bureau of Ocean Energy Management, the National Park Service, and the U.S. Geological Survey. These web services for the Offshore of Point Conception map area include data layers that are associated with the GIS files and map sheets available from the USGS CSMP web page at https://walrus.wr.usgs.gov/mapping/csmp/index.html. Each published CSMP map area includes a data catalog of geographic information system (GIS) files; map sheets that contain explanatory text; and an associated descriptive pamphlet. This web service represents the available data layers for this map area. Data from different sonar surveys were combined to generate comprehensive high-resolution bathymetry and acoustic-backscatter coverage of the map area. These data reveal a range of physiographic features, including exposed bedrock outcrops and large fields of sand waves, as well as many human impacts on the seafloor. To validate geological and biological interpretations of the sonar data, the U.S. Geological Survey towed a camera sled over specific offshore locations, collecting both video and photographic imagery; these “ground-truth” surveying data are available from the CSMP Video and Photograph Portal at https://doi.org/10.5066/F7J1015K. The “seafloor character” data layer shows classifications of the seafloor on the basis of depth, slope, rugosity (ruggedness), and backscatter intensity, further informed by the ground-truth-survey imagery. The “potential habitats” polygons are delineated on the basis of substrate type, geomorphology, seafloor process, or other attributes that may provide a habitat for a specific species or assemblage of organisms. Representative seismic-reflection profile data from the map area are also included and provide information on the subsurface stratigraphy and structure of the map area. The distribution and thickness of young sediment (deposited over the past about 21,000 years, during the most recent sea-level rise) is interpreted on the basis of the seismic-reflection data. The geologic polygons merge onshore geologic mapping (compiled from existing maps by the California Geological Survey) and new offshore geologic mapping that is based on integration of high-resolution bathymetry and backscatter imagery, seafloor-sediment and rock samples, digital camera and video imagery, and high-resolution seismic-reflection profiles. The information provided by the map sheets, pamphlet, and data catalog has a broad range of applications. High-resolution bathymetry, acoustic backscatter, ground-truth-surveying imagery, and habitat mapping all contribute to habitat characterization and ecosystem-based management by providing essential data for delineation of marine protected areas and ecosystem restoration. Many of the maps provide high-resolution baselines that will be critical for monitoring environmental change associated with climate change, coastal development, or other forcings.
High-resolution bathymetry is a critical component for modeling coastal flooding caused by storms and tsunamis, as well as inundation associated with longer-term sea-level rise. Seismic-reflection and bathymetric data help characterize earthquake and tsunami sources, critical for natural-hazard assessments of coastal zones. Information on sediment distribution and thickness is essential to the understanding of local and regional sediment transport, as well as the development of regional sediment-management plans. In addition, siting of any new offshore infrastructure (for example, pipelines, cables, or renewable-energy facilities) will depend on high-resolution mapping. Finally, this mapping will both stimulate and enable new scientific research and also raise public awareness of, and education about, coastal environments and issues. Web services were created using an ArcGIS service definition file. The ArcGIS REST service and OGC WMS service include all Offshore of Point Conception map area data layers. Data layers are symbolized as shown on the associated map sheets.
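For users without ESRI software, the GeoTIFF layers can also be read with open-source tools. The following is a minimal sketch using the Python rasterio package; the file name is hypothetical, and bathymetry is assumed to be stored as elevation in meters, negative below sea level.

```python
# Sketch: inspect a CSMP bathymetry GeoTIFF with open-source tools.
# The file name is hypothetical; elevations are assumed to be in meters,
# negative below sea level.
import rasterio

with rasterio.open("offshore_point_conception_bathymetry.tif") as src:
    depth = src.read(1, masked=True)  # band 1, with nodata cells masked
    print("CRS:", src.crs, "| cell size:", src.res)
    print("Elevation range (m):", float(depth.min()), "to", float(depth.max()))

    # Example derived mask: the nearshore zone shallower than 10 m, which the
    # text notes is often impossible to survey with ship-based methods.
    shallow = depth > -10
```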
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The SSURGO database contains information about soil as collected by the National Cooperative Soil Survey over the course of a century. The information can be displayed in tables or as maps and is available for most areas in the United States and the Territories, Commonwealths, and Island Nations served by the USDA-NRCS (Natural Resources Conservation Service). The information was gathered by walking over the land and observing the soil. Many soil samples were analyzed in laboratories. The maps outline areas called map units. The map units describe soils and other components that have unique properties, interpretations, and productivity. The information was collected at scales ranging from 1:12,000 to 1:63,360. More details were gathered at a scale of 1:12,000 than at a scale of 1:63,360. The mapping is intended for natural resource planning and management by landowners, townships, and counties. Some knowledge of soils data and map scale is necessary to avoid misunderstandings. The maps are linked in the database to information about the component soils and their properties for each map unit. Each map unit may contain one to three major components and some minor components. The map units are typically named for the major components. Examples of information available from the database include available water capacity, soil reaction, electrical conductivity, and frequency of flooding; yields for cropland, woodland, rangeland, and pastureland; and limitations affecting recreational development, building site development, and other engineering uses. SSURGO datasets consist of map data, tabular data, and information about how the maps and tables were created. The extent of a SSURGO dataset is a soil survey area, which may consist of a single county, multiple counties, or parts of multiple counties. SSURGO map data can be viewed in the Web Soil Survey or downloaded in ESRI® Shapefile format. The coordinate systems are Geographic. Attribute data can be downloaded in text format that can be imported into a Microsoft® Access® database. A complete SSURGO dataset consists of:
- GIS data (as ESRI® Shapefiles)
- attribute data (dbf files; a multitude of separate tables)
- database template (MS Access format; this helps with understanding the structure and linkages of the various tables)
- metadata
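For those working outside Microsoft Access, a minimal sketch of reading one tabular file with Python follows. SSURGO tabular text files are pipe-delimited with no header row, so column names must be supplied from the metadata reports described below; the path and column positions here are assumptions.

```python
# Sketch: read a SSURGO tabular file without MS Access. SSURGO tabular text
# files are pipe-delimited with no header row; supply column names from the
# "Tables and Columns" metadata report. Path and positions are assumptions.
import pandas as pd

mapunit = pd.read_csv("tabular/mapunit.txt", sep="|", header=None, quotechar='"')

# In the mapunit table, musym and muname lead the row and the mukey primary
# key is conventionally the final column (verify against the metadata report).
mapunit = mapunit.rename(
    columns={0: "musym", 1: "muname", mapunit.columns[-1]: "mukey"}
)
print(mapunit[["musym", "muname", "mukey"]].head())
```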
Resources in this dataset: Resource Title: SSURGO Metadata - Tables and Columns Report. File Name: SSURGO_Metadata_-_Tables_and_Columns.pdf. Resource Description: This report contains a complete listing of all columns in each database table. Please see the SSURGO Metadata - Table Column Descriptions Report for more detailed descriptions of each column.
Find the Soil Survey Geographic (SSURGO) web site at https://www.nrcs.usda.gov/wps/portal/nrcs/detail/vt/soils/?cid=nrcs142p2_010596#Datamart. Resource Title: SSURGO Metadata - Table Column Descriptions Report. File Name: SSURGO_Metadata_-_Table_Column_Descriptions.pdf. Resource Description: This report contains the descriptions of all columns in each database table. Please see the SSURGO Metadata - Tables and Columns Report for a complete listing of all columns in each database table.
Find the Soil Survey Geographic (SSURGO) web site at https://www.nrcs.usda.gov/wps/portal/nrcs/detail/vt/soils/?cid=nrcs142p2_010596#Datamart. Resource Title: SSURGO Data Dictionary. File Name: SSURGO 2.3.2 Data Dictionary.csv. Resource Description: CSV version of the data dictionary.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
Aquatic environmental DNA (eDNA) sampling is the collection of DNA released by a target species into streams, rivers, ponds, lakes, and wetlands. Detection of stream fish with eDNA can be remarkably sensitive: 100% detection efficiency of target species has been achieved despite order-of-magnitude changes in stream discharge. The eDNA samples in the eDNAtlas database describe species occurrence locations and were collected by the U.S. Forest Service and numerous agencies that have partnered with the National Genomics Center for Wildlife and Fish Conservation (NGC) throughout the United States. The data were collected for a variety of project-specific purposes that included: species status assessments, trend monitoring at one or many sites, development of predictive species distribution models, detection and tracking of non-native species invasions, and assessments of habitat restoration efforts. The eDNAtlas database consists of two feature classes. The first component (eDNAtlas_West_AGOL_ResultsOnly) is a database of georeferenced species occurrence locations based on eDNA field sampling results, which are downloadable by species through a dynamic ArcGIS Online (AGOL) mapping tool. The earliest eDNA samples in the database were collected in 2015, but new samples and results are added annually to the database, which houses thousands of species occurrence records. The second component (eDNAtlas_West_SampleGridAndResults) is a systematically-spaced 1-kilometer grid of potential sample points in streams and rivers throughout the western United States. Future versions will include the eastern United States as well. The points in the sampling grid are arrayed along the medium-resolution National Hydrography Dataset Version 2 (NHDPlusV2) and can be used to develop custom eDNA sampling strategies for many purposes. Each sample point has a unique identity code that enables efficient integration of processed eDNA sample results with the species occurrence database. The eDNAtlas is accessed via an interactive ArcGIS Online (AGOL) map that allows users to view and download sample site information and lab results of species occurrence for the U.S. The results are primarily based on samples analyzed at the National Genomics Center for Wildlife and Fish Conservation (NGC) and associated with geospatial attributes created by the Boise Spatial Streams Group (BSSG). The AGOL map displays results for all species sampled within an 8-digit USGS hydrologic unit or series of units. The map initially opens to the project extent, but allows users to zoom to areas of interest. Symbols indicate whether a field sample has been collected and processed at a specific location, and if the latter, whether the target species was present. Each flowing-water site is assigned a unique identification code in the database to ensure that it can be tracked and matched to geospatial habitat descriptors or other attributes for subsequent analyses and reports. Because no comparable database has been built for standing water, results for those sites lack this additional information but still provide data on the sample and species detected. Resources in this dataset: Resource Title: The Aquatic eDNAtlas Project: Lab Results Map - USFS RMRS.
File Name: Web Page, url: https://usfs.maps.arcgis.com/apps/webappviewer/index.html?id=b496812d1a8847038687ff1328c481fa. For details on using the map, see the Aquatic eDNAtlas Project: Lab Results ArcGIS Online Map Guide.
The Aquatic eDNAtlas project: Effective conservation and management of freshwater biota during an era of rapid climate change, nonnative species invasions, and habitat loss, as well as widespread efforts to maintain, restore, and expand the distributions of at-risk species, requires precise information about species distributions across broad areas to guide decision-making. Environmental DNA (eDNA) sampling of aquatic environments offers a reliable, cost-effective, and sensitive means of determining species presence if samples are collected following rigorous field protocols (Carim et al. 2016a) and analyzed using properly designed eDNA assays (Wilcox et al. 2015a). Because of its advantages relative to traditional sampling techniques, eDNA sampling is being rapidly adopted to address questions about the distribution of species in headwater streams (McKelvey et al. 2016), the success of nonnative species removals, and the rangewide patterns of occupancy by individual species (Rangewide bull trout eDNA project). To foster these efforts, the National Genomics Center for Wildlife and Fish Conservation (NGC) partners with dozens of natural resource organizations throughout North America to provide technical assistance in the form of eDNA assay development and field sampling designs for fish, amphibians, crustaceans, mussels, mammals, and birds. Samples are collected at thousands of sites annually through those partnerships and analyzed at the NGC, which has created a large database that is rapidly growing in geographic extent and species diversity. To facilitate access to the NGC database in spatially explicit formats that maximize the use and sharing of eDNA sampling results, as well as the efficient collection of new samples, the National Fish and Wildlife Foundation commissioned the Aquatic eDNAtlas project and website. This website provides information about: 1) the science behind eDNA sampling; 2) the recommended field protocol for eDNA sampling and the equipment loan program administered by the NGC; 3) a systematically-spaced sampling grid for all flowing waters of the U.S. in a downloadable format that includes unique database identifiers and geographic coordinates for all sampling sites; and 4) the results of eDNA sampling at those sites where project partners have agreed to share data. If you have questions about the eDNAtlas project or are interested in partnering with the NGC to build eDNA assays for your species or to conduct aquatic species surveys, please visit the website and contact us. Map Author: Sharon (Parkes) Payne; USDA Forest Service; Rocky Mountain Research Station; Water and Watersheds Program
This web application shows predetermined field sampling sites in a systematically-spaced sampling grid for all flowing waters in the western US. Use this map to plan your field work and determine the best streams to take field samples for specific species of interest. Please visit the website for more information, supporting science, species lists, and sampling protocols: https://www.fs.usda.gov/research/rmrs/projects/ednatlas. Please be aware that workflows have been modified and samples are only included in the geodatabase if the contributor has given explicit permission to include them. (This means that a few points listed as 'sampled' in previous versions will now be listed as 'not sampled'.)
This table provides coordinates along with some basic abiotic characteristics of the sampling location. Water temperature (°C) is collected at the surface and water depth is the average depth (feet) observed during the sampling event. Some abiotic fields may have null values.
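A hedged sketch of how the shared identity code supports that integration in Python follows; the geodatabase path and field names ("SiteID", "Species", "Detection") are assumptions, while the layer names follow the component names given above.

```python
# Sketch: join processed eDNA lab results back to the 1-km sampling grid via
# the unique site identity code. The geodatabase path and field names are
# assumptions; layer names follow the component names in the description.
import geopandas as gpd

grid = gpd.read_file("eDNAtlas.gdb", layer="eDNAtlas_West_SampleGridAndResults")
results = gpd.read_file("eDNAtlas.gdb", layer="eDNAtlas_West_AGOL_ResultsOnly")

# The shared identity code lets each lab result snap back to its grid point.
merged = grid.merge(
    results[["SiteID", "Species", "Detection"]], on="SiteID", how="left"
)
print(merged["Detection"].value_counts(dropna=False))
```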
Information on water depth in river channels is important for a number of applications in water resource management but can be difficult to obtain via conventional field methods, particularly over large spatial extents and with the kind of frequency and regularity required to support monitoring programs. Remote sensing methods could provide a viable alternative means of mapping river bathymetry (i.e., water depth). The purpose of this study was to develop and test new, spectrally based techniques for estimating water depth from satellite image data. More specifically, a neural network-based temporal ensembling approach was evaluated in comparison to several other neural network depth retrieval (NNDR) algorithms. These methods are described in a manuscript titled "Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images" and the purpose of this data release is to make available the depth maps produced using these techniques. The images used as input were acquired by the SuperDove cubesats comprising the PlanetScope constellation, but the original images cannot be redistributed due to licensing restrictions; the end products derived from these images are provided instead. The large number of cubesats in the PlanetScope constellation allows for frequent temporal coverage, and the neural network-based approach takes advantage of this high-density time series of information by estimating depth via one of four NNDR methods described in the manuscript (a conceptual sketch of the two averaging strategies appears after the file description below):
1. Mean-spec: the images are averaged over time and the resulting mean image is used as input to the NNDR.
2. Mean-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is averaged to obtain the final depth map.
3. NN-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is then used as input to a second, ensembling neural network that essentially weights the depth estimates from the individual images so as to optimize the agreement between the image-derived depth estimates and field measurements of water depth used for training; the output from the ensembling neural network serves as the final depth map.
4. Optimal single image: a separate NNDR is applied independently to each image in the time series and only the image that yields the strongest agreement between the image-derived depth estimates and the field measurements of water depth used for training is used as the final depth map.
MATLAB (Version 24.1, including the Deep Learning Toolbox) source code for performing this analysis is provided in the function NN_depth_ensembling.m and the figure included on this landing page provides a flow chart illustrating the four different neural network-based depth retrieval methods. As examples of the resulting models, MATLAB *.mat data files containing the best-performing neural network model for each site are provided below, along with a file that lists the PlanetScope image identifiers for the images that were used for each site. To develop and test this new NNDR approach, the method was applied to satellite images from three rivers across the U.S.: the American, Colorado, and Potomac. For each site, field measurements of water depth available through other data releases were used for training and validation.
The depth maps produced via each of the four methods described above are provided as GeoTIFF files, with file name suffixes that indicate the method employed: X_mean-spec.tif, X_mean-depth.tif, X_NN-depth.tif, and X-single-image.tif, where X denotes the site name. The spatial resolution of the depth maps is 3 meters and the pixel values within each map are water depth estimates in units of meters.
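As a conceptual illustration only (the authoritative implementation is the MATLAB function NN_depth_ensembling.m named above), the two averaging strategies can be sketched in Python as follows, where nndr is a stand-in for a trained depth-retrieval model.

```python
# Conceptual sketch of the two averaging strategies; nndr is a stand-in for a
# trained depth-retrieval model mapping a multispectral image (rows, cols,
# bands) to a depth map (rows, cols). Not the authors' MATLAB implementation.
import numpy as np

def mean_spec(images, nndr):
    """Mean-spec: average the image time series, then retrieve depth once."""
    return nndr(np.mean(images, axis=0))

def mean_depth(images, nndr):
    """Mean-depth: retrieve depth per image, then average the depth maps."""
    return np.mean([nndr(img) for img in images], axis=0)

# NN-depth replaces the simple average in mean_depth with a second, trained
# ensembling network; "optimal single image" keeps only the best single map.
# images: array of shape (T, rows, cols, bands) for T SuperDove acquisitions.
```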
To create this layer, OCTO staff used ABCA's definition of “Full-Service Grocery Stores” (https://abca.dc.gov/page/full-service-grocery-store#gsc.tab=0), pulled from the Food System Assessment below, and, using those criteria, determined locations that fulfilled the categories in section 1 of the definition. Then, staff reviewed the Office of Planning's Food System Assessment (https://dcfoodpolicycouncilorg.files.wordpress.com/2019/06/2018-food-system-assessment-final-6.13.pdf) list in Appendix D, comparing it to the list created from the ABCA definition, which led to the addition of a few additional examples that meet, or come very close to, the full-service grocery store criteria. The explanation from the Office of Planning regarding how the agency created its list: “To determine the number of grocery stores in the District, we analyzed existing business licenses in the Department of Consumer and Regulatory Affairs (2018) Business License Verification system (located at https://eservices.dcra.dc.gov/BBLV/Default.aspx). To distinguish grocery stores from convenience stores, we applied the Alcohol Beverage and Cannabis Administration’s (ABCA) definition of a full-service grocery store. This definition requires a store to be licensed as a grocery store, sell at least six different food categories, dedicate either 50% of the store’s total square feet or 6,000 square feet to selling food, and dedicate at least 5% of the selling area to each food category. This definition can be found at https://abca.dc.gov/page/full-service-grocery-store#gsc.tab=0. To distinguish small grocery stores from large grocery stores, we categorized large grocery stores as those 10,000 square feet or more. This analysis was conducted using data from the WDCEP’s Retail and Restaurants webpage (located at https://wdcep.com/dc-industries/retail/) and using ARCGIS Spatial Analysis tools when existing data was not available. Our final numbers differ slightly from existing reports like the DC Hunger Solutions’ Closing the Grocery Store Gap and WDCEP’s Grocery Store Opportunities Map; this difference likely comes from differences in our methodology and our exclusion of stores that have closed.” Staff also conducted a visual analysis of locations and relied on personal experience of visits to locations to determine whether they should be included in the list.
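To make the quoted criteria concrete, here is a hedged sketch that encodes them as a filter; the store-record fields are hypothetical, while the thresholds come directly from the ABCA definition and the Office of Planning's 10,000-square-foot cutoff quoted above.

```python
# Sketch: encode the quoted criteria as filters. The store-record fields are
# hypothetical; the thresholds come from the ABCA definition and the Office
# of Planning's 10,000 sq ft large-store cutoff quoted above.

def is_full_service_grocery(store: dict) -> bool:
    licensed = store["license_type"] == "grocery store"
    sells_six_categories = len(store["food_categories"]) >= 6
    food_sqft = store["food_selling_sqft"]
    enough_food_space = (
        food_sqft >= 0.5 * store["total_sqft"] or food_sqft >= 6000
    )
    five_pct_each = all(s >= 0.05 for s in store["category_area_shares"])
    return licensed and sells_six_categories and enough_food_space and five_pct_each

def is_large_grocery(store: dict) -> bool:
    return store["total_sqft"] >= 10000
```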