https://brightdata.com/license
The Google Maps dataset is ideal for getting extensive information on businesses anywhere in the world. Easily filter by location, business type, and other factors to get the exact data you need. The Google Maps dataset includes all major data points: timestamp, name, category, address, description, open_website, phone_number, open_hours, open_hours_updated, reviews_count, rating, main_image, reviews, url, lat, lon, place_id, country, and more.
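A record-level view of these data points can be sketched in Python. The sample records and the filter_places helper below are invented for illustration; they are not part of the dataset's delivery format.

```python
# Hedged sketch: filtering Google Maps dataset records by category and
# country. Field names follow the data points listed above; the records
# themselves are invented for illustration.
records = [
    {"name": "Blue Bottle Coffee", "category": "Cafe", "country": "US",
     "rating": 4.5, "reviews_count": 812},
    {"name": "Monmouth Coffee", "category": "Cafe", "country": "GB",
     "rating": 4.7, "reviews_count": 5021},
    {"name": "Ace Hardware", "category": "Hardware store", "country": "US",
     "rating": 4.2, "reviews_count": 140},
]

def filter_places(rows, category=None, country=None, min_rating=0.0):
    """Return rows matching the given category/country with rating >= min_rating."""
    return [
        r for r in rows
        if (category is None or r["category"] == category)
        and (country is None or r["country"] == country)
        and r["rating"] >= min_rating
    ]

us_cafes = filter_places(records, category="Cafe", country="US")
```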
Welcome to Apiscrapy, your ultimate destination for comprehensive location-based intelligence. As an AI-driven web scraping and automation platform, Apiscrapy excels in converting raw web data into polished, ready-to-use data APIs. With a unique capability to collect Google Address Data, Google Address API, Google Location API, Google Map, and Google Location Data with 100% accuracy, we redefine possibilities in location intelligence.
Key Features:
Unparalleled Data Variety: Apiscrapy offers a diverse range of address-related datasets, including Google Address Data and Google Location Data. Whether you seek B2B address data or detailed insights for various industries, we cover it all.
Integration with Google Address API: Seamlessly integrate our datasets with the powerful Google Address API. This collaboration ensures not just accessibility but a robust combination that amplifies the precision of your location-based insights.
Business Location Precision: Experience a new level of precision in business decision-making with our address data. Apiscrapy delivers accurate and up-to-date business locations, enhancing your strategic planning and expansion efforts.
Tailored B2B Marketing: Customize your B2B marketing strategies with precision using our detailed B2B address data. Target specific geographic areas, refine your approach, and maximize the impact of your marketing efforts.
Use Cases:
Location-Based Services: Companies use Google Address Data to provide location-based services such as navigation, local search, and location-aware advertisements.
Logistics and Transportation: Logistics companies utilize Google Address Data for route optimization, fleet management, and delivery tracking.
E-commerce: Online retailers integrate address autocomplete features powered by Google Address Data to simplify the checkout process and ensure accurate delivery addresses.
Real Estate: Real estate agents and property websites leverage Google Address Data to provide accurate property listings, neighborhood information, and proximity to amenities.
Urban Planning and Development: City planners and developers utilize Google Address Data to analyze population density, traffic patterns, and infrastructure needs for urban planning and development projects.
Market Analysis: Businesses use Google Address Data for market analysis, including identifying target demographics, analyzing competitor locations, and selecting optimal locations for new stores or offices.
Geographic Information Systems (GIS): GIS professionals use Google Address Data as a foundational layer for mapping and spatial analysis in fields such as environmental science, public health, and natural resource management.
Government Services: Government agencies utilize Google Address Data for census enumeration, voter registration, tax assessment, and planning public infrastructure projects.
Tourism and Hospitality: Travel agencies, hotels, and tourism websites incorporate Google Address Data to provide location-based recommendations, itinerary planning, and booking services for travelers.
Discover the difference with Apiscrapy – where accuracy meets diversity in address-related datasets, including Google Address Data, Google Address API, Google Location API, and more. Redefine your approach to location intelligence and make data-driven decisions with confidence. Revolutionize your business strategies today!
Welcome to APISCRAPY, your premier provider of Map Data solutions. Map Data encompasses various information related to geographic locations, including Google Map Data, Location Data, Address Data, and Business Location Data. Our advanced Google Map Data Scraper sets us apart by extracting comprehensive and accurate data from Google Maps and other platforms.
What sets APISCRAPY's Map Data apart are its key benefits:
Accuracy: Our scraping technology ensures the highest level of accuracy, providing reliable data for informed decision-making. We employ advanced algorithms to filter out irrelevant or outdated information, ensuring that you receive only the most relevant and up-to-date data.
Accessibility: With our data readily available through APIs, integration into existing systems is seamless, saving time and resources. Our APIs are easy to use and well-documented, allowing for quick implementation into your workflows. Whether you're a developer building a custom application or a business analyst conducting market research, our APIs provide the flexibility and accessibility you need.
Customization: We understand that every business has unique needs and requirements. That's why we offer tailored solutions to meet specific business needs. Whether you need data for a one-time project or ongoing monitoring, we can customize our services to suit your needs. Our team of experts is always available to provide support and guidance, ensuring that you get the most out of our Map Data solutions.
Our Map Data solutions cater to various use cases:
B2B Marketing: Gain insights into customer demographics and behavior for targeted advertising and personalized messaging. Identify potential customers based on their geographic location, interests, and purchasing behavior.
Logistics Optimization: Utilize Location Data to optimize delivery routes and improve operational efficiency. Identify the most efficient routes based on factors such as traffic patterns, weather conditions, and delivery deadlines.
Real Estate Development: Identify prime locations for new ventures using Business Location Data for market analysis. Analyze factors such as population density, income levels, and competition to identify opportunities for growth and expansion.
Geospatial Analysis: Leverage Map Data for spatial analysis, urban planning, and environmental monitoring. Identify trends and patterns in geographic data to inform decision-making in areas such as land use planning, resource management, and disaster response.
Retail Expansion: Determine optimal locations for new stores or franchises using Location Data and Address Data. Analyze factors such as foot traffic, proximity to competitors, and demographic characteristics to identify locations with the highest potential for success.
Competitive Analysis: Analyze competitors' business locations and market presence for strategic planning. Identify areas of opportunity and potential threats to your business by analyzing competitors' geographic footprint, market share, and customer demographics.
Experience the power of APISCRAPY's Map Data solutions today and unlock new opportunities for your business. With our accurate and accessible data, you can make informed decisions, drive growth, and stay ahead of the competition.
[ Related tags: Map Data, Google Map Data, Google Map Data Scraper, B2B Marketing, Location Data, Google Data, Address Data, Business Location Data, map scraping data, Google map data extraction, Transport and Logistic Data, Mobile Location Data, Mobility Data, IP Address Data, business listings APIs, map datasets, map APIs, POI dataset, GPS, Location Intelligence, Retail Site Selection, Sentiment Analysis, Marketing Data Enrichment, Point of Interest (POI) Mapping ]
We are developing a set of NASA Extensions to the Google Maps API—and soon to other frameworks such as OpenLayers as well—that will make these platforms more useful to NASA scientists and our colleagues elsewhere.
Are you looking to identify B2B leads to promote your business, product, or service? Outscraper Google Maps Scraper might just be the tool you've been searching for. This powerful software enables you to extract business data directly from Google's extensive database, which spans millions of businesses across countless industries worldwide.
Outscraper Google Maps Scraper is a tool built with advanced technology that lets you scrape a myriad of valuable information about businesses from Google's database. This information includes, but is not limited to, business names, addresses, contact information, website URLs, reviews, ratings, and operational hours.
Whether you are a small business trying to make a mark or a large enterprise exploring new territories, the data obtained from the Outscraper Google Maps Scraper can be a treasure trove. This tool provides a cost-effective, efficient, and accurate method to generate leads and gather market insights.
By using Outscraper, you'll gain a significant competitive edge as it allows you to analyze your market and find potential B2B leads with precision. You can use this data to understand your competitors' landscape, discover new markets, or enhance your customer database. The tool offers the flexibility to extract data based on specific parameters like business category or geographic location, helping you to target the most relevant leads for your business.
In a world that's growing increasingly data-driven, utilizing a tool like Outscraper Google Maps Scraper could be instrumental to your business's success. If you're looking to get ahead in your market and find B2B leads in a more efficient and precise manner, Outscraper is worth considering. It streamlines the data collection process, allowing you to focus on what truly matters: using the data to grow your business.
https://outscraper.com/google-maps-scraper/
As a result of the Google Maps scraping, your data file will contain the following details:
Query, Name, Site, Type, Subtypes, Category, Phone, Full Address, Borough, Street, City, Postal Code, State, US State, Country, Country Code, Latitude, Longitude, Time Zone, Plus Code, Rating, Reviews, Reviews Link, Reviews Per Scores, Photos Count, Photo, Street View, Working Hours, Working Hours Old Format, Popular Times, Business Status, About, Range, Posts, Verified, Owner ID, Owner Title, Owner Link, Reservation Links, Booking Appointment Link, Menu Link, Order Links, Location Link, Place ID, Google ID, Reviews ID.
If you want to enrich your datasets with social media accounts and many more details, you can combine Google Maps Scraper with Domain Contact Scraper.
Domain Contact Scraper can scrape these details:
Email, Facebook, GitHub, Instagram, LinkedIn, Phone, Twitter, YouTube.
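The enrichment step described above amounts to a join on the business website's domain. The sketch below illustrates that join in plain Python; both result sets are invented stand-ins, not actual exports from either tool.

```python
# Hedged sketch: enriching Google Maps scraper output with contacts from a
# domain-contact scraper, joined on the website domain. Both result sets
# are invented for illustration.
from urllib.parse import urlparse

maps_results = [
    {"name": "Acme Plumbing", "site": "https://acmeplumbing.example/contact"},
    {"name": "Best Bakery", "site": "https://bestbakery.example"},
]
contact_results = [
    {"domain": "acmeplumbing.example", "email": "info@acmeplumbing.example",
     "linkedin": "https://linkedin.com/company/acme-plumbing"},
]

def domain_of(url):
    """Extract the bare hostname from a URL for joining."""
    return urlparse(url).netloc.lower()

contacts_by_domain = {c["domain"]: c for c in contact_results}
enriched = [
    {**place, **contacts_by_domain.get(domain_of(place["site"]), {})}
    for place in maps_results
]
```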
This dataset provides comprehensive local business and point of interest (POI) data from Google Maps in real-time. It includes detailed business information such as addresses, websites, phone numbers, emails, ratings, reviews, business hours, and over 40 additional data points. Perfect for applications requiring local business data (B2B lead generation, B2B marketing), store locators, and business directories. The dataset is delivered in JSON format via a REST API.
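Consuming a JSON delivery like this is straightforward; the sketch below parses one business record and formats it for a store-locator listing. The payload shape is illustrative only, not the provider's documented schema.

```python
# Hedged sketch: parsing a JSON business record as it might be delivered
# by a local-business POI API. The payload shape is illustrative, not a
# documented schema.
import json

payload = json.loads("""
{
  "name": "Corner Bistro",
  "address": "123 Main St, Springfield",
  "phone": "+1-555-0100",
  "rating": 4.4,
  "reviews": 231,
  "hours": {"mon": "09:00-22:00", "sun": "closed"}
}
""")

def summarize(biz):
    """Produce a one-line summary suitable for a store locator listing."""
    return f'{biz["name"]} ({biz["rating"]}/5, {biz["reviews"]} reviews) - {biz["address"]}'

line = summarize(payload)
```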
Outscraper's Global Location Data service is an advanced solution for harnessing location-based data from Google Maps. Equipped with features such as worldwide coverage, precise filtering, and a plethora of data fields, Outscraper is your reliable source of fresh and accurate data.
Outscraper's Global Location Data Service leverages the extensive data accessible via Google Maps to deliver critical location data on a global scale. This service offers a robust solution for your global intelligence needs, utilizing cutting-edge technology to collect and analyze data from Google Maps and create accurate and relevant location datasets. The service is supported by a constant stream of reliable and current data, powered by Outscraper's advanced web scraping technology, guaranteeing that the data pulled from Google Maps is both fresh and accurate.
One of the key features of Outscraper's Global Location Data Service is its advanced filtering capabilities, allowing you to extract only the location data you need. This means you can specify particular categories, locations, and other criteria to obtain the most pertinent and valuable data for your business requirements, eliminating the need to sort through irrelevant records.
With Outscraper, you gain worldwide coverage for your location data needs. The service's advanced data scraping technology lets you collect data from any country and city without restrictions, making it an indispensable tool for businesses operating on a global scale or those looking to expand internationally. Outscraper provides a wealth of data, offering an unmatched number of fields to compile and enrich your location data. With over 40 data fields, you can generate comprehensive and detailed datasets that offer deep insights into your areas of interest.
The global reach of this service spans Africa, Asia, and Europe, covering over 150 countries, including but not limited to Zimbabwe in Africa, Yemen in Asia, and Slovenia in Europe. This broad coverage ensures that no matter where your business operations or interests lie, you will have access to the location data you need.
Experience the Outscraper difference today and elevate your location data analysis to the next level.
GapMaps Live is an easy-to-use location intelligence platform available across 25 countries globally that allows you to visualise your own store data combined with the latest demographic, economic and population movement intel, right down to the micro level, so you can make faster, smarter and surer decisions when planning your network growth strategy.
With one single login, you can access the latest estimates on resident and worker populations, census metrics (e.g. age, income, ethnicity), consuming class, retail spend insights and point-of-interest data across a range of categories including fast food, cafe, fitness, supermarket/grocery and more.
Some of the world's biggest brands including McDonald's, Subway, Burger King, Anytime Fitness and Domino's use GapMaps Live Map Data as a vital strategic tool where business success relies on up-to-date, easy-to-understand location intel that can power business case validation and drive rapid decision making.
Primary Use Cases for GapMaps Live Map Data include:
Some of the features our clients love about GapMaps Live Map Data include:
- View business locations, competitor locations, and demographic, economic and social data around your business or a selected location
- Understand consumer visitation patterns ("where from" and "where to"), frequency of visits, dwell time of visits, profiles of consumers and much more
- Save searched locations and drop pins
- Turn on/off all location listings by category
- View and filter data by metadata tags, for example hours of operation, contact details, services provided
- Combine public data in GapMaps with views of private data layers
- View data in layers to understand the impact of different data sources
- Share maps with teams
- Generate demographic reports and comparative analyses of different locations based on drive time, walk time or radius
- Access multiple countries and brands with a single login
- Access multiple brands under a parent login
- Capture field data such as photos, notes and documents using GapMaps Connect and integrate with GapMaps Live to get detailed insights on existing and proposed store locations
Meet Earth Engine. Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities and makes it available for scientists, researchers, and developers to detect changes, map trends, and quantify differences on the Earth's surface. Its interactive Timelapse viewer lets you travel back in time and see how the world has changed over the past twenty-nine years, one example of how Earth Engine can help gain insight into petabyte-scale datasets. The public data archive includes more than thirty years of historical imagery and scientific datasets, updated and expanded daily; it contains over twenty petabytes of geospatial data instantly available for analysis. The Earth Engine API is available in Python and JavaScript, making it easy to harness the power of Google's cloud for your own geospatial analysis, and a web-based code editor supports fast, interactive algorithm development with instant access to petabytes of data. Scientists and non-profits use Earth Engine for remote sensing research, predicting disease outbreaks, natural resource management, and more. "Google Earth Engine has made it possible for the first time in history to rapidly and accurately process vast amounts of satellite imagery, identifying where and when tree cover change has occurred at high resolution. Global Forest Watch would not exist without it. For those who care about the future of the planet Google Earth Engine is a great blessing!" - Dr. Andrew Steer, President and CEO of the World Resources Institute.
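The per-pixel analyses Earth Engine runs at planetary scale can be illustrated locally. The sketch below computes NDVI, a standard vegetation index defined as (NIR - Red) / (NIR + Red), over tiny sample rasters; the reflectance values are invented, and Earth Engine itself would express this through its Python or JavaScript API over real imagery.

```python
# Hedged sketch: NDVI = (NIR - Red) / (NIR + Red), the kind of per-pixel
# computation Earth Engine parallelizes over petabytes of imagery.
# The 2x3 reflectance rasters below are invented sample values in [0, 1].
nir = [[0.60, 0.55, 0.10],
       [0.48, 0.52, 0.05]]
red = [[0.10, 0.12, 0.08],
       [0.15, 0.11, 0.05]]

def ndvi(nir_band, red_band):
    """Compute NDVI per pixel; 0 where both bands are zero."""
    out = []
    for nir_row, red_row in zip(nir_band, red_band):
        out.append([
            (n - r) / (n + r) if (n + r) else 0.0
            for n, r in zip(nir_row, red_row)
        ])
    return out

index = ndvi(nir, red)  # values near 1 indicate dense vegetation
```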
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset and the validation are fully described in a Nature Scientific Data Descriptor https://www.nature.com/articles/s41597-019-0265-5
If you want to use this dataset in an interactive environment, then use this link https://mybinder.org/v2/gh/GeographerAtLarge/TravelTime/HEAD
The following text is a summary of the information in the above Data Descriptor.
The dataset is a suite of global travel-time accessibility indicators for the year 2015, at approximately one-kilometre spatial resolution for the entire globe. The indicators show an estimated (and validated) land-based travel time to the nearest city and nearest port for a range of city and port sizes.
The datasets are in GeoTIFF format and are suitable for use in Geographic Information Systems and statistical packages for mapping access to cities and ports and for spatial and statistical analysis of the inequalities in access by different segments of the population.
These maps represent a unique global representation of physical access to essential services offered by cities and ports.
The datasets are:
travel_time_to_cities_x.tif (x ranges from 1 to 12). The value of each pixel is the estimated travel time in minutes to the nearest urban area in 2015. There are 12 data layers based on different sets of urban areas, defined by their population in year 2015 (see the PDF report).
travel_time_to_ports_x.tif (x ranges from 1 to 5). The value of each pixel is the estimated travel time in minutes to the nearest port in 2015. There are 5 data layers based on different port sizes.
Format: Raster dataset, GeoTIFF, LZW-compressed
Unit: minutes
Data type: 16-bit unsigned integer (UInt16)
No-data value: 65535
Flags: none
Spatial resolution: 30 arc seconds
Spatial extent: upper left -180, 85; lower left -180, -60; upper right 180, 85; lower right 180, -60
Spatial Reference System (SRS): EPSG:4326 - WGS84 - Geographic Coordinate System (lat/long)
Temporal resolution: single year (2015)
Temporal extent: 2015. Updates may follow for future years, but these are dependent on the availability of updated inputs on travel times and city locations and populations.
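When computing statistics from these layers, the 65535 no-data value must be excluded first. The sketch below shows that masking step on a tiny invented pixel grid; a real GeoTIFF would be read with a library such as rasterio or GDAL.

```python
# Hedged sketch: masking the 65535 no-data value in a travel-time raster
# before computing statistics. The 2x3 pixel grid is invented; a real
# GeoTIFF would be read with a library such as rasterio or GDAL.
NODATA = 65535

pixels = [[12, 45, NODATA],
          [130, NODATA, 7]]

valid = [v for row in pixels for v in row if v != NODATA]
mean_minutes = sum(valid) / len(valid)  # mean travel time over valid pixels
```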
Methodology: Travel time to the nearest city or port was estimated using an accumulated cost function (accCost) in the gdistance R package (van Etten, 2018). This function requires two input datasets: (i) a set of locations to estimate travel time to and (ii) a transition matrix that represents the cost or time to travel across a surface.
The set of locations were based on populated urban areas in the 2016 version of the Joint Research Centre’s Global Human Settlement Layers (GHSL) datasets (Pesaresi and Freire, 2016) that represent low density (LDC) urban clusters and high density (HDC) urban areas (https://ghsl.jrc.ec.europa.eu/datasets.php). These urban areas were represented by points, spaced at 1km distance around the perimeter of each urban area.
Marine ports were extracted from the 26th edition of the World Port Index (NGA, 2017), which contains the location and physical characteristics of approximately 3,700 major ports and terminals. Ports are represented as single points.
The transition matrix was based on the friction surface (https://map.ox.ac.uk/research-project/accessibility_to_cities) from the 2015 global accessibility map (Weiss et al, 2018).
Code The R code used to generate the 12 travel time maps is included in the zip file that can be downloaded with these data layers. The processing zones are also available.
Validation: The underlying friction surface was validated by comparing travel times between 47,893 pairs of locations against journey times from a Google API. Our estimated journey times were generally shorter than those from the Google API. Across the tiles, the median journey time from our estimates was 88 minutes within an interquartile range of 48 to 143 minutes while the median journey time estimated by the Google API was 106 minutes within an interquartile range of 61 to 167 minutes. Across all tiles, the differences were skewed to the left and our travel time estimates were shorter than those reported by the Google API in 72% of the tiles. The median difference was −13.7 minutes within an interquartile range of −35.5 to 2.0 minutes while the absolute difference was 30 minutes or less for 60% of the tiles and 60 minutes or less for 80% of the tiles. The median percentage difference was −16.9% within an interquartile range of −30.6% to 2.7% while the absolute percentage difference was 20% or less in 43% of the tiles and 40% or less in 80% of the tiles.
This process and results are included in the validation zip file.
Usage Notes: The accessibility layers can be visualised and analysed in many Geographic Information Systems or remote sensing software such as QGIS, GRASS, ENVI, ERDAS or ArcMap, and also by statistical and modelling packages such as R or MATLAB. They can also be used in cloud-based tools for geospatial analysis such as Google Earth Engine.
The travel-time-to-cities layers represent travel times to human settlements of different population ranges. Two or more layers can be combined into one layer by recording the minimum pixel value across the layers. For example, a map of travel time to the nearest settlement of 5,000 to 50,000 people could be generated by taking the minimum of the three layers that represent the travel time to settlements with populations between 5,000 and 10,000, 10,000 and 20,000 and, 20,000 and 50,000 people.
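The minimum-combining rule described above can be sketched with plain Python lists, using tiny invented rasters in place of the real GeoTIFF layers:

```python
# Hedged sketch: combining travel-time layers by taking the minimum pixel
# value across them, e.g. merging the 5k-10k, 10k-20k and 20k-50k layers
# into a single 5k-50k layer. The 2x2 rasters are invented sample values.
layer_5k_10k = [[30, 90], [45, 200]]
layer_10k_20k = [[50, 60], [40, 180]]
layer_20k_50k = [[25, 120], [70, 150]]

def min_combine(*layers):
    """Per-pixel minimum across equally sized raster layers."""
    return [
        [min(vals) for vals in zip(*rows)]
        for rows in zip(*layers)
    ]

layer_5k_50k = min_combine(layer_5k_10k, layer_10k_20k, layer_20k_50k)
```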
The accessibility layers also permit user-defined hierarchies that go beyond computing the minimum pixel value across layers. A user-defined complete hierarchy can be generated when the union of all categories adds up to the global population, and the intersection of any two categories is empty. Everything else is up to the user in terms of logical consistency with the problem at hand.
The accessibility layers are relative measures of the ease of access from a given location to the nearest target. While the validation demonstrates that they do correspond to typical journey times, they cannot be taken to represent actual travel times. Errors in the friction surface will be accumulated as part of the accumulated cost function, and it is likely that locations further away from targets will have a greater divergence from a plausible travel time than those closer to the targets. Care should be taken when referring to travel time to the larger cities when the locations of interest are extremely remote, although they will still be plausible representations of relative accessibility. Furthermore, a key assumption of the model is that all journeys will use the fastest mode of transport and take the shortest path.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This archive reproduces a figure titled "Figure 3.2 Boone County population distribution" from Wang and vom Hofe (2007, p. 60). The archive provides a Jupyter Notebook that uses Python and can be run in Google Colaboratory. The workflow uses the Census API to retrieve data, reproduce the figure, and ensure reproducibility for anyone accessing this archive.
The Python code was developed in Google Colaboratory (Google Colab for short), an Integrated Development Environment (IDE) for JupyterLab that streamlines package installation, code collaboration, and management. The Census API is used to obtain population counts from the 2000 Decennial Census (Summary File 1, 100% data). Shapefiles are downloaded from the TIGER/Line FTP server. All downloaded data are maintained in the notebook's temporary working directory while in use. The data and shapefiles are stored separately with this archive. The final map is also stored as an HTML file.
The notebook features extensive explanations, comments, code snippets, and code output. It can be viewed as a PDF or downloaded and opened in Google Colab. References to external resources are also provided for the various functional components. The notebook features code that performs the following functions:
- install/import necessary Python packages
- download the Census Tract shapefile from the TIGER/Line FTP server
- download Census data via the Census API
- manipulate Census tabular data
- merge Census data with the TIGER/Line shapefile
- apply a coordinate reference system
- calculate land area and population density
- map and export the map to HTML
- export the map to an ESRI shapefile
- export the table to CSV
The notebook can be modified to perform the same operations for any county in the United States by changing the state and county FIPS code parameters for the TIGER/Line shapefile and Census API downloads.
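The Census API step can be sketched as a URL builder, assuming the 2000 SF1 endpoint and the commonly used total-population variable P001001; the state/county FIPS codes 21/015 (Boone County, Kentucky) stand in for the parameters the archive says to change.

```python
# Hedged sketch: building a Census API request for 2000 Decennial Census
# (Summary File 1) tract-level population counts. P001001 (total
# population) and the FIPS codes 21/015 are illustrative assumptions.
from urllib.parse import urlencode

def census_tract_url(state_fips, county_fips, variables=("P001001",)):
    """Construct a 2000 SF1 API URL for all tracts in one county."""
    base = "https://api.census.gov/data/2000/dec/sf1"
    query = urlencode({
        "get": ",".join(("NAME",) + tuple(variables)),
        "for": "tract:*",
        "in": f"state:{state_fips} county:{county_fips}",
    })
    return f"{base}?{query}"

url = census_tract_url("21", "015")
```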
The notebook can be adapted for use in other environments (e.g., Jupyter Notebook) and for reading and writing files to a local, shared, or cloud drive (e.g., Google Drive).
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Documented March 19, 2023
!!NEW!!!
GeoDAR reservoirs were registered to the drainage network! Please see the auxiliary data "GeoDAR-TopoCat" at https://zenodo.org/records/7750736. "GeoDAR-TopoCat" contains the drainage topology (reaches and upstream/downstream relationships) and catchment boundary for each reservoir in GeoDAR, based on the algorithm used for Lake-TopoCat (doi:10.5194/essd-15-3483-2023).
Documented April 1, 2022
Citation
Wang, J., Walter, B. A., Yao, F., Song, C., Ding, M., Maroof, A. S., Zhu, J., Fan, C., McAlister, J. M., Sikder, M. S., Sheng, Y., Allen, G. H., Crétaux, J.-F., and Wada, Y.: GeoDAR: georeferenced global dams and reservoirs database for bridging attributes and geolocations. Earth System Science Data, 14, 1869–1899, 2022, https://doi.org/10.5194/essd-14-1869-2022.
Please cite the reference above (which was fully peer-reviewed), NOT the preprint version. Thank you.
Contact
Dr. Jida Wang, jidawang@ksu.edu, gdbruins@ucla.edu
Data description and components
Data folder “GeoDAR_v10_v11” (.zip) contains two consecutive, peer-reviewed versions (v1.0 and v1.1) of the Georeferenced global Dams And Reservoirs (GeoDAR) dataset:
As by-products of GeoDAR harmonization, folder “GeoDAR_v10_v11” also contains:
Attribute description

v1.0 dams (file name: GeoDAR_v10_dams; format: comma-separated values (csv) and point shapefile)
- id_v10: Dam ID for GeoDAR version 1.0 (type: integer). Note this is not the same as the International Code in ICOLD WRD but is linked to the International Code via encryption.
- lat: Latitude of the dam point in decimal degrees (type: float) based on datum World Geodetic System (WGS) 1984.
- lon: Longitude of the dam point in decimal degrees (type: float) on WGS 1984.
- geo_mtd: Georeferencing method (type: text). Unique values include “geo-matching CanVec”, “geo-matching LRD”, “geo-matching MARS”, “geo-matching NID”, “geo-matching ODC”, “geo-matching ODM”, “geo-matching RSB”, “geocoding (Google Maps)”, and “Wada et al. (2017)”. Refer to Table 2 in Wang et al. (2022) for abbreviations.
- qa_rank: Quality assurance (QA) ranking (type: text). Unique values include “M1”, “M2”, “M3”, “C1”, “C2”, “C3”, “C4”, and “C5”. The QA ranking provides a general measure for our georeferencing quality. Refer to Supplementary Tables S1 and S3 in Wang et al. (2022) for more explanation.
- rv_mcm: Reservoir storage capacity in million cubic meters (type: float). Values are only available for large dams in Wada et al. (2017). Capacity values of other WRD records are not released due to ICOLD’s proprietary restriction. Also see Table S4 in Wang et al. (2022).
- val_scn: Validation result (type: text). Unique values include “correct”, “register”, “mismatch”, “misplacement”, and “Google Maps”. Refer to Table 4 in Wang et al. (2022) for explanation.
- val_src: Primary validation source (type: text). Values include “CanVec”, “Google Maps”, “JDF”, “LRD”, “MARS”, “NID”, “NPCGIS”, “NRLD”, “ODC”, “ODM”, “RSB”, and “Wada et al. (2017)”. Refer to Table 2 in Wang et al. (2022) for abbreviations.
- qc: Roles and name initials of co-authors/participants during data quality control (QC) and validation. Name initials are given to each assigned dam or region and are listed generally in chronological order for each role. Collation and harmonization of large dams in Wada et al. (2017) (see Table S4 in Wang et al. (2022)) were performed by JW, and this information is not repeated in the qc attribute for a reduced file size. Although we tried to track the name initials thoroughly, the lists may not always be exhaustive, and other undocumented adjustments and corrections were most likely performed by JW.

v1.1 dams (file name: GeoDAR_v11_dams; format: comma-separated values (csv) and point shapefile)
- id_v11: Dam ID for GeoDAR version 1.1 (type: integer). Note this is not the same as the International Code in ICOLD WRD but is linked to the International Code via encryption.
- id_v10: v1.0 ID of this dam/reservoir (as in id_v10) if it is also included in v1.0 (type: integer).
- id_grd_v13: GRanD ID of this dam if also included in GRanD v1.3 (type: integer).
- lat: Latitude of the dam point in decimal degrees (type: float) on WGS 1984. Value may differ from that in v1.0.
- lon: Longitude of the dam point in decimal degrees (type: float) on WGS 1984. Value may differ from that in v1.0.
- geo_mtd: Same as the value of geo_mtd in v1.0 if this dam is included in v1.0.
- qa_rank: Same as the value of qa_rank in v1.0 if this dam is included in v1.0.
- val_scn: Same as the value of val_scn in v1.0 if this dam is included in v1.0.
- val_src: Same as the value of val_src in v1.0 if this dam is included in v1.0.
- rv_mcm_v10: Same as the value of rv_mcm in v1.0 if this dam is included in v1.0.
- rv_mcm_v11: Reservoir storage capacity in million cubic meters (type: float). Due to ICOLD’s proprietary restriction, provided values are limited to dams in Wada et al. (2017) and GRanD v1.3. If a dam is in both Wada et al. (2017) and GRanD v1.3, the value from the latter (if valid) takes precedence.
- har_src: Source(s) used to harmonize the dam points. Unique values include “GeoDAR v1.0 alone”, “GRanD v1.3 and GeoDAR 1.0”, “GRanD v1.3 and other ICOLD”, and “GRanD v1.3 alone”. Refer to Table 1 in Wang et al. (2022) for more details.
- pnt_src: Source(s) of the dam point spatial coordinates. Unique values include “GeoDAR v1.0”, “original
https://www.datainsightsmarket.com/privacy-policy
The Google Maps Platform (GMP) consulting services market is experiencing robust growth, driven by the increasing adoption of location-based services across various sectors. The market, estimated at $2 billion in 2025, is projected to exhibit a Compound Annual Growth Rate (CAGR) of 15% from 2025 to 2033, reaching approximately $6 billion by 2033.

This expansion is fueled by several key factors. Firstly, the rising demand for location intelligence and data-driven decision-making across large enterprises and SMEs is pushing companies to leverage GMP's capabilities. Secondly, the shift towards online services, facilitated by the increasing accessibility and affordability of high-speed internet, is bolstering the adoption of GMP consulting services for efficient mapping, navigation, and location-based marketing. Furthermore, advancements in augmented reality (AR) and virtual reality (VR) technologies integrated with GMP are creating new avenues for innovative applications, driving market growth. However, factors like the high cost of implementation and the need for specialized expertise can restrain market expansion.

The market is segmented by application (large enterprises and SMEs) and service type (online and offline), with large enterprises currently dominating due to their greater resources and need for complex location-based solutions. Geographically, North America and Europe currently hold significant market shares, but the Asia-Pacific region is anticipated to exhibit the fastest growth rate due to rapid digitalization and increasing smartphone penetration. The competitive landscape is fragmented, with a mix of global consulting giants like Deloitte, Accenture, and WPP, alongside specialized GMP consulting firms such as MapsPeople and Applied Geographics. These companies are engaged in fierce competition, offering a range of services including integration, customization, application development, and ongoing support.
The success of these firms is contingent on their ability to provide tailored solutions that cater to the unique needs of diverse industries and clients, and to continuously adapt to the ever-evolving features and functionalities of the GMP. A critical factor for future growth will be the ability to integrate GMP with other platforms and technologies to create holistic and effective solutions for clients, generating a compelling return on investment. This necessitates significant investment in R&D and upskilling of the workforce.
Data Driven Detroit created the data by selecting locations from NETS and ESRI business data with the proper NAICS codes, then adding and deleting entries through local knowledge and confirmation with Google Street View. These locations are grocery stores that primarily sell food; convenience stores are excluded. Visual confirmation cues included the word "grocery" in the name or the presence of shopping carts.
https://www.datainsightsmarket.com/privacy-policy
The custom digital map service market is experiencing robust growth, driven by the increasing demand for location-based services across diverse sectors. The market, estimated at $8 billion in 2025, is projected to expand significantly over the forecast period (2025-2033), fueled by a compound annual growth rate (CAGR) of approximately 15%.

This expansion is largely attributed to several key drivers. Firstly, the automotive industry's reliance on advanced navigation and driver-assistance systems is a major catalyst. Secondly, the burgeoning location services sector, encompassing ride-sharing, delivery services, and location-based advertising, fuels considerable demand for customized map solutions. Further propelling growth is the rise of business analytics, where customized maps provide invaluable insights into spatial data, optimizing logistics, resource management, and market analysis. Finally, the ongoing development of real-time map data technologies, offering dynamic updates and high accuracy, significantly enhances the value proposition of these services. While data security and privacy concerns pose some challenges, the overall market outlook remains positive.

The market segmentation reveals a strong emphasis on custom map solutions, reflecting a growing need for tailored cartographic representations catering to specific business requirements. Real-time map data is another key segment, capitalizing on the demand for dynamic and up-to-date location information. Geographic distribution shows North America and Europe as leading markets, with significant growth potential in the Asia-Pacific region, particularly in rapidly developing economies like China and India. Key players in the market, including Google, TomTom, Mapbox, and others, are actively investing in research and development, pushing technological boundaries and expanding their service portfolios to maintain a competitive edge.
The ongoing evolution of map technologies, including improvements in data accuracy, integration with AI/ML, and expanding functionalities like 3D mapping and augmented reality overlays, will further shape the market landscape in the coming years.
http://reference.data.gov.uk/id/open-government-licence
OpenStreetMap (openstreetmap.org) is a global collaborative mapping project, which offers maps and map data released with an open license, encouraging free re-use and re-distribution. The data is created by a large community of volunteers who use a variety of simple on-the-ground surveying techniques, and wiki-style editing tools to collaborate as they create the maps, in a process which is open to everyone. The project originated in London, and an active community of mappers and developers are based here. Mapping work in London is ongoing (and you can help!) but the coverage is already good enough for many uses.
Browse the map of London on OpenStreetMap.org
The whole of England updated daily:
For more details of downloads available from OpenStreetMap, including downloading the whole planet, see 'planet.osm' on the wiki.
Download small areas of the map by bounding-box. For example this URL requests the data around Trafalgar Square:
http://api.openstreetmap.org/api/0.6/map?bbox=-0.13062,51.5065,-0.12557,51.50969
Data filtered by "tag". For example this URL returns all elements in London tagged shop=supermarket:
http://www.informationfreeway.org/api/0.6/*[shop=supermarket][bbox=-0.48,51.30,0.21,51.70]
The format of the data is a raw XML representation of all the elements making up the map. OpenStreetMap is composed of interconnected "nodes" and "ways" (and sometimes "relations"), each with a set of name=value pairs called "tags". These classify and describe properties of the elements, and ultimately influence how they get drawn on the map. To understand more about tags, and different ways of working with this data format, refer to the following pages on the OpenStreetMap wiki.
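The bounding-box call and the nodes/ways/tags structure above can be sketched in a few lines of Python. The XML sample here is a made-up miniature of an API 0.6 response (not fetched live), so the element IDs and tag values are illustrative only.

```python
import xml.etree.ElementTree as ET

# Build a 0.6 map-call URL for the Trafalgar Square bounding box shown above
bbox = (-0.13062, 51.5065, -0.12557, 51.50969)  # left, bottom, right, top
url = "http://api.openstreetmap.org/api/0.6/map?bbox=" + ",".join(
    str(v) for v in bbox)

# A made-up miniature of the XML the API returns: nodes, ways, and tags
sample_xml = """<osm version="0.6">
  <node id="1" lat="51.5073" lon="-0.1281">
    <tag k="name" v="Trafalgar Square"/>
    <tag k="tourism" v="attraction"/>
  </node>
  <way id="2">
    <nd ref="1"/>
    <tag k="highway" v="pedestrian"/>
  </way>
</osm>"""

root = ET.fromstring(sample_xml)

# Collect the name=value tag pairs for each node
node_tags = {
    n.get("id"): {t.get("k"): t.get("v") for t in n.findall("tag")}
    for n in root.findall("node")
}
```

The same parsing approach applies to real responses fetched from the URL above, or to the tag-filtered queries shown earlier.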
Rather than working with raw map data, you may prefer to embed maps from OpenStreetMap on your website with a simple bit of JavaScript. You can also present overlays of other data, in a manner very similar to working with Google Maps. In fact you can even use the Google Maps API to do this. See OSM on your own website for details and links to various JavaScript map libraries.
The OpenStreetMap project aims to attract large numbers of contributors who all chip in a little bit to help build the map. Although the map editing tools take a little while to learn, they are designed to be as simple as possible, so that everyone can get involved. This project offers an exciting means of allowing local London communities to take ownership of their part of the map.
Read about how to Get Involved and see the London page for details of OpenStreetMap community events.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Journey Planner APIs offer data on routing, geocoding, map data, and vehicle locations through several APIs.
The use of the Digitransit production APIs requires registration and the use of API keys. More information and instructions for obtaining an API key are available at digitransit.fi.
Further information: digitransit.fi
The public transport network data is updated daily from HSL's public transport register to the journey planner APIs. The OpenStreetMap data is also updated daily to the journey planner for routing graph, geocoding and background map purposes. Due to different cache-settings, the changes in the background map become visible to the users in about one week's time.
Data from HSL's public transport register is exported into GTFS format every day. This data collection includes all routes that have a valid timetable, and covers three months onwards from the moment the collection was generated.
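The three-month validity window described above can be sketched as a simple overlap check on GTFS calendar entries. The rows below are hypothetical minimal dicts, not the real HSL feed, and the 90-day window is an approximation of "three months".

```python
from datetime import date, timedelta

# Approximate the "three months onwards" window from the generation date
generated = date(2024, 1, 15)
window_end = generated + timedelta(days=90)

# Hypothetical GTFS calendar rows (service validity periods)
calendar = [
    {"service_id": "wk",  "start_date": date(2024, 1, 1), "end_date": date(2024, 3, 30)},
    {"service_id": "old", "start_date": date(2023, 6, 1), "end_date": date(2023, 12, 31)},
]

def overlaps(row):
    """True if the service period overlaps [generated, window_end]."""
    return row["start_date"] <= window_end and row["end_date"] >= generated

valid = [r["service_id"] for r in calendar if overlaps(r)]
```

Only services whose validity period intersects the window survive, matching the rule that the collection includes all routes with a currently valid timetable.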
Information about cancelled services is available through the Journey Planner APIs mentioned above, for example the routing API's disruption-info and the realtime API's service-alerts.
RxNorm was created by the U.S. National Library of Medicine (NLM) to provide a normalized naming system for clinical drugs, defined as the combination of {ingredient + strength + dose form}. In addition to the naming system, the RxNorm dataset also provides structured information such as brand names, ingredients, drug classes, and so on, for each clinical drug. Typical uses of RxNorm include navigating between names and codes among different drug vocabularies and using information in RxNorm to assist with health information exchange/medication reconciliation, e-prescribing, drug analytics, formulary development, and other functions. This public dataset includes multiple data files originally released in RxNorm Rich Release Format (RXNRRF) that are loaded into BigQuery tables. The data is updated and archived on a monthly basis. Even though the RxNorm data gives researchers and informaticists the ability to get structured information such as brand names, ingredients, and so on from the data, the mapping (of more than 200 paths) between RxNorm entities can be challenging. The National Library of Medicine provides RxNav, a web interface and a series of APIs, to assist researchers; to map entities, you follow these default paths. The healthcare team at Google has replicated the mapping between RxNorm entities to assist researchers interested in doing full table joins using RxNorm. This dataset includes a single, customized table, rxn_all_pathways, that has undergone significant pre-processing with the goal of replicating all of the RxNav pathways.
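A pathway lookup of the kind rxn_all_pathways supports can be sketched over a handful of rows. The column names (source_rxcui, source_tty, target_rxcui, target_name, target_tty) and the RXCUI values below are assumptions for illustration; check the actual table schema in BigQuery before relying on them.

```python
# Hypothetical rxn_all_pathways-style rows: ingredient (IN) -> brand name (BN)
# and ingredient -> semantic clinical drug (SCD). Values are illustrative only.
rows = [
    {"source_rxcui": "1191", "source_name": "aspirin", "source_tty": "IN",
     "target_rxcui": "215568", "target_name": "Bayer Aspirin", "target_tty": "BN"},
    {"source_rxcui": "1191", "source_name": "aspirin", "source_tty": "IN",
     "target_rxcui": "1537020", "target_name": "aspirin 81 MG Oral Tablet",
     "target_tty": "SCD"},
]

def targets(rxcui, tty):
    """All target names of a given term type reachable from a source RXCUI."""
    return [r["target_name"] for r in rows
            if r["source_rxcui"] == rxcui and r["target_tty"] == tty]

# Brand names reachable from the ingredient's RXCUI
brands = targets("1191", "BN")
```

In BigQuery the equivalent is a filtered SELECT on rxn_all_pathways, or a join of the table against itself to chain pathways.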
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Dear Scientist!

This database contains data collected in the course of the study "Analysis of the route safety of abnormal vehicle from the perspective of traffic parameters and infrastructure characteristics with the use of web technologies and machine learning", funded by the National Science Centre Poland (Grant reference 2021/05/X/ST8/01669). The structure of the files arises from the aims of the study and the numerous sources needed to tailor data suitable for use as an input layer for a neural network. You will find the following folders and files:

1. Road_Parameters_Data (.csv) - data collected by the author before the study (2021). Here you can find information about the technical quality and types of main roads located in the Mazovia Province (Poland). The source of the data was the Polish General Directorate for National Roads and Motorways.
2. Google_Maps_Data (.json) - data collected using the authors' web service written in Python, which downloaded the data via the Distance Matrix API service of Google Maps at two-hour intervals from 25 May 2022 to 22 June 2022. The application retrieved the TRAFFIC FACTOR parameter, the ratio of the actual travel time to the historical travel time for particular roads.
3. Geocoding_Roads_Data (.json) - data obtained through a reverse geocoding approach based on geographical coordinates, using the request parameter latlng. Google Maps returned a response containing the postal code for the fields of type postal_code and the name of the lowest possible level of territorial unit for the field administrative_area_level.
4. Population_Density_Data (.csv) - data for the territorial units assigned to individual records, obtained by searching the database of the Polish Postal Service using the authors' original web service written in Python. Records containing a postal code were assigned the name of the municipality corresponding to it. Finally, postal codes and names of territorial units were compared with the database of Statistics Poland (GUS) containing information on population density for individual municipalities and assigned to the existing records.
5. Roads_Incidents_Data (.json) - data collected by a web service, programmed in Python, that analysed the reported obstructions available on the website of the General Directorate for National Roads and Motorways. When a traffic obstruction emerged in the Mazovia Province, the application could later associate it with the appropriate records, based on the number and kilometre of the road on which it occurred and the links parameters. The data was collected from 26 May to 22 June 2022.
6. Weather_For_Roads_Data (.json) - data concerning the weather conditions on the roads on the days of the study. A web service, programmed in Python, retrieved selected items from the responses returned by the www.timeanddate.com server for the corresponding input parameters - the geographical coordinates of the midpoint between the nodes of each road. The data was collected for the days between 27 May and 22 June 2022.
7. data_v_1 (.csv) - road parameters only
8. data_v_2 (.csv) - road parameters + population density
9. data_v_3 (.json) - road parameters + population density + traffic
10. data_v_4 (.json) - road parameters + population density + traffic + weather + road incidents
11. data_v_5 (.csv) - VALIDATED and cleaned data for road parameters + population density + traffic + weather + road incidents.

At this stage, the road sections for which the traffic factor parameter was judged to have been estimated incorrectly were eliminated. These were combinations for which the value of the traffic factor remained the same regardless of the time of day, or which took only a few repeated values during the course of the whole study. Moreover, it was assumed that in the final database, traffic factor values above 1.2 should constitute at least 10% of all results for a road section; thus, sections with no tendency to become congested, characterised by a small number of road traffic users, were eliminated.

Good luck with your research!
Igor Betkier, PhD
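The validation rules described above can be sketched as a per-section filter. The series below are synthetic, and the interpretation that the 1.2 threshold applies to the share of congested observations is an assumption based on the surrounding description.

```python
# Synthetic traffic factor time series per road section (illustrative values)
sections = {
    "S1": [1.0, 1.0, 1.0, 1.0],             # constant -> estimated incorrectly
    "S2": [1.0, 1.1, 1.3, 1.4, 1.0, 1.5],   # congests often enough -> keep
    "S3": [1.0, 1.05, 1.1, 1.0, 1.02, 1.0], # never congested -> drop
}

def keep(series, threshold=1.2, min_share=0.10):
    """Apply both validation rules to one section's traffic factor series."""
    if len(set(series)) <= 1:  # constant regardless of the time of day
        return False
    congested = sum(1 for v in series if v >= threshold)
    return congested / len(series) >= min_share  # congestion-share rule

valid_sections = [name for name, series in sections.items() if keep(series)]
```

A stricter variant could also drop series that take only a few distinct values over the whole study, which the authors mention as a second symptom of incorrect estimation.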
This dataset provides comprehensive access to local and online events data through Google Events in real-time. It enables searching for various types of events including concerts, sports matches, workshops, festivals, movies, and more. Users can access detailed event information including locations, dates, ticket information, and venue details. Perfect for event discovery applications, ticket platforms, and local activity recommendations. The dataset is delivered in a JSON format via REST API.
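Consuming the JSON payload described above might look like the following. The field names and structure are illustrative assumptions (the provider's actual schema is not documented here), and the payload is inlined rather than fetched from the REST API.

```python
import json

# Hypothetical example of the JSON events payload; field names are assumptions.
payload = json.loads("""{
  "events": [
    {"title": "Jazz Night", "date": "2024-06-01",
     "venue": {"name": "Blue Hall", "city": "Berlin"},
     "tickets": [{"source": "example.com", "link": "https://example.com/t/1"}]},
    {"title": "City Marathon", "date": "2024-06-02",
     "venue": {"name": "Main Square", "city": "Berlin"},
     "tickets": []}
  ]
}""")

# Keep only events that currently have ticket links available
with_tickets = [e["title"] for e in payload["events"] if e.get("tickets")]
```

For a real integration, the same filtering would run over the JSON returned by the provider's REST endpoint once the documented schema is known.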