https://www.nist.gov/open/license
SDNist (v1.3) is a set of benchmark data and metrics for evaluating synthetic data generators on structured tabular data. This version (1.3) reproduces the challenge environment from Sprints 2 and 3 of the Temporal Map Challenge. The benchmarks are distributed as a simple open-source Python package to allow standardized and reproducible comparison of synthetic generator models on real-world data and use cases. These data and metrics were developed for, and vetted through, the NIST PSCR Differential Privacy Temporal Map Challenge, where the evaluation tools, k-marginal and Higher Order Conjunction, proved effective at distinguishing competing models in the competition environment.
SDNist is available via pip (pip install sdnist==1.2.8) for Python >= 3.6, or on the USNIST/Github repository.
The sdnist Python module will download data from NIST as necessary, and users are not required to download data manually.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A global analysis of accessibility to high-density urban centres, measured as travel time, at a resolution of 1×1 kilometre for 2015.

To model the time required for individuals to reach their most accessible city, we first quantified the speed at which humans move through the landscape. The principle underlying this work was that every area on Earth, represented as a pixel within a 2D grid, has a cost (that is, a time) associated with moving through it, which we quantified as a movement speed within a cost or ‘friction’ surface. We then applied a least-cost-path algorithm to the friction surface in relation to a set of high-density urban points. The algorithm calculated pixel-level travel times for the optimal path between each pixel and its nearest city (that is, the one with the shortest journey time).

From this work we ultimately produced two products: (a) an accessibility map showing travel time to urban centres, as cities are proxies for access to many goods and services that affect human wellbeing; and (b) a friction surface that underpins the accessibility map and enables the creation of custom accessibility maps from other point datasets of interest.

The map products are in GeoTIFF format in the EPSG:4326 (WGS84) projection, with a spatial resolution of 30 arc seconds. The accessibility map pixel values represent travel time in minutes; the friction surface pixel values represent the time, in minutes, required to travel one metre. This DANS data record contains these two map products. Issued: 2018-01-10
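The least-cost-path step can be illustrated with a minimal pure-Python, Dijkstra-style cost-distance pass over a toy friction grid. This is an illustrative sketch, not the authors' implementation: the grid values, the 4-connected moves, and the averaged-step cost model are all assumptions made for the example.

```python
import heapq

def travel_time_map(friction, targets):
    """Minimal cost-distance sketch: friction[r][c] is minutes per cell step,
    targets is a set of (row, col) "city" cells. Returns per-cell travel time
    in minutes to the nearest target, moving 4-connected with the step cost
    averaged over the two cells it crosses (an assumed, simplified model)."""
    rows, cols = len(friction), len(friction[0])
    INF = float("inf")
    time = [[INF] * cols for _ in range(rows)]
    heap = [(0.0, r, c) for (r, c) in targets]
    for _, r, c in heap:
        time[r][c] = 0.0          # cities are reached instantly
    heapq.heapify(heap)
    while heap:
        t, r, c = heapq.heappop(heap)
        if t > time[r][c]:
            continue              # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = 0.5 * (friction[r][c] + friction[nr][nc])
                if t + step < time[nr][nc]:
                    time[nr][nc] = t + step
                    heapq.heappush(heap, (t + step, nr, nc))
    return time

# Hypothetical 3x3 friction surface (minutes per cell) with one city cell.
friction = [[1.0, 1.0, 4.0],
            [1.0, 2.0, 4.0],
            [1.0, 1.0, 1.0]]
times = travel_time_map(friction, {(0, 0)})
print(times[0][0])  # 0.0 at the city itself
```

The real analysis used the same idea at global scale: the friction surface supplies the per-pixel cost, and the expanding shortest-path front assigns each pixel the journey time to whichever target it reaches first.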
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the transcriptions of the needs assessment interviews conducted with six young event-goers, as well as the transcriptions of the expert-based think-aloud user testing of the prototype developed for the study.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset and the validation are fully described in a Nature Scientific Data Descriptor https://www.nature.com/articles/s41597-019-0265-5
If you want to use this dataset in an interactive environment, then use this link https://mybinder.org/v2/gh/GeographerAtLarge/TravelTime/HEAD
The following text is a summary of the information in the above Data Descriptor.
The dataset is a suite of global travel-time accessibility indicators for the year 2015, at approximately one-kilometre spatial resolution for the entire globe. The indicators show an estimated (and validated) land-based travel time to the nearest city and nearest port, for a range of city and port sizes.
The datasets are in GeoTIFF format and are suitable for use in Geographic Information Systems and statistical packages for mapping access to cities and ports and for spatial and statistical analysis of the inequalities in access by different segments of the population.
These maps represent a unique global representation of physical access to essential services offered by cities and ports.
The datasets:
travel_time_to_cities_x.tif (x ranges from 1 to 12): the value of each pixel is the estimated travel time in minutes to the nearest urban area in 2015. There are 12 data layers based on different sets of urban areas, defined by their population in the year 2015 (see PDF report).
travel_time_to_ports_x (x ranges from 1 to 5): the value of each pixel is the estimated travel time to the nearest port in 2015. There are 5 data layers based on different port sizes.
Format: Raster dataset, GeoTIFF, LZW-compressed
Unit: minutes
Data type: 16-bit unsigned integer
No-data value: 65535
Flags: none
Spatial resolution: 30 arc seconds
Spatial extent: upper left (-180, 85), upper right (180, 85), lower left (-180, -60), lower right (180, -60)
Spatial reference system (SRS): EPSG:4326 - WGS84 - geographic coordinate system (lat/long)
Temporal resolution: 2015
Temporal extent: updates may follow for future years, but these depend on the availability of updated inputs on travel times and city locations and populations.
Methodology Travel time to the nearest city or port was estimated using an accumulated cost function (accCost) in the gdistance R package (van Etten, 2018). This function requires two input datasets: (i) a set of locations to estimate travel time to and (ii) a transition matrix that represents the cost or time to travel across a surface.
The set of locations were based on populated urban areas in the 2016 version of the Joint Research Centre’s Global Human Settlement Layers (GHSL) datasets (Pesaresi and Freire, 2016) that represent low density (LDC) urban clusters and high density (HDC) urban areas (https://ghsl.jrc.ec.europa.eu/datasets.php). These urban areas were represented by points, spaced at 1km distance around the perimeter of each urban area.
Marine ports were extracted from the 26th edition of the World Port Index (NGA, 2017), which contains the location and physical characteristics of approximately 3,700 major ports and terminals. Ports are represented as single points.
The transition matrix was based on the friction surface (https://map.ox.ac.uk/research-project/accessibility_to_cities) from the 2015 global accessibility map (Weiss et al, 2018).
Code The R code used to generate the 12 travel time maps is included in the zip file that can be downloaded with these data layers. The processing zones are also available.
Validation The underlying friction surface was validated by comparing travel times between 47,893 pairs of locations against journey times from a Google API. Our estimated journey times were generally shorter than those from the Google API. Across the tiles, the median journey time from our estimates was 88 minutes, with an interquartile range of 48 to 143 minutes, while the median journey time estimated by the Google API was 106 minutes, with an interquartile range of 61 to 167 minutes. Across all tiles, the differences were skewed to the left, and our travel time estimates were shorter than those reported by the Google API in 72% of the tiles. The median difference was −13.7 minutes, with an interquartile range of −35.5 to 2.0 minutes; the absolute difference was 30 minutes or less for 60% of the tiles and 60 minutes or less for 80% of the tiles. The median percentage difference was −16.9%, with an interquartile range of −30.6% to 2.7%; the absolute percentage difference was 20% or less in 43% of the tiles and 40% or less in 80% of the tiles.
This process and results are included in the validation zip file.
Usage Notes The accessibility layers can be visualised and analysed in many Geographic Information Systems or remote sensing software such as QGIS, GRASS, ENVI, ERDAS or ArcMap, and also by statistical and modelling packages such as R or MATLAB. They can also be used in cloud-based tools for geospatial analysis such as Google Earth Engine.
The nine layers represent travel times to human settlements of different population ranges. Two or more layers can be combined into one layer by recording the minimum pixel value across the layers. For example, a map of travel time to the nearest settlement of 5,000 to 50,000 people could be generated by taking the minimum of the three layers that represent travel time to settlements with populations of 5,000 to 10,000, 10,000 to 20,000, and 20,000 to 50,000 people.
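The pixel-wise minimum described above can be sketched in a few lines; the toy 2×2 arrays below stand in for the real GeoTIFF layers, and in real use the no-data value (65535) would need to be masked before taking the minimum.

```python
def combine_min(*layers):
    """Pixel-wise minimum across 2D travel-time layers of equal shape.
    (Toy version: real layers would first have no-data pixels masked.)"""
    return [[min(vals) for vals in zip(*rows)] for rows in zip(*layers)]

# Hypothetical 2x2 travel-time layers (minutes) for three population bands.
t_5k_10k  = [[30, 90], [120, 45]]
t_10k_20k = [[50, 60], [100, 80]]
t_20k_50k = [[70, 40], [90, 200]]

# Travel time to the nearest settlement of 5,000-50,000 people.
t_5k_50k = combine_min(t_5k_10k, t_10k_20k, t_20k_50k)
print(t_5k_50k)  # [[30, 40], [90, 45]]
```

Each output pixel is simply the fastest of the three candidate journeys, which is exactly what "travel time to the nearest settlement in the combined range" means.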
The accessibility layers also permit user-defined hierarchies that go beyond computing the minimum pixel value across layers. A user-defined complete hierarchy can be generated when the union of all categories adds up to the global population, and the intersection of any two categories is empty. Everything else is up to the user in terms of logical consistency with the problem at hand.
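The completeness condition above (the union of the categories covers the whole population, and no two categories overlap) can be sketched as a check on half-open population bands; the band boundaries used below are illustrative assumptions, not values from the dataset.

```python
def is_complete_hierarchy(bands, total_range=(0, float("inf"))):
    """Check that half-open population bands [lo, hi) are pairwise disjoint
    and that their union covers total_range exactly."""
    bands = sorted(bands)
    lo, hi = total_range
    cursor = lo
    for b_lo, b_hi in bands:
        if b_lo != cursor or b_hi <= b_lo:
            return False  # gap, overlap, or empty band
        cursor = b_hi
    return cursor == hi

# Hypothetical bands mirroring the settlement-size layers.
ok = is_complete_hierarchy([(0, 5000), (5000, 50000), (50000, float("inf"))])
bad = is_complete_hierarchy([(0, 5000), (10000, float("inf"))])  # gap at 5000-10000
print(ok, bad)  # True False
```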
The accessibility layers are relative measures of the ease of access from a given location to the nearest target. While the validation demonstrates that they do correspond to typical journey times, they cannot be taken to represent actual travel times. Errors in the friction surface accumulate through the cost function, so locations further from targets are likely to show a greater divergence from a plausible travel time than those closer to the targets. Care should be taken when referring to travel time to the larger cities when the locations of interest are extremely remote, although the values will still be plausible representations of relative accessibility. Furthermore, a key assumption of the model is that all journeys use the fastest mode of transport and take the shortest path.
Landsat 8's Operational Land Imager collects new imagery for a given location every 16 days. This band combination maximizes light penetration into clear water and approximates a natural-looking image over land. Bands 1 and 2 can penetrate clear, sunlit water to about 30 meters and can identify features in shallow water, depending on the type and color of the features and the water depth. It can be used to quantify suspended sediments in water, map sediment transport paths, and aid dredging programs.

This map is updated on a daily basis, retaining the 4 most recent scenes for each path/row that have cloud coverage < 50%, plus the scene closest to the corresponding GLS 2000 scene. Over time, older or cloudier scenes will be removed from the service. Each scene has attributes such as the acquisition date and estimated cloud cover percentage, which can be seen by clicking on the image. By default the map shows the most recent scenes, but by enabling time animation on the imagery layer it is possible to restrict the displayed scenes to a specific date range. Filters can also be set to restrict and order the scenes based on other attributes.

At scales smaller than 1:1 million, overviews with 300 m resolution are shown. To work with an individual scene at all scales, use the lock raster functionality (Set display order to a list of images in Web Maps). Note that 'Lock Raster' should not be used on the service except for short periods of time, since a new service is created each day and the Object IDs will change.

Band combination: Red (4), Green (3), Coastal (1) into RGB.

Important note: this web map shows imagery from the Landsat 8 Views image service, which is a free service and doesn't need any subscription. Similar services exist for PanSharpened, Panchromatic, and Analytic (full bit depth) imagery. Landsat data can also be accessed at https://landsatlook.usgs.gov/. For more information on Landsat 8 imagery, see https://landsat.usgs.gov/landsat8.php.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This dataset is built for time-series Sentinel-2 cloud detection and is stored in the TensorFlow TFRecord format (see https://www.tensorflow.org/tutorials/load_data/tfrecord).
Each file is compressed in 7z format and can be decompressed using Bandizip or 7-Zip.
Dataset Structure:
Each filename can be split into three parts using underscores: the first part indicates whether the file is designated for training or validation ('train' or 'val'); the second part is the Sentinel-2 tile name; and the last part indicates the number of samples in the file.
Each sample includes:
Here is a demonstration function for parsing the TFRecord file:
import tensorflow as tf

# Build a tf.data.Dataset from a TFRecord file name. Each record is paired
# with its Sentinel-2 tile name, parsed from the file name pattern
# <train|val>_<tile>_<num_samples>.
def parseRecordDirect(fname):
    sep = '/'
    parts = tf.strings.split(fname, sep)
    tn = tf.strings.split(parts[-1], sep='_')[-2]  # tile name
    nn = tf.strings.to_number(
        tf.strings.split(parts[-1], sep='_')[-1], tf.dtypes.int64)  # sample count
    t = tf.data.Dataset.from_tensors(tn).repeat().take(nn)
    t1 = tf.data.TFRecordDataset(fname)
    ds = tf.data.Dataset.zip((t, t1))
    return ds

keys_to_features_direct = {
    'localid': tf.io.FixedLenFeature([], tf.int64, -1),
    'image_raw_ldseries': tf.io.FixedLenFeature((), tf.string, ''),
    'labels': tf.io.FixedLenFeature((), tf.string, ''),
    'dates': tf.io.FixedLenFeature((), tf.string, ''),
    'weights': tf.io.FixedLenFeature((), tf.string, '')
}
# The Decoder (optional). Note: `decoder.Decoder` is assumed to be the
# tensorflow-datasets decoder base class (tfds.decode), which must be imported.
class SeriesClassificationDirectDecorder(decoder.Decoder):
    """A tf.Example decoder for tfds classification datasets."""

    def __init__(self) -> None:
        super().__init__()

    def decode(self, tid, ds):
        parsed = tf.io.parse_single_example(ds, keys_to_features_direct)
        encoded = parsed['image_raw_ldseries']
        labels_encoded = parsed['labels']
        decoded = tf.io.decode_raw(encoded, tf.uint16)
        label = tf.io.decode_raw(labels_encoded, tf.int8)
        dates = tf.io.decode_raw(parsed['dates'], tf.int64)
        weight = tf.io.decode_raw(parsed['weights'], tf.float32)
        decoded = tf.reshape(decoded, [-1, 4, 42, 42])
        sample_dict = {
            'tid': tid,                    # tile ID
            'dates': dates,                # date list
            'localid': parsed['localid'],  # sample ID
            'imgs': decoded,               # image array
            'labels': label,               # label list
            'weights': weight
        }
        return sample_dict
# Simple map function (an alternative to the decoder class above)
def preprocessDirect(tid, record):
    parsed = tf.io.parse_single_example(record, keys_to_features_direct)
    encoded = parsed['image_raw_ldseries']
    labels_encoded = parsed['labels']
    decoded = tf.io.decode_raw(encoded, tf.uint16)
    label = tf.io.decode_raw(labels_encoded, tf.int8)
    dates = tf.io.decode_raw(parsed['dates'], tf.int64)
    weight = tf.io.decode_raw(parsed['weights'], tf.float32)
    decoded = tf.reshape(decoded, [-1, 4, 42, 42])
    return tid, dates, parsed['localid'], decoded, label, weight

t1 = parseRecordDirect('filename here')
dataset = t1.map(preprocessDirect,
                 num_parallel_calls=tf.data.experimental.AUTOTUNE)
Class Definition:
Dataset Construction:
First, we randomly generate 500 points for each tile; all points are aligned to the pixel-grid centres of the 60 m resolution subdatasets (e.g. B10), for consistency when comparing with other products. This is because other cloud detection methods may use the cirrus band, which is at 60 m resolution, as a feature.
Then, time-series image patches of two shapes are cropped with each point as the centre.
Patches of shape \(42 \times 42\) are cropped from the 10 m resolution bands (B2, B3, B4, B8) and are used to construct this dataset.
Patches of shape \(348 \times 348\) are cropped from the True Colour Image (TCI; see the Sentinel-2 user guide for details) file and are used for interpreting class labels.
Samples with a large number of timestamps can be time-consuming in the IO stage, so the time-series patches are divided into groups of at most 100 timestamps each.
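The grouping rule above (at most 100 timestamps per group) amounts to simple chunking, sketched below; how each group maps onto a TFRecord file is not specified here, so this is illustrative only.

```python
def split_time_series(timestamps, max_len=100):
    """Split a sample's timestamp list into consecutive groups of at most
    max_len entries, mirroring the grouping rule described in the text."""
    return [timestamps[i:i + max_len]
            for i in range(0, len(timestamps), max_len)]

# A hypothetical sample with 250 acquisition timestamps.
groups = split_time_series(list(range(250)))
print([len(g) for g in groups])  # [100, 100, 50]
```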
This study investigated the impact of graphical user interface (GUI) design on the efficiency and effectiveness of map-based tasks on mobile devices, using time-based weather data as a case study. Three different GUI designs (button-type, circle-type, and sidebar) were tested in a between-subjects design, with 50 participants completing a set of map-based tasks on each GUI design. The results showed that GUI design significantly affected the effectiveness of map-based tasks. Participants performed better at tasks involving the search for the highest and lowest temperature amplitudes on the button-type GUI whereas the circle-type GUI showed lower effectiveness for tasks involving the search for day temperatures. Analysis of the visual attention distribution based on fixation count revealed that different GUI designs led to different patterns of visual attention. The study highlights the importance of considering GUI design in the development of mobile map applications, particularly for map-based tasks involving time-based data. The study shows that separating the date from the time navigation panel reduces the necessary visual focus on the GUI itself and is a valuable insight for future GUI design.
Six figures, including a map of study sites, data distributions, and temporal and annual results.
This dataset contains the “GapMap Frontal to Temporal I" in the individual, single-subject template of the MNI Colin 27, as well as in the MNI ICBM 152 2009c nonlinear asymmetric reference space. In order to provide whole-brain coverage for the cortex within the Julich-Brain Atlas, yet-uncharted parts of the frontal cortex have been combined into the brain region “GapMap Frontal to Temporal I”. The distributions were modeled so that probabilistic gap maps were computed in analogy to the other maps of the Julich-Brain Atlas. The probabilistic map of “GapMap Frontal to Temporal I” is provided in NIfTI format for each hemisphere in each reference space. The Julich-Brain atlas relies on a modular, flexible and adaptive framework containing workflows to create the probabilistic brain maps for these structures. New maps are continuously replacing parts of “GapMap Frontal to Temporal I” as mapping progresses.
The National Forest Climate Change Maps project was developed by the Rocky Mountain Research Station (RMRS) and the Office of Sustainability and Climate to meet the needs of national forest managers for information on projected climate changes at a scale relevant to decision making processes, including forest plans. The maps use state-of-the-art science and are available for every national forest in the contiguous United States with relevant data coverage. Currently, the map sets include variables related to precipitation, air temperature, snow (including snow residence time and April 1 snow water equivalent), and stream flow.

Snow residence time (in days) and April 1 snow water equivalent (in mm) were modeled using the spatial analog models of Luce et al., 2014 (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2013WR014844); see also Lute and Luce, 2017 (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017WR020752). These models are built on precipitation and snow data from Snowpack Telemetry (SNOTEL) stations across the western United States and temperature data from the TopoWx dataset (https://rmets.onlinelibrary.wiley.com/doi/10.1002/joc.4127). They were calculated for the historical (1975-2005) and future (2071-2090) time periods, along with absolute and percent change.

Raster data are also available for download from the RMRS site (https://www.fs.usda.gov/rm/boise/AWAE/projects/NFS-regional-climate-change-maps/categories/us-raster-layers.html), along with PDF maps and detailed metadata (https://www.fs.usda.gov/rm/boise/AWAE/projects/NFS-regional-climate-change-maps/downloads/NationalForestClimateChangeMapsMetadata.pdf).
https://www.datainsightsmarket.com/privacy-policy
The digital map market, currently valued at $25.55 billion in 2025, is experiencing robust growth, projected to expand at a Compound Annual Growth Rate (CAGR) of 13.39% from 2025 to 2033. This expansion is fueled by several key drivers. The increasing adoption of location-based services (LBS) across diverse sectors like automotive, logistics, and smart city initiatives is a primary catalyst. Furthermore, advancements in technologies such as AI, machine learning, and high-resolution satellite imagery are enabling the creation of more accurate, detailed, and feature-rich digital maps. The shift towards cloud-based deployment models offers scalability and cost-effectiveness, further accelerating market growth. While data privacy concerns and the high initial investment costs for sophisticated mapping technologies present some challenges, the overall market outlook remains overwhelmingly positive. The competitive landscape is dynamic, with established players like Google, TomTom, and ESRI vying for market share alongside innovative startups offering specialized solutions. The segmentation of the market by solution (software and services), deployment (on-premise and cloud), and industry reveals significant opportunities for growth in sectors like automotive navigation, autonomous vehicle development, and precision agriculture, where real-time, accurate mapping data is crucial. The Asia-Pacific region, driven by rapid urbanization and technological advancements in countries like China and India, is expected to witness particularly strong growth.

The market's future hinges on continuous innovation. We anticipate a rise in the demand for 3D maps, real-time updates, and integration with other technologies like the Internet of Things (IoT) and augmented reality (AR). Companies are focusing on enhancing the accuracy and detail of their maps, incorporating real-time traffic data, and developing tailored solutions for specific industry needs.
The increasing adoption of 5G technology promises to further boost the market by enabling faster data transmission and real-time updates crucial for applications like autonomous driving and drone delivery. The development of high-precision mapping solutions catering to specialized sectors like infrastructure management and disaster response will also fuel future growth. Ultimately, the digital map market is poised for continued expansion, driven by technological advancements and increased reliance on location-based services across a wide spectrum of industries.

Recent developments include: December 2022 - The Linux Foundation partnered with some of the biggest technology companies in the world to build interoperable and open map data. The Overture Maps Foundation, as the new effort is called, is officially hosted by the Linux Foundation. Its ultimate aim is to power new map products through openly available datasets that can be used and reused across applications and businesses, with each member contributing data and resources. July 27, 2022 - Google announced the launch of its Street View experience in India in collaboration with Genesys International, an advanced mapping solutions company, and Tech Mahindra, a provider of digital transformation, consulting, and business re-engineering solutions and services. Google, Tech Mahindra, and Genesys International also plan to extend this to around 50 cities by the end of 2022.

Key drivers for this market are: Growth in Application for Advanced Navigation System in Automotive Industry; Surge in Demand for Geographic Information System (GIS); Increased Adoption of Connected Devices and Internet. Potential restraints include: Complexity in Integration of Traditional Maps with Modern GIS System. Notable trends are: Surge in Demand for GIS and GNSS to Influence the Adoption of Digital Map Technology.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
A collection of 10 brain maps. Each brain map is a 3D array of values representing properties of the brain at different locations.
SPM maps from the paper "Temporal chunking as a mechanism for unsupervised learning of task-sets", Bouchacourt, Palminteri, Koechlin, Ostojic, published in eLife, 2020.
This map contains a number of world-wide dynamic image services providing access to various Landsat scenes covering the landmass of the World for visual interpretation. Landsat 8 collects new scenes for each location on Earth every 16 days, assuming limited cloud coverage. The newest and near cloud-free scenes are displayed by default on top. Most scenes collected since 1 January 2015 are included. The service also includes scenes from the Global Land Survey* (circa 2010, 2005, 2000, 1990, 1975).

The service contains a range of predefined renderers for Multispectral, Panchromatic, and Pansharpened scenes. The layers in the service can be time-enabled so that applications can restrict the displayed scenes to a specific date range. This ArcGIS Server dynamic service can be used in Web Maps and in ArcGIS Desktop, Web, and Mobile applications using the REST-based image services API. Users can also export images, but the exported area is limited to a maximum of 2,000 columns x 2,000 rows per request.

Data source: The imagery in these services is sourced from the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA). The data for these services reside on the Landsat Public Datasets hosted on the Amazon Web Services cloud. Users can access full scenes from https://github.com/landsat-pds/landsat_ingestor/wiki/Accessing-Landsat-on-AWS, or alternatively visit http://landsatlook.usgs.gov to review and download full scenes from the complete USGS archive. For more information on Landsat 8 images, see http://landsat.usgs.gov/landsat8.php.

*The Global Land Survey includes images from Landsat 1 through Landsat 7. Band numbers and band combinations differ from those of Landsat 8, but have been mapped to the most appropriate band as in the above table.
For more information about the Global Land Survey, visit http://landsat.usgs.gov/science_GLS.php. For more information on each of the individual layers, see http://www.arcgis.com/home/item.html?id=d9b466d6a9e647ce8d1dd5fe12eb434b ; http://www.arcgis.com/home/item.html?id=6b003010cbe64d5d8fd3ce00332593bf ; http://www.arcgis.com/home/item.html?id=a7412d0c33be4de698ad981c8ba471e6
Accessibility is defined as the travel time to a location of interest using land-based (road/off-road) or water-based (navigable river, lake, and ocean) travel. Accessibility is computed using a cost-distance algorithm, which computes the “cost” of traveling between two locations on a regular raster grid; generally this cost is measured in units of time. The input GIS data and the underlying model were developed by Andrew Nelson in the GEM (Global Environment Monitoring) unit, in collaboration with the World Bank’s Development Research Group, between October 2007 and May 2008. The pixel values represent minutes of travel time. Available dataset: Joint Research Centre - Land Resource Management Unit
https://www.datainsightsmarket.com/privacy-policy
The global real-time maps market is projected to experience substantial growth, with a CAGR of approximately XX% during the forecast period of 2025-2033. Valued at XXX million in 2025, the market is anticipated to surpass XXX million by 2033. This growth can be attributed to the increasing adoption of location-based services and the rise of autonomous vehicles. Additionally, the growing demand for personalized navigation and traffic management solutions is expected to fuel market expansion. Key market drivers include the increasing penetration of smartphones and the adoption of cloud-based mapping services. Furthermore, the integration of real-time traffic data and the use of advanced technologies such as artificial intelligence (AI) and machine learning (ML) are expected to enhance the accuracy and efficiency of real-time maps, thus driving market growth. Despite these positive factors, the market may face challenges related to data privacy and security concerns. Nevertheless, strategic partnerships and collaborations among market participants are anticipated to create new opportunities for innovation and growth in the real-time maps market.
The travel time map was generated using the Pedestrian Evacuation Analyst model from the USGS. The travel time analysis uses ESRI's Path Distance tool to find the shortest distance across a cost surface from any point in the hazard zone to a safe zone. This cost analysis considers the direction of movement and assigns a higher cost to steeper slopes, based on a table contained within the model. The analysis also adds in the energy costs of crossing different types of land cover, assuming that less energy is expended walking along a road than walking across a sandy beach. To produce the time map, the evacuation surface output from the model is grouped into 1-minute increments for easier visualization. The times in the attribute table represent the estimated time to travel on foot to the nearest safe zone at the speed designated in the map title. The bridge or nobridge name in the map title identifies whether bridges were represented in the modeling or whether they were removed prior to modeling to estimate the impact on travel times from earthquake-damaged bridges.
https://dataintelo.com/privacy-and-policy
The global real-time maps market size was valued at approximately USD 5.3 billion in 2023 and is projected to reach USD 12.7 billion by 2032, growing at a compound annual growth rate (CAGR) of 10.2% during the forecast period. This robust growth is driven by increasing demand for accurate and instantaneous geographic data across various sectors, including transportation, logistics, automotive, and more.
One of the primary growth factors for the real-time maps market is the widespread adoption of smartphones and connected devices. The proliferation of these devices has created a surge in demand for navigation and location-based services, which rely heavily on real-time mapping technologies. Additionally, the advent of advanced technologies such as 5G and the Internet of Things (IoT) has further accelerated the need for real-time data, enhancing the accuracy and efficiency of mapping services. These technologies enable faster data transmission, thus providing users with up-to-date information in real-time.
Another significant growth factor is the increasing emphasis on smart city developments. Governments and urban planners across the globe are investing heavily in smart city projects, which require sophisticated mapping solutions for efficient traffic management, public transportation systems, and emergency services. Real-time maps are integral to these projects as they provide the necessary data for monitoring and managing urban infrastructure, thereby improving the quality of life for urban residents. Moreover, the integration of artificial intelligence (AI) and machine learning (ML) in mapping technologies is enhancing the capabilities of real-time maps, making them more predictive and adaptive.
The rise of autonomous vehicles is also a crucial driver for the real-time maps market. Autonomous and connected vehicles rely on real-time maps for navigation and timely decision-making. The automotive industry is investing significantly in mapping technologies to improve the safety and reliability of autonomous driving systems. Real-time maps provide these vehicles with up-to-date information about road conditions, traffic patterns, and potential hazards, which is essential for safe and efficient operation. This trend is expected to continue, driving further growth in the market.
Regionally, North America holds a significant share of the real-time maps market, driven by the presence of major technology companies and high adoption rates of advanced technologies. However, the Asia Pacific region is expected to witness the highest growth rate during the forecast period. The rapid urbanization, increasing investments in smart city projects, and the expanding automotive industry in countries like China and India are major contributors to this growth. Europe also presents substantial opportunities, particularly in the automotive and transportation sectors, where real-time mapping solutions are increasingly being adopted.
The real-time maps market is segmented by component into software, hardware, and services. The software segment is expected to hold the largest market share during the forecast period. This can be attributed to the increasing demand for advanced mapping software that offers enhanced features such as real-time updates, high accuracy, and predictive analytics. Companies are continuously investing in software development to improve the functionality and user experience of their mapping solutions. The integration of AI and ML into mapping software is also a significant trend, enabling more intelligent and adaptive maps.
The hardware segment, though smaller compared to software, plays a crucial role in the real-time maps market. Hardware components such as GPS devices, sensors, and other tracking devices are essential for capturing and transmitting real-time data. The growth in the IoT market and advancements in sensor technologies have significantly improved the capabilities of hardware components, making them more accurate and reliable. This has led to increased adoption of real-time mapping hardware in various applications, particularly in the automotive and transportation sectors.
Services form the third component of the real-time maps market, encompassing a range of offerings such as consulting, implementation, and maintenance services. As companies adopt real-time mapping solutions, demand has increased for specialized services that ensure these solutions are properly integrated and functional. Service providers offer expertise in customizing and optimizing mapping solutions.
This map data product provides accurate, real-time, and historical GPS event records across Europe. Ideal for applications in mapping, spatial analytics, and movement tracking, the dataset delivers location intelligence with granular detail and high data quality.
Data Composition
Each record contains:
- GPS coordinates (latitude, longitude)
- Timestamp (epoch & date)
- Device ID (MAID: IDFA/GAID)
- Country code (ISO3)
- Horizontal accuracy (85% fill rate)
- Optional: IP address, carrier, device model
Access & Delivery
The dataset is available via API with polygon queries (up to 10,000 tiles), enabling targeted spatial analysis for POIs, regions, or entire cities. Data can be delivered hourly or daily in JSON, CSV, or Parquet formats, via AWS S3, Google Cloud, or direct API access. Historical backfill is available from September 2024.
Key Attributes
- Real-time updates, with 95% of events available within 3 days
- Custom schema mapping & folder structures
- GDPR & CCPA compliant data sourcing and opt-out processes
- Credit-based pricing for scalability

Applications
- Map creation and enhancement
- Urban mobility mapping
- Retail catchment analysis
- Transport route optimization
- POI mapping and visitation analytics
- Geospatial risk and impact studies
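As an illustration of consuming the record schema listed above, the snippet below parses a JSON event and converts its epoch timestamp. The sample record and field names are hypothetical; the vendor's actual JSON keys may differ:

```python
import json
from datetime import datetime, timezone

# Hypothetical sample record illustrating the documented fields.
raw = '''{
  "latitude": 48.8584, "longitude": 2.2945,
  "timestamp": 1727740800,
  "maid": "38400000-8cf0-11bd-b23e-10b96e40000d",
  "country": "FRA",
  "horizontal_accuracy": 12.5
}'''

record = json.loads(raw)
# Epoch timestamps should be interpreted as UTC.
when = datetime.fromtimestamp(record["timestamp"], tz=timezone.utc)
# Crude bounding-box check against metropolitan France.
in_france_bbox = (-5.5 <= record["longitude"] <= 9.6
                  and 41.0 <= record["latitude"] <= 51.5)
print(record["country"], when.date(), in_france_bbox)
```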
We performed a detailed analysis of the in vitro differentiation of induced pluripotent stem cells to macrophages and dendritic cells.