This map visualisation service provides access to the information layers published in the Spatial Data Infrastructure of Navarra that correspond to the public data of the SITNA. The Web Map Service (WMS) defined by the OGC (Open Geospatial Consortium) dynamically produces maps of spatially referenced data from geographic information.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains a list of 186 Digital Humanities projects leveraging information visualisation techniques. Each project has been classified according to visualisation and interaction methods, narrativity and narrative solutions, domain, methods for the representation of uncertainty and interpretation, and the employment of critical and custom approaches to visually represent humanities data.
The project_id column contains unique internal identifiers assigned to each project. Meanwhile, the last_access column records the most recent date (in DD/MM/YYYY format) on which each project was reviewed based on the web address specified in the url column.
The remaining columns can be grouped into descriptive categories aimed at characterising projects according to different aspects:
Narrativity. It reports the presence of information visualisation techniques employed within narrative structures. Here, the term narrative encompasses both author-driven linear data stories and more user-directed experiences where the narrative sequence is determined by user exploration [1]. We define two columns to identify projects using visualisation techniques in narrative or non-narrative sections. Both conditions can be true for projects employing visualisations in both contexts. Columns:
non_narrative (boolean)
narrative (boolean)
Domain. The humanities domain to which the project is related. We rely on [2] and the chapters of the first part of [3] to abstract a set of general domains. Column:
domain (categorical):
History and archaeology
Art and art history
Language and literature
Music and musicology
Multimedia and performing arts
Philosophy and religion
Other: both extra-list domains and cases of collections without a unique or specific thematic focus.
Visualisation of uncertainty and interpretation. Building upon the frameworks proposed by [4] and [5], a set of categories was identified, highlighting a distinction between precise and impressional communication of uncertainty. Precise methods explicitly represent quantifiable uncertainty such as missing, unknown, or uncertain data, precisely locating and categorising it using visual variables and positioning. Two sub-categories are: interactive distinction, when uncertain data is not visually distinguishable from the rest of the data but can be dynamically isolated or included/excluded categorically through interaction techniques (usually filters); and visual distinction, when uncertainty visually "emerges" from the representation by means of dedicated glyphs and spatial or visual cues and variables. On the other hand, impressional methods communicate the constructed and situated nature of data [6], exposing the interpretative layer of the visualisation and indicating more abstract and unquantifiable uncertainty using graphical aids or interpretative metrics. Two sub-categories are: ambiguation, when graphical expedients, such as permeable glyph boundaries or broken lines, visually convey the ambiguity of a phenomenon; and interpretative metrics, when expressive, non-scientific, or non-punctual metrics are used to build a visualisation. Column:
uncertainty_interpretation (categorical):
Interactive distinction
Visual distinction
Ambiguation
Interpretative metrics
Critical adaptation. We identify projects in which, with regard to at least one visualisation, the following criteria are fulfilled: 1) it avoids repurposing prepackaged, generic-use, or ready-made solutions; 2) it is tailored and unique, reflecting the peculiarities of the phenomena at hand; 3) it avoids simplifications, embracing and depicting complexity and promoting time-consuming visualisation-based inquiry. Column:
critical_adaptation (boolean)
Non-temporal visualisation techniques. We adopt and partially adapt the terminology and definitions from [7]. A column is defined for each type of visualisation and accounts for its presence within a project, also including stacked layouts and more complex variations. Columns and inclusion criteria:
plot (boolean): visual representations that map data points onto a two-dimensional coordinate system.
cluster_or_set (boolean): sets or cluster-based visualisations used to unveil possible inter-object similarities.
map (boolean): geographical maps used to show spatial insights. While we do not specify the variants of maps (e.g., pin maps, dot density maps, flow maps, etc.), we make an exception for maps where each data point is represented by another visualisation (e.g., a map where each data point is a pie chart) by accounting for the presence of both in their respective columns.
network (boolean): visual representations highlighting relational aspects through nodes connected by links or edges.
hierarchical_diagram (boolean): tree-like structures such as tree diagrams, radial trees, but also dendrograms. They differ from networks for their strictly hierarchical structure and absence of closed connection loops.
treemap (boolean): still hierarchical, but highlighting quantities expressed by means of area size. It also includes circle packing variants.
word_cloud (boolean): clouds of words, where each instance's size is proportional to its frequency in a related context.
bars (boolean): includes bar charts, histograms, and variants. It coincides with "bar charts" in [7] but with a more generic term to refer to all bar-based visualisations.
line_chart (boolean): the display of information as sequential data points connected by straight-line segments.
area_chart (boolean): similar to a line chart but with a filled area below the segments. It also includes density plots.
pie_chart (boolean): circular graphs divided into slices which can also use multi-level solutions.
plot_3d (boolean): plots that use a third dimension to encode an additional variable.
proportional_area (boolean): representations used to compare values through area size. Typically, using circle- or square-like shapes.
other (boolean): it includes all other types of non-temporal visualisations that do not fall into the aforementioned categories.
Temporal visualisations and encodings. In addition to non-temporal visualisations, a group of techniques to encode temporality is considered in order to enable comparisons with [7]. Columns:
timeline (boolean): the display of a list of data points or spans in chronological order. They include timelines working either with a scale or simply displaying events in sequence. As in [7], we also include structured solutions resembling Gantt chart layouts.
temporal_dimension (boolean): to report when time is mapped to any dimension of a visualisation, with the exclusion of timelines. We use the term "dimension" and not "axis" as in [7] as more appropriate for radial layouts or more complex representational choices.
animation (boolean): temporality is perceived through an animation changing the visualisation according to time flow.
visual_variable (boolean): another visual encoding strategy is used to represent any temporality-related variable (e.g., colour).
Interactions. A set of categories to assess afforded interactions based on the concept of user intent [8] and user-allowed perceptualisation data actions [9]. The following categories roughly match the manipulative subset of methods of the "how" an interaction is performed in the conception of [10]. Only interactions that affect the appearance of the visualisation or the visual representation of its data points, symbols, and glyphs are taken into consideration. Columns:
basic_selection (boolean): the demarcation of an element either for the duration of the interaction or more permanently until the occurrence of another selection.
advanced_selection (boolean): the demarcation involves both the selected element and connected elements within the visualisation or leads to brush and link effects across views. Basic selection is tacitly implied.
navigation (boolean): interactions that allow moving, zooming, panning, rotating, and scrolling the view, but only when applied to the visualisation and not to the web page. It also includes "drill" interactions (to navigate through different levels or portions of data detail, often generating a new view that replaces or accompanies the original) and "expand" interactions generating new perspectives on data by expanding and collapsing nodes.
arrangement (boolean): the organisation of visualisation elements (symbols, glyphs, etc.) or multi-visualisation layouts spatially through drag and drop or
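The boolean and categorical columns described above lend themselves to straightforward filtering. A minimal sketch follows; the rows and the in-memory construction are invented for illustration (the real data would be loaded from the distributed file), while the column names come from the schema above. Note that last_access uses DD/MM/YYYY and therefore needs day-first parsing.

```python
import pandas as pd

# Toy rows mirroring the described schema; values are invented.
df = pd.DataFrame({
    "project_id": [1, 2, 3],
    "last_access": ["05/11/2024", "17/10/2024", "01/09/2024"],  # DD/MM/YYYY
    "url": ["https://example.org/a", "https://example.org/b", "https://example.org/c"],
    "narrative": [True, False, True],
    "non_narrative": [False, True, True],
    "domain": ["History and archaeology", "Art and art history", "Other"],
    "critical_adaptation": [True, False, False],
})

# The dataset specifies DD/MM/YYYY, so day-first parsing is required.
df["last_access"] = pd.to_datetime(df["last_access"], dayfirst=True)

# Example query: narrative projects that also feature a critical adaptation.
subset = df[df["narrative"] & df["critical_adaptation"]]
```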
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Intensity variations over time in resting BOLD fMRI exhibit spatial correlation patterns consistent with a set of large scale cortical networks. However, visualizations of this data on the brain surface, even after extensive preprocessing, are dominated by local intensity fluctuations that obscure larger scale behavior. Our novel adaptation of non-local means (NLM) filtering, which we refer to as temporal NLM or tNLM, reduces these local fluctuations without the spatial blurring that occurs when using standard linear filtering methods. We show examples of tNLM filtering that allow direct visualization of spatio-temporal behavior on the cortical surface. These results reveal patterns of activity consistent with known networks as well as more complex dynamic changes within and between these networks. This ability to directly visualize brain activity may facilitate new insights into spontaneous brain dynamics. Further, temporal NLM can also be used as a preprocessor for resting fMRI for exploration of dynamic brain networks. We demonstrate its utility through application to graph-based functional cortical parcellation. Simulations with known ground truth functional regions demonstrate that tNLM filtering prior to parcellation avoids the formation of false parcels that can arise when using linear filtering. Application to resting fMRI data from the Human Connectome Project shows significant improvement, in comparison to linear filtering, in quantitative agreement with functional regions identified independently using task-based experiments as well as in test-retest reliability.
https://www.verifiedmarketresearch.com/privacy-policy/
Italy Geospatial Analytics Market size was valued at USD 309.12 Million in 2024 and is projected to reach USD 575.78 Million by 2032, growing at a CAGR of 8.09% during the forecast period from 2026-2032.
Italy Geospatial Analytics Market: Definition/Overview
Geospatial analytics is the process of gathering, manipulating, and visualizing geographic data from many sources such as GPS, satellite imagery, IoT devices, and social media. It uses georeferenced data to identify patterns, trends, and relationships between people, locations, and events. Urban planning, disaster management, transportation optimization, environmental monitoring, retail site selection, agricultural efficiency, and telecommunications network construction are some of the applications. Maps, graphs, and models are used to demonstrate these insights and help make informed decisions.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview
3DHD CityScenes is the most comprehensive, large-scale high-definition (HD) map dataset to date, annotated in the three spatial dimensions of globally referenced, high-density LiDAR point clouds collected in urban domains. Our HD map covers 127 km of road sections of the inner city of Hamburg, Germany, including 467 km of individual lanes. In total, our map comprises 266,762 individual items.
Our corresponding paper (published at ITSC 2022) is available here.
Further, we have applied 3DHD CityScenes to map deviation detection here.
Moreover, we release code to facilitate the application of our dataset and the reproducibility of our research. Specifically, our 3DHD_DevKit comprises:
The DevKit is available here:
https://github.com/volkswagen/3DHD_devkit.
The dataset and DevKit have been created by Christopher Plachetka as project lead during his PhD period at Volkswagen Group, Germany.
When using our dataset, you are welcome to cite:
@INPROCEEDINGS{9921866,
  author={Plachetka, Christopher and Sertolli, Benjamin and Fricke, Jenny and Klingner, Marvin and Fingscheidt, Tim},
  booktitle={2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)},
  title={3DHD CityScenes: High-Definition Maps in High-Density Point Clouds},
  year={2022},
  pages={627-634}}
Acknowledgements
We thank the following interns for their exceptional contributions to our work.
The European large-scale project Hi-Drive (www.Hi-Drive.eu) supports the publication of 3DHD CityScenes and encourages the general publication of information and databases facilitating the development of automated driving technologies.
The Dataset
After downloading, the 3DHD_CityScenes folder provides five subdirectories, which are explained briefly in the following.
1. Dataset
This directory contains the training, validation, and test set definition (train.json, val.json, test.json) used in our publications. Respective files contain samples that define a geolocation and the orientation of the ego vehicle in global coordinates on the map.
During dataset generation (done by our DevKit), samples are used to take crops from the larger point cloud. Also, map elements in reach of a sample are collected. Both modalities can then be used, e.g., as input to a neural network such as our 3DHDNet.
To read any JSON-encoded data provided by 3DHD CityScenes in Python, you can use the following code snippet as an example.
import json

json_path = r"E:\3DHD_CityScenes\Dataset\train.json"
with open(json_path) as jf:
    data = json.load(jf)
print(data)
2. HD_Map
Map items are stored as lists of items in JSON format. In particular, we provide:
3. HD_Map_MetaData
Our high-density point cloud used as the basis for annotating the HD map is split into 648 tiles. This directory contains the geolocation for each tile as a polygon on the map. You can view the respective tile definition using QGIS. Alternatively, we also provide the respective polygons as lists of UTM coordinates in JSON.
Files with the endings .dbf, .prj, .qpj, .shp, and .shx belong to the tile definition as a "shape file" (commonly used in geodesy) that can be viewed using QGIS. The JSON file contains the same information provided in a different format used in our Python API.
4. HD_PointCloud_Tiles
The high-density point cloud tiles are provided in global UTM32N coordinates and are encoded in a proprietary binary format. The first 4 bytes (integer) encode the number of points contained in that file. Subsequently, all point cloud values are provided as arrays. First all x-values, then all y-values, and so on. Specifically, the arrays are encoded as follows.
After reading, the respective values have to be unnormalized. As an example, you can use the following code snippet to read the point cloud data. For visualization, you can use the pptk package, for instance.
import numpy as np
import pptk
file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"
pc_dict = {}
key_list = ['x', 'y', 'z', 'intensity', 'is_ground']
type_list = ['
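The snippet above is cut off before the type list. Based on the layout described earlier (a 4-byte point count followed by one contiguous array per field), a self-contained sketch might look as follows; the dtypes in type_list are assumptions made for this sketch, not the dataset's documented encodings, and the demo writes a synthetic two-point tile rather than reading a real one.

```python
import os
import struct
import tempfile

import numpy as np

# Field names come from the snippet above; dtypes below are assumed.
key_list = ["x", "y", "z", "intensity", "is_ground"]
type_list = [np.float32, np.float32, np.float32, np.uint8, np.uint8]  # assumed

def read_tile(file_path):
    """Read a tile: a 4-byte little-endian point count, then one array per field."""
    pc_dict = {}
    with open(file_path, "rb") as f:
        num_points = struct.unpack("<i", f.read(4))[0]
        for key, dtype in zip(key_list, type_list):
            pc_dict[key] = np.fromfile(f, dtype=dtype, count=num_points)
    return num_points, pc_dict

# Round-trip demo with a synthetic two-point tile.
demo_path = os.path.join(tempfile.gettempdir(), "demo_tile.bin")
with open(demo_path, "wb") as f:
    f.write(struct.pack("<i", 2))
    for dtype in type_list:
        np.array([0, 1], dtype=dtype).tofile(f)

num_points, pc_dict = read_tile(demo_path)
```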
5. Trajectories
We provide 15 real-world trajectories recorded during a measurement campaign covering the whole HD map. Trajectory samples are provided at approx. 30 Hz and are encoded in JSON.
These trajectories were used to provide the samples in train.json, val.json, and test.json with realistic geolocations and orientations of the ego vehicle.
- OP1–OP5 cover the majority of the map with 5 trajectories.
- RH1–RH10 cover the majority of the map with 10 trajectories.
Note that OP5 is split into three separate parts, a-c. RH9 is split into two parts, a-b. Moreover, OP4 mostly equals OP1 (thus, we speak of 14 trajectories in our paper). For completeness, however, we provide all recorded trajectories here.
According to our latest research, the global Mapping and Navigation APIs market size reached USD 13.8 billion in 2024, reflecting robust adoption across industries. The market is projected to expand at a CAGR of 17.1% during the forecast period, reaching USD 45.2 billion by 2033. This remarkable growth is primarily driven by increased demand for real-time location-based services, the proliferation of connected devices, and the digital transformation of key sectors such as automotive, retail, and logistics.
The upward trajectory of the Mapping and Navigation APIs market is fueled by the rapid integration of geospatial technologies into business operations. Organizations across various sectors are leveraging mapping and navigation APIs to enhance user experiences, streamline logistics, and optimize resource allocation. The surge in smartphone penetration and the evolution of 5G networks have further accelerated the adoption of real-time mapping and navigation solutions. As businesses seek to gain a competitive edge through advanced analytics and location intelligence, the demand for robust APIs continues to surge. Additionally, the increasing use of mapping APIs in ride-hailing, food delivery, and last-mile logistics is reshaping urban mobility and commerce.
Another significant growth factor is the rising importance of data-driven decision-making in enterprises. Mapping and navigation APIs enable organizations to visualize, analyze, and interpret spatial data, resulting in improved operational efficiency and customer engagement. The deployment of these APIs in sectors such as retail and e-commerce allows businesses to offer personalized location-based promotions and seamless delivery tracking. In the automotive industry, APIs are critical for powering in-car navigation systems, autonomous vehicles, and fleet management solutions. As businesses continue to invest in digital transformation, the integration of advanced mapping and navigation functionalities is becoming a strategic imperative.
Technological advancements are also playing a pivotal role in propelling the Mapping and Navigation APIs market forward. The emergence of AI-powered geospatial analytics, real-time traffic updates, and augmented reality navigation is enhancing the capabilities of mapping APIs. The development of high-definition maps and 3D visualization tools is enabling more accurate and immersive navigation experiences. Furthermore, the ongoing collaboration between API providers and industry stakeholders is fostering innovation and expanding the application scope of mapping and navigation solutions. The increasing focus on smart city initiatives and IoT integration is expected to create new growth opportunities for API vendors in the coming years.
From a regional perspective, North America currently dominates the Mapping and Navigation APIs market, accounting for the largest revenue share in 2024. The region's leadership is attributed to the presence of major API providers, high technology adoption, and substantial investments in smart infrastructure. Asia Pacific is emerging as the fastest-growing region, driven by rapid urbanization, expanding digital economies, and government initiatives to modernize transportation and logistics. Europe also holds a significant market share, supported by strong demand in automotive and logistics sectors. The Middle East & Africa and Latin America are witnessing steady growth, underpinned by increasing digitalization and investments in smart city projects.
In the realm of digital transformation, Work Zone Mapping APIs are emerging as a crucial tool for enhancing safety and efficiency in construction and road maintenance projects. These APIs provide real-time data on work zone locations, status, and traffic conditions, enabling better planning and coordination among stakeholders. By integrating Work Zone Mapping APIs, transportation agencies and construction companies can improve traffic management, reduce congestion, and enhance worker safety. The ability to access up-to-date information on work zones allows for dynamic rerouting and timely communication with drivers, minimizing disruptions and improving overall road safety. As urban areas continue to expand and infrastructure projects become more complex, the demand for Work Zone Mapping APIs is expected to grow, offerin
According to our latest research, the global 3D Mapping Robots for Interiors market size reached USD 1.19 billion in 2024, and is set to expand at a robust CAGR of 17.8% from 2025 to 2033. By the end of 2033, the market is forecasted to attain a value of USD 5.13 billion. This remarkable growth is primarily driven by the increasing adoption of automation and digitalization across architecture, construction, and facility management sectors, where high-precision interior mapping has become indispensable for optimizing workflows and enhancing spatial analytics.
One of the most significant growth factors for the 3D Mapping Robots for Interiors market is the rising demand for efficient and accurate indoor mapping solutions in the commercial and industrial sectors. Businesses are increasingly recognizing the value of high-resolution, three-dimensional spatial data for applications such as facility management, real estate visualization, and interior renovations. The integration of advanced technologies like LiDAR and SLAM (Simultaneous Localization and Mapping) has dramatically improved the precision and speed of interior mapping, reducing human error and manual labor. As a result, organizations are able to streamline their operations, improve asset utilization, and make data-driven decisions, fueling further market expansion.
Another pivotal driver is the rapid advancement in robotics and artificial intelligence, which has enabled the development of autonomous and semi-autonomous 3D mapping robots. These robots are capable of navigating complex indoor environments with minimal human intervention, providing real-time spatial data and high-definition maps. The proliferation of smart buildings and the increasing need for digital twins in urban development have further accelerated the adoption of 3D mapping robots. Additionally, the growing trend of remote monitoring and predictive maintenance in facility management is pushing organizations to invest in these innovative solutions, thereby contributing to the market's sustained growth.
Consumer expectations in the residential and real estate sectors are also reshaping the 3D Mapping Robots for Interiors market. Homeowners, property developers, and real estate agents are leveraging 3D mapping robots to create immersive virtual tours and precise floor plans, enhancing the buying and selling experience. The increasing popularity of online property listings and virtual walkthroughs is driving demand for high-quality interior maps, which only advanced 3D mapping robots can deliver. This shift is opening up new revenue streams for manufacturers and service providers, as the technology becomes more accessible and affordable for smaller enterprises and individual users.
From a regional perspective, North America currently dominates the 3D Mapping Robots for Interiors market, owing to early adoption of advanced robotics, a strong presence of key industry players, and significant investments in smart infrastructure projects. Europe follows closely, driven by stringent regulations on building safety and energy efficiency, which necessitate accurate interior mapping. Meanwhile, the Asia Pacific region is expected to witness the fastest CAGR of 21.3% through 2033, fueled by rapid urbanization, large-scale construction activities, and increasing technological adoption in countries like China, Japan, and South Korea. The Middle East & Africa and Latin America are also showing promising growth, albeit from a smaller base, as digital transformation initiatives gain momentum in these regions.
The Product Type segment of the 3D Mapping Robots for Interiors market is categorized into Autonomous 3D Mapping Robots, Semi-Autonomous 3D Mapping Robots, and Manual 3D Mapping Robots. Among these, Autonomous 3D Mapping Robots have emerged as the leading segment, accounting for over 47% of the market share in 2024. Their ability to operate independently, navigate co
https://www.verifiedmarketresearch.com/privacy-policy/
Israel Geospatial Analytics Market size was valued at USD 1.89 Billion in 2024 and is projected to reach USD 2.89 Billion by 2032, growing at a CAGR of 5.47% during the forecast period from 2026-2032.
Israel Geospatial Analytics Market: Definition/Overview
Geospatial analytics is the process of gathering, manipulating, and visualizing geographic data from many sources such as GPS, satellite imagery, IoT devices, and social media. It uses georeferenced data to identify patterns, trends, and connections between people, locations, and events. Urban planning, disaster management, transportation optimization, environmental monitoring, retail site selection, agricultural efficiency, and telecommunications network construction are some of the applications.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The MEANS-ST1.0 dataset consists of a "Data File" and a "Readme File". The "Data File" serves as the core file, while the "Readme File" provides explanations of abbreviations and units, along with a list of key parameters. Within the data files, we offer three different formats of anthropogenic pollutant discharge datasets. The first format is stored as GeoTIFF files, which can be used in conjunction with GIS software for overall characterization and spatial distribution analysis. The spatial resolution is 1 km, covering three representative years (1980, 2000, and 2020) and providing data on total anthropogenic nitrogen discharges, as well as discharges from five types of anthropogenic pollutant sources: urban residential, rural residential, industry, crop farming, and livestock farming. The second format comprises ten NetCDF files, suitable for constructing two-dimensional or multi-dimensional models and conducting data visualization analysis. These files have a spatial resolution of 1 km and contain monthly data for different years (1980, 2000, and 2020) on total TN and TP discharges and the five types of anthropogenic pollutant sources. The third format of the dataset is Excel files, supporting the construction of a national integrated model and providing yearly data on anthropogenic pollutant discharges for provincial administrative units, including both total and categorized discharges. The MEANS-ST1.0 dataset incorporates the most comprehensive spatiotemporal dynamic parameters, enabling a fine-grained analysis of the long-term dynamics of China's anthropogenic nutrient discharges from both spatial and temporal perspectives.
https://creativecommons.org/publicdomain/zero/1.0/
This synthetic dataset simulates 300 global cities across 6 major geographic regions, designed specifically for unsupervised machine learning and clustering analysis. It explores how economic status, environmental quality, infrastructure, and digital access shape urban lifestyles worldwide.
| Aspect | Description | Notes |
|---|---|---|
| 10 Features | Economic, environmental & social indicators | Realistically scaled |
| 300 Cities | Europe, Asia, Americas, Africa, Oceania | Diverse distributions |
| Strong Correlations | Income → Rent (+0.8), Density → Pollution (+0.6) | ML-ready |
| No Missing Values | Clean, preprocessed data | Ready for analysis |
| 4-5 Natural Clusters | Metropolitan hubs, eco-towns, developing centers | Pre-validated |
- Realistic Correlations: Income strongly predicts rent (+0.8), internet access (+0.7), and happiness (+0.6)
- Regional Diversity: Each region has distinct economic and environmental characteristics
- Clustering-Ready: Naturally separable into 4-5 lifestyle archetypes
- Beginner-Friendly: No data cleaning required, includes example code
- Documented: Comprehensive README with methodology and use cases
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
# Load and prepare
df = pd.read_csv('city_lifestyle_dataset.csv')
X = df.drop(['city_name', 'country'], axis=1)
X_scaled = StandardScaler().fit_transform(X)
# Cluster
kmeans = KMeans(n_clusters=5, random_state=42)
df['cluster'] = kmeans.fit_predict(X_scaled)
# Analyze
print(df.groupby('cluster').mean())
After working with this dataset, you will be able to:
1. Apply K-Means, DBSCAN, and Hierarchical Clustering
2. Use PCA for dimensionality reduction and visualization
3. Interpret correlation matrices and feature relationships
4. Create geographic visualizations with cluster assignments
5. Profile and name discovered clusters based on characteristics
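For the PCA step mentioned above, a minimal sketch is shown below. The random matrix stands in for the dataset's 300 × 10 scaled feature table; in practice you would reuse X_scaled from the clustering snippet.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the dataset's numeric features
# (300 cities x 10 indicators, per the description above).
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 10))

# Standardise, then project onto the first two principal components,
# the usual way to plot cluster assignments in 2-D.
X_scaled = StandardScaler().fit_transform(X)
coords = PCA(n_components=2).fit_transform(X_scaled)
```

The 2-D coords array can then be scattered with cluster labels as colours.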
| Cluster | Characteristics | Example Cities |
|---|---|---|
| Metropolitan Tech Hubs | High income, density, rent | Silicon Valley, Singapore |
| Eco-Friendly Towns | Low density, clean air, high happiness | Nordic cities |
| Developing Centers | Mid income, high density, poor air | Emerging markets |
| Low-Income Suburban | Low infrastructure, income | Rural areas |
| Industrial Mega-Cities | Very high density, pollution | Manufacturing hubs |
Unlike random synthetic data, this dataset was carefully engineered with:
- Realistic correlation structures based on urban research
- Regional characteristics matching real-world patterns
- Optimal cluster separability (validated via silhouette scores)
- Comprehensive documentation and starter code
- Learn clustering without data cleaning hassles
- Practice PCA and dimensionality reduction
- Create beautiful geographic visualizations
- Understand feature correlation in real-world contexts
- Build a portfolio project with clear business insights
This dataset was designed for educational purposes in machine learning and data science. While synthetic, it reflects real patterns observed in global urban development research.
Happy Clustering!
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Definition of data levels defined by the BICCN. Columns are detailed definition for each specific modality profiled. (XLSX)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Racial geography studies the spatial distributions of multiracial populations. Technical challenges arise from the fact that US Census data, upon which all US-based studies rely, is only available in the form of spatial aggregates at a few levels of granularity. This negatively affects spatial analysis and, consequently, the quantification of racial segregation, especially on a smaller length scale. A recent methodology called the Racial Landscape (RL) stochastically disaggregates racial data at the level of census block aggregates into a grid of monoracial cells. RL-transformed racial data makes possible pattern-based, zoneless analysis and visualization of racial geography. Here, we introduce the National Racial Geography Dataset 2020 (NRGD2020), a collection of RL-based grids calculated from the 2020 census data and covering the entire conterminous US. It includes a virtual image layer for a bird's-eye-like view visualization of the spatial distribution of racial sub-populations, numerical grids for calculating racial diversity and segregation within user-defined regions, and precalculated maps of racial diversity and segregation on various length scales. NRGD2020 aims to facilitate and extend spatial analyses of racial geography and to make it more interpretable by tightly integrating quantitative analysis with visualization (mapping).
Overview
The Office of the Geographer and Global Issues at the U.S. Department of State produces the Large Scale International Boundaries (LSIB) dataset. The current edition is version 11.4 (published 24 February 2025). The 11.4 release contains updated boundary lines and data refinements designed to extend the functionality of the dataset. These data and generalized derivatives are the only international boundary lines approved for U.S. Government use. The contents of this dataset reflect U.S. Government policy on international boundary alignment, political recognition, and dispute status. They do not necessarily reflect de facto limits of control.

National Geospatial Data Asset
This dataset is a National Geospatial Data Asset (NGDAID 194) managed by the Department of State. It is a part of the International Boundaries Theme created by the Federal Geographic Data Committee.

Dataset Source Details
Sources for these data include treaties, relevant maps, and data from boundary commissions, as well as national mapping agencies. Where available and applicable, the dataset incorporates information from courts, tribunals, and international arbitrations. The research and recovery process includes analysis of satellite imagery and elevation data. Due to the limitations of source materials and processing techniques, most lines are within 100 meters of their true position on the ground.

Cartographic Visualization
The LSIB is a geospatial dataset that, when used for cartographic purposes, requires additional styling. The LSIB download package contains example style files for commonly used software applications. The attribute table also contains embedded information to guide the cartographic representation. Additional discussion of these considerations can be found in the Use of Core Attributes in Cartographic Visualization section below.
Additional cartographic information pertaining to the depiction and description of international boundaries or areas of special sovereignty can be found in Guidance Bulletins published by the Office of the Geographer and Global Issues: https://data.geodata.state.gov/guidance/index.html

Contact
Direct inquiries to internationalboundaries@state.gov. Direct download: https://data.geodata.state.gov/LSIB.zip

Attribute Structure
The dataset uses the following attributes divided into two categories:

ATTRIBUTE NAME | ATTRIBUTE STATUS
CC1 | Core
CC1_GENC3 | Extension
CC1_WPID | Extension
COUNTRY1 | Core
CC2 | Core
CC2_GENC3 | Extension
CC2_WPID | Extension
COUNTRY2 | Core
RANK | Core
LABEL | Core
STATUS | Core
NOTES | Core
LSIB_ID | Extension
ANTECIDS | Extension
PREVIDS | Extension
PARENTID | Extension
PARENTSEG | Extension

These attributes have external data sources that update separately from the LSIB:

ATTRIBUTE NAME | EXTERNAL SOURCE
CC1 | GENC
CC1_GENC3 | GENC
CC1_WPID | World Polygons
COUNTRY1 | DoS Lists
CC2 | GENC
CC2_GENC3 | GENC
CC2_WPID | World Polygons
COUNTRY2 | DoS Lists
LSIB_ID | BASE
ANTECIDS | BASE
PREVIDS | BASE
PARENTID | BASE
PARENTSEG | BASE

The core attributes listed above describe the boundary lines contained within the LSIB dataset. Removal of core attributes from the dataset will change the meaning of the lines. An attribute status of "Extension" represents a field containing data interoperability information. Other attributes not listed above include "FID", "Shape_length" and "Shape". These are components of the shapefile format and do not form an intrinsic part of the LSIB.

Core Attributes
The eight core attributes listed above contain unique information which, when combined with the line geometry, comprise the LSIB dataset. These Core Attributes are further divided into Country Code and Name Fields and Descriptive Fields.

Country Code and Country Name Fields
"CC1" and "CC2" are machine-readable fields that contain political entity codes.
These are two-character codes derived from the Geopolitical Entities, Names, and Codes Standard (GENC), Edition 3 Update 18. The "CC1_GENC3" and "CC2_GENC3" fields contain the corresponding three-character GENC codes and are extension attributes discussed below. The codes "Q2" or "QX2" denote a line in the LSIB representing a boundary associated with areas not contained within the GENC standard. The "COUNTRY1" and "COUNTRY2" fields contain the names of corresponding political entities. These fields contain names approved by the U.S. Board on Geographic Names (BGN) as incorporated in the "Independent States in the World" and "Dependencies and Areas of Special Sovereignty" lists maintained by the Department of State. To ensure maximum compatibility, names are presented without diacritics and certain names are rendered using common cartographic abbreviations. Names for lines associated with the code "Q2" are descriptive and not necessarily BGN-approved. Names rendered in all CAPITAL LETTERS denote independent states. Names rendered in normal text represent dependencies, areas of special sovereignty, or are otherwise presented for the convenience of the user.

Descriptive Fields
The following text fields are a part of the core attributes of the LSIB dataset and do not update from external sources. They provide additional information about each of the lines and are as follows:

ATTRIBUTE NAME | CONTAINS NULLS
RANK | No
STATUS | No
LABEL | Yes
NOTES | Yes

Neither the "RANK" nor "STATUS" fields contain null values; the "LABEL" and "NOTES" fields do. The "RANK" field is a numeric expression of the "STATUS" field. Combined with the line geometry, these fields encode the views of the United States Government on the political status of the boundary line.
The "RANK" and "STATUS" values correspond as follows:

RANK | STATUS
1 | International Boundary
2 | Other Line of International Separation
3 | Special Line

A value of "1" in the "RANK" field corresponds to an "International Boundary" value in the "STATUS" field. Values of "2" and "3" correspond to "Other Line of International Separation" and "Special Line," respectively. The "LABEL" field contains required text to describe the line segment on all finished cartographic products, including but not limited to print and interactive maps. The "NOTES" field contains an explanation of special circumstances modifying the lines. This information can pertain to the origins of the boundary lines, limitations regarding the purpose of the lines, or the original source of the line.

Use of Core Attributes in Cartographic Visualization
Several of the Core Attributes provide information required for the proper cartographic representation of the LSIB dataset. The cartographic usage of the LSIB requires a visual differentiation between the three categories of boundary lines. Specifically, this differentiation must be between: International Boundaries (Rank 1); Other Lines of International Separation (Rank 2); and Special Lines (Rank 3). Rank 1 lines must be the most visually prominent. Rank 2 lines must be less visually prominent than Rank 1 lines. Rank 3 lines must be shown in a manner visually subordinate to Ranks 1 and 2. Where scale permits, Rank 2 and 3 lines must be labeled in accordance with the "Label" field. Data marked with a Rank 2 or 3 designation does not necessarily correspond to a disputed boundary. Please consult the style files in the download package for examples of this depiction. The requirement to incorporate the contents of the "LABEL" field on cartographic products is scale dependent. If a label is legible at the scale of a given static product, a proper use of this dataset would encourage the application of that label.
Using the contents of the "COUNTRY1" and "COUNTRY2" fields in the generation of a line segment label is not required. The "STATUS" field contains the preferred description for the three LSIB line types when they are incorporated into a map legend but is otherwise not to be used for labeling. Use of the "CC1," "CC1_GENC3," "CC2," "CC2_GENC3," "RANK," or "NOTES" fields for cartographic labeling purposes is prohibited.

Extension Attributes
Certain elements of the attributes within the LSIB dataset extend data functionality to make the data more interoperable or to provide clearer linkages to other datasets. The fields "CC1_GENC3" and "CC2_GENC3" contain the three-character GENC codes corresponding to the "CC1" and "CC2" attributes. The code "QX2" is the three-character counterpart of the code "Q2," which denotes a line in the LSIB representing a boundary associated with a geographic area not contained within the GENC standard. To allow for linkage between individual lines in the LSIB and the World Polygons dataset, the "CC1_WPID" and "CC2_WPID" fields contain a Universally Unique Identifier (UUID), version 4, which provides a stable description of each geographic entity in a boundary pair relationship. Each UUID corresponds to a geographic entity listed in the World Polygons dataset. Five additional fields in the LSIB expand on the UUID concept and either describe features that have changed across space and time or indicate relationships between previous versions of the feature. The "LSIB_ID" attribute is a UUID value that defines a specific instance of a feature. Any change to the feature in a lineset requires a new "LSIB_ID." The "ANTECIDS," or antecedent ID, is a UUID that references line geometries from which a given line is descended in time. It is used when there is a feature that is entirely new, not when there is a new version of a previous feature.
This is generally used to reference countries that have dissolved. The "PREVIDS," or Previous ID, is a UUID field that contains old versions of a line. This is an additive field that houses all previous IDs. A new version of a feature is defined by any change to the
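The RANK/STATUS encoding described above lends itself to a small lookup table when styling the lines. A minimal Python sketch: the STATUS labels come from the documentation above, while the line weights are hypothetical choices that merely respect the required visual hierarchy (Rank 1 most prominent, Rank 3 most subordinate):

```python
# RANK -> STATUS mapping as documented for the LSIB core attributes
RANK_STATUS = {
    1: "International Boundary",
    2: "Other Line of International Separation",
    3: "Special Line",
}

# Hypothetical style table: the actual style files ship with the
# LSIB download package; these weights just encode "1 > 2 > 3".
RANK_LINE_WEIGHT = {1: 1.2, 2: 0.8, 3: 0.5}

def describe(rank):
    """Return the legend label and a line weight for an LSIB RANK value."""
    if rank not in RANK_STATUS:
        raise ValueError(f"Unknown LSIB RANK: {rank}")
    return RANK_STATUS[rank], RANK_LINE_WEIGHT[rank]
```

In a real renderer the weight would feed a symbolizer, with the legend text drawn from the "STATUS" field as the documentation requires.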
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract INTRODUCTION: This cross-sectional study analyzed the spatial distribution of hepatitis B or C virus (HBV/HCV) and schistosomiasis coinfection. METHODS: Serum samples were collected from patients with Schistosoma mansoni infection. These were tested for serological markers of HBV/HCV infection. The spatial distribution of coinfection was analyzed using intensity kernel estimation. RESULTS: Overall, 9.4% of individuals had contact with HBV and 1.7% of samples tested positive for anti-HCV antibodies. We identified clusters of risk located in the central region. CONCLUSIONS: Spatial analysis allowed visualization of high-risk areas, leading to a definition of priority areas to be targeted for intensification of control interventions.
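To make the kernel-intensity method concrete, here is a minimal sketch of the idea with a Gaussian kernel and toy coordinates. This is an illustration of the general technique, not the study's actual implementation; the bandwidth and all data are hypothetical:

```python
import numpy as np

def kernel_intensity(points, grid, bandwidth):
    """Gaussian kernel intensity estimate at each grid location.

    points:    (n, 2) array of case coordinates
    grid:      (m, 2) array of evaluation locations
    bandwidth: kernel standard deviation, in coordinate units
    """
    # Squared distance from every grid cell to every case point
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    # Sum of 2-D Gaussian kernels centred on the cases
    k = np.exp(-d2 / (2 * bandwidth**2)) / (2 * np.pi * bandwidth**2)
    return k.sum(axis=1)

# Toy data: a cluster of three cases near (0, 0)
cases = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
grid = np.array([[0.0, 0.0], [5.0, 5.0]])
intensity = kernel_intensity(cases, grid, bandwidth=0.5)
# The cell inside the cluster receives far higher intensity than the distant one
```

Mapping such an intensity surface over a study area is what lets high-risk clusters stand out visually.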
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Node of the Institute of Statistics and Cartography of Andalusia (IECA), Regional Government of Andalusia. WMS 1.3.0 service conforming to the INSPIRE profile of ISO 19128:2005 for the Geographic Code of Andalusia, integrated into the Spatial Data Infrastructure of Andalusia following the guidelines of the Statistical and Cartographic System of Andalusia (SECA). Layer names and styles follow the INSPIRE Directive 2007/2/EC, and each layer is associated with a visualisation style defined by the IECA. The Geographical Nomenclature of Andalusia (NGA) is an IECA project started in 2004 with the Database of Toponyms 1:10,000 (BTA10), which contains more than 232,000 toponyms and geographical identifiers classified thematically into administrative areas, population entities, hydrography, orography, heritage, infrastructure, industrial and extractive activities, and services and equipment. These place names have been georeferenced with point geometry and can be consulted through different reference systems. Its starting source is the toponymy included in the Topographic Map of Andalusia 1:10,000 (Digital Vectorial v1: 1998-2003), the basic cartography of the Autonomous Community, to which other sources are being incorporated to complete and/or normalise the toponymy of certain entity types. The data model, based on the Spanish Nomenclature Model v1.2 recommended by the IDEE Working Group, was adapted in 2012 to the INSPIRE regulations and specifications on geographical names. The NGA is a dynamic register that is continuously updated and made available to the CEAS and to society through four synchronised, public and free web services: the Geographic Name Finder and the interoperable NGA Download (WFS, WFS-INSPIRE) and Display (WMS) services. Its purpose is to serve as a standardised reference for the toponymy of Andalusia.
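Services like this one are queried through standard OGC WMS requests. A minimal sketch of composing a WMS 1.3.0 GetMap URL follows; the endpoint and layer name are hypothetical placeholders, not the actual IECA service addresses:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, crs="EPSG:4326",
                   size=(800, 600), fmt="image/png"):
    """Compose a WMS 1.3.0 GetMap request URL.

    Note: in WMS 1.3.0 the BBOX axis order follows the CRS definition
    (lat,lon for EPSG:4326), unlike the lon,lat order of WMS 1.1.1.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": fmt,
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical endpoint and layer name, for illustration only
url = wms_getmap_url("https://example.org/wms", "nga:toponyms",
                     bbox=(36.0, -7.6, 38.8, -1.6))
```

The resulting URL can be opened in a browser or consumed by any WMS-capable client.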
The ECMWF Re-Analysis (ERA40) is a global atmospheric analysis of many conventional observations and satellite data streams for the period September 1957 to August 2002. There are numerous data products that are separated into dataset series based on resolution, vertical coordinate reference, and likely research applications. Descriptions of the series organization and direct links to information about all ERA-40 products are available [https://rda.ucar.edu/cgi-bin/joey/era40sum.pl?ds=ds120.0]. This dataset contains monthly means of the ERA-40 2.5 degree latitude-longitude gridded surface and single-level analysis. Monthly means have been computed for each analysis hour as well as a daily mean. The ERA-Interim data from ECMWF is an update to the ERA-40 project. The ERA-Interim data starts in 1989 and has a higher horizontal resolution (T255, N128, nominally 0.703125 degrees) than the ERA-40 data (T159, N80, nominally 1.125 degrees). ERA-Interim is based on a more current model than ERA-40 and uses 4DVAR (as opposed to 3DVAR in ERA-40). ECMWF will continue to run the ERA-Interim model in near real time through at least 2010, and possibly longer. This data is available in ds627.0 [https://rda.ucar.edu/datasets/ds627.0/].
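The nominal resolutions quoted above are consistent with the usual Gaussian-grid convention of 4N longitudes around a full 360-degree circle; a quick arithmetic check (a sketch of the convention, not an official ECMWF formula):

```python
def nominal_resolution_deg(n):
    """Nominal grid spacing in degrees for a Gaussian grid 'N<n>',
    assuming 4*n longitude points around a 360-degree circle."""
    return 360.0 / (4 * n)

# N128 (ERA-Interim): 360 / 512 = 0.703125 degrees
# N80  (ERA-40):      360 / 320 = 1.125 degrees
era_interim = nominal_resolution_deg(128)
era40 = nominal_resolution_deg(80)
```

Both values match the figures given in the dataset description exactly.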
This layer includes Landsat 8 and 9 imagery rendered on-the-fly as NDVI Colorized for use in visualization and analysis. This layer is time enabled and includes a number of band combinations and indices rendered on demand. The imagery includes eight multispectral bands from the Operational Land Imager (OLI) and two bands from the Thermal Infrared Sensor (TIRS). It is updated daily with new imagery directly sourced from the USGS Landsat collection on AWS.

Geographic Coverage
Global land surface. Polar regions are available in polar-projected Imagery Layers: Landsat Arctic Views and Landsat Antarctic Views.

Temporal Coverage
This layer is updated daily with new imagery. Working in tandem, Landsat 8 and 9 revisit each point on Earth's land surface every 8 days. Most images collected from January 2015 to present are included. Approximately 5 images for each path/row from 2013 and 2014 are also included.

Product Level
The Landsat 8 and 9 imagery in this layer is comprised of Collection 2 Level-1 data. The imagery has Top of Atmosphere (TOA) correction applied. TOA is applied using the radiometric rescaling coefficients provided by the USGS. The TOA reflectance values (ranging 0 to 1 by default) are scaled using a range of 0 to 10,000.

Image Selection/Filtering
A number of fields are available for filtering, including Acquisition Date, Estimated Cloud Cover, and Product ID. To isolate and work with specific images, either use the "Image Filter" to create custom layers or add a "Query Filter" to restrict the default layer display to a specified image or group of images.

Visual Rendering
Default rendering is NDVI Colorized, calculated as (b5 - b4) / (b5 + b4) with a colormap applied. Raster Functions enable on-the-fly rendering of band combinations and calculated indices from the source imagery. The DRA version of each layer enables visualization of the full dynamic range of the images. Other pre-defined Raster Functions can be selected via the renderer drop-down, or custom functions can be created. This layer is part of a larger collection of Landsat Imagery Layers that you can use to perform a variety of mapping analysis tasks. Pre-defined functions: Natural Color with DRA, Agriculture with DRA, Geology with DRA, Color Infrared with DRA, Bathymetric with DRA, Short-wave Infrared with DRA, Normalized Difference Moisture Index Colorized, NDVI Raw, NDVI Colorized, NBR Raw. 15 meter Landsat Imagery Layers are also available: Panchromatic and Pansharpened.

Multispectral Bands
The table below lists all available multispectral OLI bands. NDVI Colorized consumes bands 4 and 5.

Band | Description | Wavelength (Όm) | Spatial Resolution (m)
1 | Coastal aerosol | 0.43 - 0.45 | 30
2 | Blue | 0.45 - 0.51 | 30
3 | Green | 0.53 - 0.59 | 30
4 | Red | 0.64 - 0.67 | 30
5 | Near Infrared (NIR) | 0.85 - 0.88 | 30
6 | SWIR 1 | 1.57 - 1.65 | 30
7 | SWIR 2 | 2.11 - 2.29 | 30
8 | Cirrus (in OLI this is band 9) | 1.36 - 1.38 | 30
9 | QA Band (available with Collection 1)* | NA | 30

*More about the Quality Assessment Band

TIRS Bands

Band | Description | Wavelength (Όm) | Spatial Resolution (m)
10 | TIRS1 | 10.60 - 11.19 | 100* (30)
11 | TIRS2 | 11.50 - 12.51 | 100* (30)

*TIRS bands are acquired at 100 meter resolution, but are resampled to 30 meter in the delivered data product.

Additional Usage Notes
Image exports are limited to 4,000 columns x 4,000 rows per request. This dynamic imagery layer can be used in Web Maps and ArcGIS Pro as well as web and mobile applications using the ArcGIS REST APIs. WCS and WMS compatibility means this imagery layer can be consumed as WCS or WMS services. The Landsat Explorer App is another way to access and explore the imagery. This layer is part of a larger collection of Landsat Imagery Layers.

Data Source
Landsat imagery is sourced from the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA). Data is hosted by Amazon Web Services as part of their Public Data Sets program. For information, see Landsat 8 and Landsat 9.
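The NDVI formula quoted above, (b5 - b4) / (b5 + b4), is straightforward to compute yourself from the red and NIR bands. A minimal sketch with NumPy (the sample pixel values are made up):

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red); for Landsat 8/9 OLI
    this is (b5 - b4) / (b5 + b4)."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

# The layer stores TOA reflectance scaled to 0-10,000; because NDVI is a
# ratio, the scaled integers can be used directly without rescaling to 0-1.
red = np.array([2000.0, 4000.0])
nir = np.array([6000.0, 4000.0])
values = ndvi(red, nir)  # 0.5 for vigorous vegetation, 0.0 where NIR == Red
```

Values near +1 indicate dense green vegetation; values near 0 or below indicate bare ground, water, or clouds.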
This layer includes Landsat 8 and 9 imagery rendered on-the-fly as Bathymetric with DRA for use in visualization and analysis. This layer is time enabled and includes a number of band combinations and indices rendered on demand. The imagery includes eight multispectral bands from the Operational Land Imager (OLI) and two bands from the Thermal Infrared Sensor (TIRS). It is updated daily with new imagery directly sourced from the USGS Landsat collection on AWS.

Geographic Coverage
Global land surface. Polar regions are available in polar-projected Imagery Layers: Landsat Arctic Views and Landsat Antarctic Views.

Temporal Coverage
This layer is updated daily with new imagery. Working in tandem, Landsat 8 and 9 revisit each point on Earth's land surface every 8 days. Most images collected from January 2015 to present are included. Approximately 5 images for each path/row from 2013 and 2014 are also included.

Product Level
The Landsat 8 and 9 imagery in this layer is comprised of Collection 2 Level-1 data. The imagery has Top of Atmosphere (TOA) correction applied. TOA is applied using the radiometric rescaling coefficients provided by the USGS. The TOA reflectance values (ranging 0 to 1 by default) are scaled using a range of 0 to 10,000.

Image Selection/Filtering
A number of fields are available for filtering, including Acquisition Date, Estimated Cloud Cover, and Product ID. To isolate and work with specific images, either use the "Image Filter" to create custom layers or add a "Query Filter" to restrict the default layer display to a specified image or group of images.

Visual Rendering
Default rendering is Bathymetric (bands 4,3,1) with Dynamic Range Adjustment (DRA), useful in bathymetric mapping applications. Raster Functions enable on-the-fly rendering of band combinations and calculated indices from the source imagery. The DRA version of each layer enables visualization of the full dynamic range of the images. Other pre-defined Raster Functions can be selected via the renderer drop-down, or custom functions can be created. This layer is part of a larger collection of Landsat Imagery Layers that you can use to perform a variety of mapping analysis tasks. Pre-defined functions: Natural Color with DRA, Agriculture with DRA, Geology with DRA, Color Infrared with DRA, Bathymetric with DRA, Short-wave Infrared with DRA, Normalized Difference Moisture Index Colorized, NDVI Raw, NDVI Colorized, NBR Raw. 15 meter Landsat Imagery Layers are also available: Panchromatic and Pansharpened.

Multispectral Bands
The table below lists all available multispectral OLI bands. Bathymetric with DRA consumes bands 4, 3, and 1.

Band | Description | Wavelength (Όm) | Spatial Resolution (m)
1 | Coastal aerosol | 0.43 - 0.45 | 30
2 | Blue | 0.45 - 0.51 | 30
3 | Green | 0.53 - 0.59 | 30
4 | Red | 0.64 - 0.67 | 30
5 | Near Infrared (NIR) | 0.85 - 0.88 | 30
6 | SWIR 1 | 1.57 - 1.65 | 30
7 | SWIR 2 | 2.11 - 2.29 | 30
8 | Cirrus (in OLI this is band 9) | 1.36 - 1.38 | 30
9 | QA Band (available with Collection 1)* | NA | 30

*More about the Quality Assessment Band

TIRS Bands

Band | Description | Wavelength (Όm) | Spatial Resolution (m)
10 | TIRS1 | 10.60 - 11.19 | 100* (30)
11 | TIRS2 | 11.50 - 12.51 | 100* (30)

*TIRS bands are acquired at 100 meter resolution, but are resampled to 30 meter in the delivered data product.

Additional Usage Notes
Image exports are limited to 4,000 columns x 4,000 rows per request. This dynamic imagery layer can be used in Web Maps and ArcGIS Pro as well as web and mobile applications using the ArcGIS REST APIs. WCS and WMS compatibility means this imagery layer can be consumed as WCS or WMS services. The Landsat Explorer App is another way to access and explore the imagery. This layer is part of a larger collection of Landsat Imagery Layers.

Data Source
Landsat imagery is sourced from the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA). Data is hosted by Amazon Web Services as part of their Public Data Sets program. For information, see Landsat 8 and Landsat 9.
This layer includes Landsat GLS, Landsat 8, and Landsat 9 imagery for use in visualization and analysis. This layer is time enabled and includes a number of band combinations and indices rendered on demand. The Landsat 8 and 9 imagery includes nine multispectral bands from the Operational Land Imager (OLI) and two bands from the Thermal Infrared Sensor (TIRS). It is updated daily with new imagery directly sourced from the USGS Landsat collection on AWS.

Geographic Coverage
Global land surface. Polar regions are available in polar-projected Imagery Layers: Landsat Arctic Views and Landsat Antarctic Views.

Temporal Coverage
This layer is updated daily with new imagery. Together, Landsat 8 and 9 revisit each point on Earth's land surface every 8 days. Most images collected from January 2015 to present are included. Approximately 5 images for each path/row from 2013 and 2014 are also included. This layer also includes imagery from the Global Land Survey* (circa 2010, 2005, 2000, 1990, 1975).

Product Level
The Landsat 8 and 9 imagery in this layer is comprised of Collection 2 Level-1 data. The imagery has Top of Atmosphere (TOA) correction applied. TOA is applied using the radiometric rescaling coefficients provided by the USGS. The TOA reflectance values (ranging 0 to 1 by default) are scaled using a range of 0 to 10,000.

Image Selection/Filtering
A number of fields are available for filtering, including Acquisition Date, Estimated Cloud Cover, and Product ID. To isolate and work with specific images, either use the "Image Filter" to create custom layers or add a "Layer Filter" to restrict the default layer display to a specified image or group of images. To isolate a specific mission, use the Layer Filter and the dataset_id or SensorName fields.

Visual Rendering
The default rendering in this layer is Agriculture (bands 6,5,2) with Dynamic Range Adjustment (DRA). Brighter green indicates more vigorous vegetation. The DRA version of each layer enables visualization of the full dynamic range of the images. Rendering (or display) of band combinations and calculated indices is done on-the-fly from the source images via Raster Functions. Various pre-defined Raster Functions can be selected, or custom functions can be created. Pre-defined functions: Natural Color with DRA, Agriculture with DRA, Geology with DRA, Color Infrared with DRA, Bathymetric with DRA, Short-wave Infrared with DRA, Normalized Difference Moisture Index Colorized, NDVI Raw, NDVI Colorized, NBR Raw. 15 meter Landsat Imagery Layers are also available: Panchromatic and Pansharpened.

Multispectral Bands

Band | Description | Wavelength (Όm) | Spatial Resolution (m)
1 | Coastal aerosol | 0.43 - 0.45 | 30
2 | Blue | 0.45 - 0.51 | 30
3 | Green | 0.53 - 0.59 | 30
4 | Red | 0.64 - 0.67 | 30
5 | Near Infrared (NIR) | 0.85 - 0.88 | 30
6 | SWIR 1 | 1.57 - 1.65 | 30
7 | SWIR 2 | 2.11 - 2.29 | 30
8 | Cirrus (in OLI this is band 9) | 1.36 - 1.38 | 30
9 | QA Band (available with Collection 1)* | NA | 30

*More about the Quality Assessment Band

TIRS Bands

Band | Description | Wavelength (Όm) | Spatial Resolution (m)
10 | TIRS1 | 10.60 - 11.19 | 100* (30)
11 | TIRS2 | 11.50 - 12.51 | 100* (30)

*TIRS bands are acquired at 100 meter resolution, but are resampled to 30 meter in the delivered data product.

Additional Usage Notes
Image exports are limited to 4,000 columns x 4,000 rows per request. This dynamic imagery layer can be used in Web Maps and ArcGIS Pro as well as web and mobile applications using the ArcGIS REST APIs. WCS and WMS compatibility means this imagery layer can be consumed as WCS or WMS services. The Landsat Explorer App is another way to access and explore the imagery.

Data Source
Landsat imagery is sourced from the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA). Data is hosted in Amazon Web Services as part of their Public Data Sets program. For information, see Landsat 8 and Landsat 9.

*The Global Land Survey includes images from Landsat 1 through Landsat 7. Band numbers and band combinations differ from those of Landsat 8, but have been mapped to the most appropriate band as in the above table. For more information about the Global Land Survey, visit GLS.
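The Dynamic Range Adjustment (DRA) mentioned throughout these layers can be approximated by a per-scene percentile stretch of the selected bands. A hedged sketch of that idea with NumPy; the 2/98 percentile cut-offs are hypothetical choices, not the service's actual algorithm:

```python
import numpy as np

def dra_stretch(band, low_pct=2, high_pct=98):
    """Rescale a band to 0-255 between its low/high percentiles,
    clipping outliers: one common form of dynamic range adjustment."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    stretched = np.clip((band - lo) / (hi - lo), 0, 1)
    return (stretched * 255).astype(np.uint8)

def agriculture_composite(b6, b5, b2):
    """Agriculture rendering uses bands 6, 5, 2 (SWIR 1, NIR, Blue)
    as the R, G, B channels of the displayed image."""
    return np.dstack([dra_stretch(b) for b in (b6, b5, b2)])
```

Because NIR drives the green channel, healthy vegetation appears bright green in this composite, consistent with the description above.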