Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this seminar, you will learn about the spatial analysis tools built directly into the ArcGIS.com map viewer. You will learn about the spatial analysis capabilities in ArcGIS Online for Organizations, whether for analyzing your own data, data that is publicly available on ArcGIS Online, or a combination of both. You will learn the overall features and benefits of ArcGIS Online analysis, how to get started, and how to choose the right approach to solve a specific spatial problem.
Have you ever wanted to create your own maps, or integrate and visualize spatial datasets to examine changes in trends between locations and over time? Follow along with these training tutorials on QGIS, an open-source geographic information system (GIS), and learn key concepts, procedures, and skills for performing common GIS tasks, such as creating maps and joining, overlaying, and visualizing spatial datasets. These tutorials are geared towards new GIS users. We'll start with foundational concepts and build towards more advanced topics throughout, demonstrating how, with a few relatively easy steps, you can get quite a lot out of GIS. You can then extend these skills to datasets of thematic relevance to you in addressing tasks faced in your day-to-day work.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this course, you will explore a variety of open-source technologies for working with geospatial data, performing spatial analysis, and undertaking general data science. The first component of the class focuses on the use of QGIS and associated technologies (GDAL, PROJ, GRASS, SAGA, and Orfeo Toolbox). The second component of the class introduces Python and associated open-source libraries and modules (NumPy, Pandas, Matplotlib, Seaborn, GeoPandas, Rasterio, WhiteboxTools, and Scikit-Learn) used by geospatial scientists and data scientists. We also provide an introduction to Structured Query Language (SQL) for performing table and spatial queries. This course is designed for individuals who have a background in GIS, such as working in the ArcGIS environment, but no prior experience using open-source software or coding. You will be asked to work through a series of lecture modules and videos broken into several topic areas, as outlined below. Fourteen assignments and the required data have been provided as hands-on opportunities to work with data and the discussed technologies and methods. If you have any questions or suggestions, feel free to contact us. We hope to continue to update and improve this course.

This course was produced by West Virginia View (http://www.wvview.org/) with support from AmericaView (https://americaview.org/). This material is based upon work supported by the U.S. Geological Survey under Grant/Cooperative Agreement No. G18AP00077. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the opinions or policies of the U.S. Geological Survey. Mention of trade names or commercial products does not constitute their endorsement by the U.S. Geological Survey.

After completing this course you will be able to:
apply QGIS to visualize, query, and analyze vector and raster spatial data.
use available resources to further expand your knowledge of open-source technologies.
describe and use a variety of open data formats.
code in Python at an intermediate level.
read, summarize, visualize, and analyze data using open Python libraries.
create spatial predictive models using Python and associated libraries.
use SQL to perform table and spatial queries at an intermediate level.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Discover a simple method and approach that guides everything you do with spatial analysis. Learn best practices, explore case studies, and get workflows to help you more successfully analyze your data.
Our dataset delivers unprecedented scale and diversity for geospatial AI training:
Massive scale: 125,000 unique 3D map sequences and locations, 57,500,000 images, and 35 TB of data, orders of magnitude larger than datasets currently used for SOTA vision/spatial models.
Constantly growing dataset: 12k new 3D map sequences and locations added monthly.
Full-frame, high-res captures: OVER retains full-resolution, dynamic-aspect-ratio images with complete Exif metadata (GPS, timestamp, device orientation), multiple resolutions from 1920x1080 to 3840x2880, and pre-computed COLMAP poses.
Global diversity: Environments span urban, suburban, rural, and natural settings across 120+ countries, capturing architectural, infrastructural, and environmental variety.
Rich metadata: Per-image geolocation (±3 m accuracy), timestamps, device pose, and COLMAP pose; per-map calibration data (camera intrinsics/extrinsics).
Applications: Spatial model training, multi-view stereo and NeRF/3DGS training, semantic segmentation, novel view synthesis, 3D object detection, geolocation, urban planning, AR/VR, and autonomous navigation.
https://www.marketreportanalytics.com/privacy-policy
The spatial analysis software market is experiencing robust growth, driven by increasing adoption across diverse sectors. The market's value is estimated at $5 billion in 2025, demonstrating significant expansion from the historical period (2019-2024). A compound annual growth rate (CAGR) of 15% is projected from 2025 to 2033, indicating substantial market expansion to an estimated $15 billion by 2033. Key drivers include the rising need for location intelligence in business decision-making, the increasing availability of geospatial data, and advancements in cloud computing and artificial intelligence (AI) that enhance spatial analysis capabilities. Furthermore, the integration of spatial analysis with other technologies, such as big data analytics and machine learning, is fostering innovation and expanding applications across various industries. The market is segmented by application (e.g., urban planning, environmental monitoring, transportation logistics) and by software type (e.g., GIS software, remote sensing software, spatial statistics software). Leading companies are continuously investing in research and development, leading to the emergence of more sophisticated and user-friendly solutions.

Market restraints include the high cost of software licenses and implementation, the complexity of advanced spatial analysis tools, and the shortage of skilled professionals capable of effectively leveraging these technologies. However, the expanding availability of open-source spatial analysis tools and online training programs is gradually mitigating these barriers.

The regional breakdown shows strong growth across North America and Europe, fueled by significant technological advancements and substantial public- and private-sector investments. The Asia-Pacific region is also poised for significant expansion, driven by rapid urbanization and economic growth. The consistent growth across segments and regions supports long-term market stability and offers significant opportunities for both established players and new entrants. The continued convergence of spatial analysis with other technologies will remain a central theme, driving innovation and unlocking further value across numerous sectors.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This seminar is an applied study of deep learning methods for extracting information from geospatial data, such as aerial imagery, multispectral imagery, digital terrain data, and other digital cartographic representations. We first provide an introduction and conceptualization of artificial neural networks (ANNs). Next, we explore appropriate loss and assessment metrics for different use cases, followed by the tensor data model, which is central to applying deep learning methods. Convolutional neural networks (CNNs) are then conceptualized with scene classification use cases. Lastly, we explore semantic segmentation, object detection, and instance segmentation. The primary focus of this course is semantic segmentation for pixel-level classification.

The associated GitHub repo provides a series of applied examples. We hope to continue to add examples as methods and technologies further develop. These examples make use of a variety of datasets (e.g., SAT-6, topoDL, Inria, LandCover.ai, vfillDL, and wvlcDL). Please see the repo for links to the data and associated papers. All examples have associated videos that walk through the process, which are also linked to the repo. A variety of deep learning architectures are explored, including UNet, UNet++, DeepLabv3+, and Mask R-CNN. Currently, two examples use ArcGIS Pro and require no coding. The remaining five examples require coding and make use of PyTorch, Python, and R within the RStudio IDE. It is assumed that you have prior knowledge of coding in the Python and R environments. If you do not have experience coding, please take a look at our Open-Source GIScience and Open-Source Spatial Analytics (R) courses, which explore coding in Python and R, respectively.

After completing this seminar you will be able to:
explain how ANNs work, including weights, bias, activation, and optimization.
describe and explain different loss and assessment metrics and determine appropriate use cases.
use the tensor data model to represent data as input for deep learning.
explain how CNNs work, including convolutional operations/layers, kernel size, stride, padding, max pooling, activation, and batch normalization.
use PyTorch, Python, and R to prepare data, produce and assess scene classification models, and infer to new data.
explain common semantic segmentation architectures, how these methods allow for pixel-level classification, and how they differ from traditional CNNs.
use PyTorch, Python, and R (or ArcGIS Pro) to prepare data, produce and assess semantic segmentation models, and infer to new data.
Explore how the six categories of spatial analysis can help you answer geographic questions. Navigate these questions using the spatial analysis workflow and learn how to apply it to your own projects.

Goals:
Follow steps in the analysis workflow to solve a spatial problem.
Describe the types of questions that can be answered using spatial analysis.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The contents of this fileset come from a Spatial Analysis in R workshop run on 12th July 2013 by the British Ecological Society (BES) Macroecology and Computational Ecology special interest groups. The PDF includes links to workshop materials and further useful information. The text files contain extra example code for macroecological analysis.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
QGIS (originally Quantum GIS) is a user-friendly, open-source GIS software package. This workshop will introduce the interface and exhibit a small portion of the spatial analysis techniques QGIS offers, to familiarize you with some of the basics and to illustrate the fundamentals of GIS. The workshop is targeted at beginners. Attendees will have hands-on practice applying the spatial analysis tools and will create a map as a final output.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the last decade, a plethora of algorithms have been developed for spatial ecology studies. In our case, we use some of these codes for underwater research work in applied ecology analysis of threatened endemic fishes and their natural habitat. For this, we developed codes in the RStudio® script environment to run spatial and statistical analyses for ecological response and spatial distribution models (e.g., Hijmans & Elith, 2017; Den Burg et al., 2020). The employed R packages are as follows: caret (Kuhn et al., 2020), corrplot (Wei & Simko, 2017), devtools (Wickham, 2015), dismo (Hijmans & Elith, 2017), gbm (Freund & Schapire, 1997; Friedman, 2002), ggplot2 (Wickham et al., 2019), lattice (Sarkar, 2008), lattice (Musa & Mansor, 2021), maptools (Hijmans & Elith, 2017), ModelMetrics (Hvitfeldt & Silge, 2021), pander (Wickham, 2015), plyr (Wickham & Wickham, 2015), pROC (Robin et al., 2011), raster (Hijmans & Elith, 2017), RColorBrewer (Neuwirth, 2014), Rcpp (Eddelbeuttel & Balamura, 2018), rgdal (Verzani, 2011), sdm (Naimi & Araujo, 2016), sf (e.g., Zainuddin, 2023), sp (Pebesma, 2020) and usethis (Gladstone, 2022).
It is important to follow all the codes in order to obtain results from the ecological response and spatial distribution models. In particular, for the ecological scenario we selected the Generalized Linear Model (GLM), and for the geographic scenario we selected DOMAIN, also known as Gower's metric (Carpenter et al., 1993). We selected this regression method and this distance-similarity metric because of their adequacy and robustness for studies with endemic or threatened species (e.g., Naoki et al., 2006). Next, we explain the statistical parameterization for the GLM and DOMAIN codes:
In the first instance, we generated the background points and extracted the values of the variables (Code2_Extract_values_DWp_SC.R). Barbet-Massin et al. (2012) recommend the use of 10,000 background points when using regression methods (e.g., Generalized Linear Model) or distance-based models (e.g., DOMAIN). However, we considered factors such as the extent of the area and the type of study species to be important for the correct selection of the number of points (Pers. Obs.). Then, we extracted the values of the predictor variables (e.g., bioclimatic, topographic, demographic, habitat) as a function of the presence and background points (e.g., Hijmans and Elith, 2017).
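A minimal sketch of this step, assuming the dismo and raster packages; the file and column names are placeholders, not the authors' published Code2 script:

```r
# Sketch: generate background points and extract predictor values
# (hypothetical inputs; Code2_Extract_values_DWp_SC.R is the real script).
library(dismo)   # randomPoints()
library(raster)  # stack(), extract()

predictors <- stack(list.files("predictors", pattern = "\\.tif$", full.names = TRUE))
presence   <- read.csv("presence_points.csv")   # assumed columns: lon, lat

# 10,000 background points per Barbet-Massin et al. (2012); adjust for the
# study-area extent and the focal species, as noted above.
bg <- randomPoints(predictors, n = 10000)

pres_vals <- extract(predictors, presence[, c("lon", "lat")])
bg_vals   <- extract(predictors, bg)

# One data frame with a 1/0 presence/background response column.
sdm_data <- data.frame(pa = c(rep(1, nrow(pres_vals)), rep(0, nrow(bg_vals))),
                       rbind(pres_vals, bg_vals))
```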
Subsequently, we subdivided both the presence and background point groups into 75% training data and 25% test data, following the method of Soberón & Nakamura (2009) and Hijmans & Elith (2017). For training control, the 10-fold cross-validation method was selected, where the response variable presence is assigned as a factor. If some other variable is important for the study species, it should also be assigned as a factor (Kim, 2009).
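A minimal sketch of the split and training control using caret, continuing from the hypothetical sdm_data frame above; the seed and exact calls are assumptions, not the authors' code:

```r
# Sketch: 75/25 split of presences and backgrounds plus a 10-fold
# cross-validation training control with caret.
library(caret)

set.seed(42)                                          # arbitrary, for reproducibility
sdm_data$pa <- factor(sdm_data$pa, levels = c(0, 1))  # response as a factor

idx   <- createDataPartition(sdm_data$pa, p = 0.75, list = FALSE)
train <- sdm_data[idx, ]   # 75% training data
test  <- sdm_data[-idx, ]  # 25% test data

ctrl <- trainControl(method = "cv", number = 10)      # 10-fold training control
```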
After that, we ran the code for the GBM method (Gradient Boosting Machine; Code3_GBM_Relative_contribution.R and Code4_Relative_contribution.R), from which we obtained the relative contribution of the variables used in the model. We parameterized the code with a Gaussian distribution and 5,000 iterations (e.g., Friedman, 2002; Kim, 2009; Hijmans and Elith, 2017). In addition, we selected a validation interval of 4 random training points (personal test). The obtained plots were the partial dependence blocks, as a function of each predictor variable.
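A minimal sketch of this step with the gbm package; only the Gaussian distribution and the 5,000 iterations come from the text, and the remaining settings are assumptions:

```r
# Sketch: GBM with a Gaussian distribution and 5,000 iterations, then the
# relative contribution of each predictor and a partial dependence plot.
library(gbm)

train_gbm    <- train
train_gbm$pa <- as.numeric(as.character(train_gbm$pa))  # gaussian gbm needs numeric

gbm_fit <- gbm(pa ~ ., data = train_gbm,
               distribution = "gaussian", n.trees = 5000)

summary(gbm_fit)                          # relative influence of each variable
plot(gbm_fit, i.var = 1, n.trees = 5000)  # partial dependence, first predictor
```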
Subsequently, the correlation of the variables was evaluated by Pearson's method (Code5_Pearson_Correlation.R) to assess multicollinearity between variables (Guisan & Hofer, 2003). It is recommended to use a bivariate correlation threshold of ±0.70 to discard highly correlated variables (e.g., Awan et al., 2021).
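A minimal sketch of the screening with corrplot and caret; the 0.70 cutoff comes from the text, while the helper calls and object names are assumptions:

```r
# Sketch: Pearson correlation matrix and the |r| > 0.70 screening.
library(corrplot)
library(caret)

pred_cols <- setdiff(names(train), "pa")
cor_mat   <- cor(train[, pred_cols], method = "pearson", use = "complete.obs")
corrplot(cor_mat, method = "number")

# Candidate variables to discard at the 0.70 cutoff.
drop_idx <- findCorrelation(cor_mat, cutoff = 0.70)
pred_cols[drop_idx]
```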
Once the above codes were run, we uploaded the same subgroups (i.e., presence and background groups with 75% training and 25% testing) (Code6_Presence&backgrounds.R) for the GLM method code (Code7_GLM_model.R). Here, we first ran the GLM models per variable to obtain the p-significance value of each variable (alpha ≤ 0.05); we selected the value one (i.e., presence) as the likelihood factor. The generated models are of polynomial degree, to obtain linear and quadratic responses (e.g., Fielding and Bell, 1997; Allouche et al., 2006). From these results, we ran ecological response curve models, where the resulting plots include the probability of occurrence and the values for continuous variables or the categories for discrete variables. The points of the presence and background training groups are also included.
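A minimal sketch of one per-variable polynomial GLM and its response curve; the predictor name bio1 is a placeholder, not a variable named in the text:

```r
# Sketch: binomial GLM of polynomial degree 2 (linear + quadratic terms)
# for a single placeholder predictor, plus its ecological response curve.
glm_fit <- glm(pa ~ poly(bio1, 2), family = binomial, data = train)
summary(glm_fit)  # p-values for the linear and quadratic terms (alpha <= 0.05)

# Probability of occurrence across the observed range of the predictor.
newdat      <- data.frame(bio1 = seq(min(train$bio1), max(train$bio1),
                                     length.out = 100))
newdat$prob <- predict(glm_fit, newdata = newdat, type = "response")
plot(newdat$bio1, newdat$prob, type = "l",
     xlab = "bio1", ylab = "Probability of occurrence")
```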
On the other hand, a global GLM was also run, from which the generalized model is evaluated by means of a 2 x 2 contingency matrix including both observed and predicted records. A representation of this is shown in Table 1 (adapted from Allouche et al., 2006). In this process we selected a threshold of 0.5 to obtain better modeling performance and to avoid a high percentage of bias from type I (omission) or type II (commission) errors (e.g., Carpenter et al., 1993; Fielding and Bell, 1997; Allouche et al., 2006; Kim, 2009; Hijmans and Elith, 2017).
Table 1. Example of 2 x 2 contingency matrix for calculating performance metrics for GLM models. A represents true presence records (true positives), B represents false presence records (false positives - error of commission), C represents true background points (true negatives) and D represents false backgrounds (false negatives - errors of omission).
|            | Validation set |       |
| Model      | True           | False |
| Presence   | A              | B     |
| Background | C              | D     |
We then calculated the Overall accuracy and True Skill Statistic (TSS) metrics. The first is used to assess the proportion of correctly predicted cases, while the second assesses the prevalence of correctly predicted cases (Olden and Jackson, 2002). The TSS also gives equal importance to the prevalence of presence prediction and to the correction for random performance (Fielding and Bell, 1997; Allouche et al., 2006).
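A minimal sketch of these metrics, using the letters from Table 1 and the 0.5 threshold noted above; object names continue from the earlier sketches:

```r
# Sketch: confusion counts named as in Table 1, then Overall accuracy and
# TSS (Allouche et al., 2006).
prob <- predict(glm_fit, newdata = test, type = "response")
pred <- as.numeric(prob > 0.5)
obs  <- as.numeric(as.character(test$pa))

A <- sum(pred == 1 & obs == 1)  # true presences
B <- sum(pred == 1 & obs == 0)  # false presences (commission)
C <- sum(pred == 0 & obs == 0)  # true backgrounds
D <- sum(pred == 0 & obs == 1)  # false backgrounds (omission)

overall <- (A + C) / (A + B + C + D)  # proportion correctly predicted
sens    <- A / (A + D)                # sensitivity
spec    <- C / (B + C)                # specificity
tss     <- sens + spec - 1            # True Skill Statistic
```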
The last code (i.e., Code8_DOMAIN_SuitHab_model.R) is for species distribution modelling using the DOMAIN algorithm (Carpenter et al., 1993). Here, we loaded the variable stack and the presence and background groups, each subdivided into 75% training and 25% test data. We only included the presence training subset and the predictor variable stack in the calculation of the DOMAIN metric, as well as in the evaluation and validation of the model.
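A minimal sketch of the DOMAIN fit with dismo, fitted on the presence-training subset only; presence_train_xy is a hypothetical two-column coordinate set:

```r
# Sketch: DOMAIN (Gower's metric) model and a habitat suitability surface.
library(dismo)
library(raster)

dm          <- domain(x = predictors, p = presence_train_xy)
suitability <- predict(dm, predictors)  # suitability surface over the stack
plot(suitability)
```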
Regarding the model evaluation and estimation, we selected the following estimators:
1) Partial ROC, which evaluates the separation between the curves of positive (i.e., correctly predicted presence) and negative (i.e., correctly predicted absence) cases. The farther apart these curves are, the better the model's prediction performance for the correct spatial distribution of the species (Manzanilla-Quiñones, 2020).
2) The ROC/AUC curve for model validation, where an optimal performance threshold is estimated to give an expected confidence of 75% to 99% probability (DeLong et al., 1988). A minimal validation sketch follows below.
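A minimal sketch of the validation step with the pROC package. Partial ROC itself is not implemented in pROC (dedicated packages such as kuenm provide it), so only the standard ROC/AUC estimation is sketched; test_xy and obs_test are hypothetical test coordinates and 0/1 labels:

```r
# Sketch: ROC/AUC validation of the suitability surface on the test set.
library(pROC)
library(raster)

suit_test <- extract(suitability, test_xy)
roc_obj   <- roc(response = obs_test, predictor = suit_test)

auc(roc_obj)             # area under the ROC curve
coords(roc_obj, "best")  # threshold balancing sensitivity and specificity
plot(roc_obj)
```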
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
To achieve true data interoperability is to eliminate format and data model barriers, allowing you to seamlessly access, convert, and model any data, independent of format. The ArcGIS Data Interoperability extension is based on the powerful data transformation capabilities of the Feature Manipulation Engine (FME), giving you the data you want, when and where you want it.

In this course, you will learn how to leverage the ArcGIS Data Interoperability extension within ArcCatalog and ArcMap, enabling you to directly read, translate, and transform spatial data according to your independent needs. In addition to components that allow you to work openly with a multitude of formats, the extension also provides a complex data model solution with a level of control that would otherwise require custom software.

After completing this course, you will be able to:
Recognize when you need to use the Data Interoperability tool to view or edit your data.
Choose and apply the correct method of reading data with the Data Interoperability tool in ArcCatalog and ArcMap.
Choose the correct Data Interoperability tool and be able to use it to convert your data between formats.
Edit a data model, or schema, using the Spatial ETL tool.
Perform any desired transformations on your data's attributes and geometry using the Spatial ETL tool.
Verify your data transformations before, after, and during a translation by inspecting your data.
Apply best practices when creating a workflow using the Data Interoperability extension.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This fileset contains example data and R scripts for a student workshop at the 2013 US-IALE Annual Meeting held in Austin, TX. More information about this workshop is available at https://sites.google.com/site/ialestudentworkshop/
Through the Department of the Interior-Bureau of Indian Affairs Enterprise License Agreement (DOI-BIA ELA) program, BIA employees and employees of federally-recognized Tribes may access a variety of geographic information systems (GIS) online courses and instructor-led training events throughout the year at no cost to them. These online GIS courses and instructor-led training events are hosted by the Branch of Geospatial Support (BOGS) or offered by BOGS in partnership with other organizations and federal agencies. Online courses are self-paced and available year-round, while instructor-led training events have limited capacity and require registration and attendance on specific dates. This dataset does not include any training where the course was not completed by the participant or where the training was cancelled or otherwise not able to be completed. Point locations depict BIA office locations or Tribal Office Headquarters. For completed trainings where a participant location was not provided, a point location may not be available. For more information on the Branch of Geospatial Support geospatial training program, please visit: https://www.bia.gov/service/geospatial-training.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Through the cooperation of the LouisianaView consortium members, and co-sponsored with the local USGS liaison, this annual workshop is offered free to everyone interested in up-to-date information on data availability for the geospatial emergency responder. This 4-day virtual workshop hosts speakers from multiple federal, state, and private response teams, each presenting their data, websites, links, and contacts while also fielding questions live from attendees, proving again and again what a cohesive and informed network of geospatial responders can mean to the inhabitants and economic base of Louisiana, the Gulf of Mexico region, and the Caribbean.
First- and sixth-grade students were pretested and posttested using a variety of mathematics outcome measures. In between the testing sessions, children received one of two kinds of spatial training or participated in a language-training control condition.
This presentation explores methodologies and establishes protocols for developing workbenches for spatial data science in research, teaching, and business applications. The objectives of this workbench are to provide: (1) an easy, efficient, and customizable toolkit for spatial data analysis with newly added nodes; (2) an integration of data, methodology, and applications for spatial data science; (3) workflow-based case studies for teaching and research in spatial social science; and (4) a training base for users with no skills in GIS or advanced methodology.
https://www.archivemarketresearch.com/privacy-policy
The Computer Vision in Geospatial Imagery market is experiencing robust growth, driven by increasing demand for accurate and efficient geospatial data analysis across various sectors. Advancements in artificial intelligence (AI), deep learning, and high-resolution imaging technologies are fueling this expansion. The market's ability to extract valuable insights from aerial and satellite imagery is transforming industries such as agriculture, urban planning, environmental monitoring, and defense. Applications range from precision agriculture using drone imagery for crop health monitoring to autonomous vehicle navigation and infrastructure inspection using high-resolution satellite data. The integration of computer vision with cloud computing platforms facilitates large-scale data processing and analysis, further accelerating market growth. We estimate the 2025 market size to be approximately $2.5 billion, exhibiting a compound annual growth rate (CAGR) of 15% from 2025 to 2033. This growth is expected to continue, driven by increasing adoption of advanced analytics and the need for real-time geospatial intelligence.

Several factors contribute to this positive outlook. The decreasing cost of high-resolution sensors and cloud computing resources is making computer vision solutions more accessible. Furthermore, the growing availability of large datasets for training sophisticated AI models is enhancing the accuracy and performance of computer vision algorithms in analyzing geospatial data. However, challenges remain, including data privacy concerns, the need for robust data security measures, and the complexity of integrating diverse data sources. Nevertheless, the overall market trend remains strongly upward, with significant opportunities for technology providers and users alike. The key players listed (Alteryx, Google, Keyence, and others) are actively shaping this landscape through innovative product development and strategic partnerships.
This file contains the data set used to develop a random forest model to predict background specific conductivity for stream segments in the contiguous United States. This Excel-readable file contains 56 columns of parameters evaluated during development. The data dictionary provides the definitions of the abbreviations and the measurement units. Each row is a unique sample described as R**, which indicates the NHD Hydrologic Unit (underscore), up to a 7-digit COMID, (underscore) sequential sample month. To develop models that make stream-specific predictions across the contiguous United States, we used the StreamCat data set and process (Hill et al. 2016; https://github.com/USEPA/StreamCat). The StreamCat data set is based on a network of stream segments from NHD+ (McKay et al. 2012). These stream segments drain an average area of 3.1 km2 and thus define the spatial grain size of this data set. The data set consists of minimally disturbed sites representing the natural variation in environmental conditions that occur in the contiguous 48 United States. More than 2.4 million SC observations were obtained from STORET (USEPA 2016b), state natural resource agencies, the U.S. Geological Survey (USGS) National Water Information System (NWIS) (USGS 2016), and data used in Olson and Hawkins (2012) (Table S1). Data include observations made between 1 January 2001 and 31 December 2015, coincident with Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data (https://modis.gsfc.nasa.gov/data/). Each observation was related to the nearest stream segment in the NHD+. Data were limited to one observation per stream segment per month. SC observations with ambiguous locations and repeat measurements along a stream segment in the same month were discarded. Using estimates of anthropogenic stress derived from the StreamCat database (Hill et al. 2016), segments were selected with minimal amounts of human activity (Stoddard et al. 2006) using criteria developed for each Level II Ecoregion (Omernik and Griffith 2014). Segments were considered potentially minimally stressed where watersheds had 0-0.5% impervious surface, 0-5% urban, 0-10% agriculture, and population densities from 0.8-30 people/km2 (Table S3). Watersheds with observations with large residuals in initial models were identified and inspected for evidence of other human activities not represented in StreamCat (e.g., mining, logging, grazing, or oil/gas extraction). Observations were removed from disturbed watersheds and from watersheds with a tidal influence or unusual geologic conditions such as hot springs. About 5% of SC observations in each National Rivers and Streams Assessment (NRSA) region were then randomly selected as independent validation data. The remaining observations became the large training data set for model calibration. This dataset is associated with the following publication: Olson, J., and S. Cormier. Modeling spatial and temporal variation in natural background specific conductivity. ENVIRONMENTAL SCIENCE & TECHNOLOGY. American Chemical Society, Washington, DC, USA, 53(8): 4316-4325, (2019).
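A minimal sketch, not the authors' pipeline, of how such a training set could be used to calibrate a random forest with a held-out validation split; the file and column names are placeholders, and the published model's region-stratified ~5% split and tuning are not reproduced:

```r
# Sketch: hold out ~5% of specific-conductivity observations for validation
# and calibrate a random forest regression on the remainder.
library(randomForest)

sc <- read.csv("sc_observations.csv")  # hypothetical: response + predictor columns only

set.seed(1)
val_idx <- sample(nrow(sc), size = round(0.05 * nrow(sc)))
valid   <- sc[val_idx, ]
train   <- sc[-val_idx, ]

rf   <- randomForest(specific_conductivity ~ ., data = train, ntree = 500)
pred <- predict(rf, newdata = valid)
cor(pred, valid$specific_conductivity)^2  # validation R-squared
```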