Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Electronic maps (E-maps) bring real-world spatial information conveniently to people. Although web map services can display maps on screens, a more important function is their ability to access geographical features. An E-map based on raster tiles is inferior to one based on vector tiles in terms of interactivity, because vector maps provide a convenient and effective way to access and manipulate web map features. However, the critical issue in rendering tiled vector maps is that geographical features rendered as map symbols via vector tiles may cause visual discontinuities, such as graphic conflicts and loss of data around tile borders, which are likely the main obstacles to exploring vector map tiles on the web. This paper proposes a tiled vector data model for geographical features in symbolized maps that considers the relationships among geographical features, symbol representations and map renderings. The model presents a method to tailor geographical features in terms of map symbols, with ‘addition’ (join) operations on two levels: geographical features and map features. Maps based on the proposed model thus resolve the visual discontinuity problem without weakening the interactivity of vector maps. The model is validated on two map data sets, and the results demonstrate that the rendered (symbolized) web maps present smooth visual continuity.
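The 'addition' (join) operation described above can be illustrated with a minimal sketch: fragments of a single geographical feature that arrive in different vector tiles are merged back by feature identifier before symbolization, so that map symbols are not clipped at tile borders. The function name and data layout below are illustrative only; the paper's actual model also handles symbol extents and vertex ordering, which this sketch omits.

```python
def join_feature_fragments(tiles):
    """Merge per-tile feature fragments back into whole features.

    tiles: iterable of tiles, each a list of (feature_id, vertices)
    fragments. Returns a dict mapping feature_id to the combined
    vertex list. A real implementation would also deduplicate the
    shared vertices on tile borders and preserve geometric order.
    """
    merged = {}
    for tile in tiles:
        for fid, verts in tile:
            merged.setdefault(fid, []).extend(verts)
    return merged


# Two adjacent tiles each hold half of the same (hypothetical) river.
tile_a = [("river-17", [(0, 0), (5, 9)])]
tile_b = [("river-17", [(5, 9), (9, 14)])]
features = join_feature_fragments([tile_a, tile_b])
```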
The full datasheet for this product is available here. The Sonoma County hydrologic data deliverables were produced in fall 2015 and winter 2016 from the countywide 2013 LiDAR data. The hydrologic products include a set of vector deliverables and a set of raster deliverables. Vector products include stream centerlines, confluence points, hydroenforcement burn locations, and watersheds. Raster products include flow direction, flow accumulation, and a hydroenforced bare earth digital elevation model (DEM). Hydroenforcement imparts the true elevations of culverts, pipelines, and other buried passages for water into a DEM, creating a model suitable for simulating the flow of surface water.
The extent of all deliverables is all of Sonoma County, the Lake Sonoma watershed in Mendocino County, and the Lake Mendocino area.
Appropriate Use: These hydrologic datasets are a mostly-automated first step in the eventual development of a 'localized' or 'LiDAR enhanced' National Hydrography Dataset (NHD). They are suitable for landscape-level planning and hydrologic modeling. These data products do not contain a guarantee of accuracy or precision and – without site-specific validation and/or refinement – should not be relied upon for engineering-level or very fine scale decision making.
Detailed Dataset Description: These hydrologic data products were produced by Quantum Spatial using mainly automated methods. Quantum Spatial included a short data report with the hydrologic datasets titled Sonoma County Hydroenforcement Technical Data Report; access that report here: https://sonomaopenspace.egnyte.com/dl/nHT2fGg8TP
The individual hydrologic data products are described briefly below.
Vector Hydro Products (contained in this file gdb):
Stream Centerlines – Centerlines of streams in Sonoma County. An area of flow concentration is considered a stream if its flow accumulation (upstream catchment area) exceeds 5 acres and a clearly defined channel exists. Where possible, stream centerline names (GNIS_Name) are consistent with the NHD.
Hydroenforcement Burn Locations – Line features that represent locations where hydroenforcement occurred.
Confluence Points – Points that represent stream intersections (confluences).
Watersheds (HUC2 through HUC16) – Watershed boundaries for nested hydrologic units from HUC 2 (region) to HUC 16 (eighth-level sub-watershed). Where possible, watershed names are consistent with the NHD. Watershed mapping conventions follow those for NHD's Watershed Boundary Dataset (http://nhd.usgs.gov/wbd.html).
Raster Hydrologic Products (1-meter resolution – available at http://sonomavegmap.org/data-downloads):
Hydroenforced Digital Elevation Model – The hydroenforced DEM is the LiDAR-derived (2013) bare earth DEM with culverts, pipelines and other buried passages for water 'burned in', so that the DEM correctly models surface water flow.
Flow Direction Rasters – Values in a flow direction raster represent one of eight directions (pixel values range from 1 to 8); No Data represents areas where there is no flow off of the pixel (sinks).
Flow Accumulation Rasters – Flow accumulation is a measure of upstream catchment area. Pixel values in a flow accumulation raster represent the cumulative number of upstream pixels (in other words, the count of pixels that contribute flow to a given pixel).
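The flow direction and flow accumulation products described above are related by a simple rule: each pixel's accumulation is the count of upstream pixels whose flow paths pass through it. A minimal sketch of that computation, assuming a hypothetical 1–8 direction encoding (the actual rasters may use a different convention, e.g. Esri's powers-of-two codes):

```python
import numpy as np

# Hypothetical 8-direction encoding (1..8, clockwise from east) as
# (row, col) offsets; 0 is treated as a sink / No Data.
D8_OFFSETS = {
    1: (0, 1),   2: (1, 1),   3: (1, 0),   4: (1, -1),
    5: (0, -1),  6: (-1, -1), 7: (-1, 0),  8: (-1, 1),
}

def flow_accumulation(fdir):
    """Count upstream contributing pixels for each cell of a D8 grid.

    fdir: 2-D int array of direction codes (0 = sink / No Data).
    Returns an array where each cell holds the number of pixels that
    drain into it (the cell itself excluded).
    """
    rows, cols = fdir.shape
    acc = np.zeros_like(fdir, dtype=np.int64)

    def downstream(r, c):
        off = D8_OFFSETS.get(int(fdir[r, c]))
        if off is None:
            return None
        nr, nc = r + off[0], c + off[1]
        if 0 <= nr < rows and 0 <= nc < cols:
            return nr, nc
        return None

    # Naive traversal: every pixel contributes one unit of flow to
    # each cell along its downstream path (cycle-guarded).
    for r in range(rows):
        for c in range(cols):
            cur = downstream(r, c)
            seen = set()
            while cur is not None and cur not in seen:
                seen.add(cur)
                acc[cur] += 1
                cur = downstream(*cur)
    return acc
```

On a 1x3 grid where the first two pixels flow east into a sink, the sink accumulates both upstream pixels.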
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Time cost analysis of generating tiles for the three models (in seconds).
This digital publication, GPR 2008-1, contains geophysical data and a digital elevation model that were produced from airborne geophysical surveys conducted in 2007 for part of the western Fortymile mining district, east-central Alaska. Aeromagnetic and electromagnetic data were acquired for 250 sq miles during the helicopter-based survey. Data provided in GPR 2008-1 include (1) a processed linedata ASCII database, (2) gridded files of magnetic data, a calculated vertical magnetic gradient (first vertical derivative), apparent resistivity data, and a digital elevation model, (3) vector files of data contours and flight lines, and (4) the contractor's descriptive project report. Data are described in more detail in the "GPR2008-1Readme.pdf" and "linedata/GPR2008-1-Linedata.txt" files included on the DVD.
PyPSA-Eur is an open model dataset of the European power system at the transmission network level that covers the full ENTSO-E area. It can be built using the code provided at https://github.com/PyPSA/PyPSA-eur.
It contains alternating current lines at and above 220 kV voltage level and all high voltage direct current lines, substations, an open database of conventional power plants, time series for electrical demand and variable renewable generator availability, and geographic potentials for the expansion of wind and solar power.
Not all data dependencies are shipped with the code repository, since git is not suited to handling large, frequently changing files. Instead, we provide separate data bundles to be downloaded and extracted as noted in the documentation.
This is the full data bundle to be used for rigorous research. It includes large bathymetry and natural protection area datasets.
While the code in PyPSA-Eur is released as free software under the MIT License, different licenses and terms of use apply to the various input data, which are summarised below:
corine/*
Access to data is based on a principle of full, open and free access as established by the Copernicus data and information policy Regulation (EU) No 1159/2013 of 12 July 2013. This regulation establishes registration and licensing conditions for GMES/Copernicus users and can be found here. Free, full and open access to this data set is made on the conditions that:
When distributing or communicating Copernicus dedicated data and Copernicus service information to the public, users shall inform the public of the source of that data and information.
Users shall make sure not to convey the impression to the public that the user's activities are officially endorsed by the Union.
Where that data or information has been adapted or modified, the user shall clearly state this.
The data remain the sole property of the European Union. Any information and data produced in the framework of the action shall be the sole property of the European Union. Any communication and publication by the beneficiary shall acknowledge that the data were produced “with funding by the European Union”.
eez/*
Marine Regions’ products are licensed under CC-BY-NC-SA. Please contact us for other uses of the Licensed Material beyond license terms. We kindly request our users not to make our products available for download elsewhere and to always refer to marineregions.org for the most up-to-date products and services.
natura/*
EEA standard re-use policy: unless otherwise indicated, re-use of content on the EEA website for commercial or non-commercial purposes is permitted free of charge, provided that the source is acknowledged (https://www.eea.europa.eu/legal/copyright). Copyright holder: Directorate-General for Environment (DG ENV).
naturalearth/*
All versions of Natural Earth raster + vector map data found on this website are in the public domain. You may use the maps in any manner, including modifying the content and design, electronic dissemination, and offset printing. The primary authors, Tom Patterson and Nathaniel Vaughn Kelso, and all other contributors renounce all financial claim to the maps and invite you to use them for personal, educational, and commercial purposes.
No permission is needed to use Natural Earth. Crediting the authors is unnecessary.
NUTS_2013_60M_SH/*
In addition to the general copyright and licence policy applicable to the whole Eurostat website, the following specific provisions apply to the datasets you are downloading. The download and usage of these data is subject to the acceptance of the following clauses:
The Commission agrees to grant the non-exclusive and not transferable right to use and process the Eurostat/GISCO geographical data downloaded from this page (the "data").
The permission to use the data is granted on condition that: the data will not be used for commercial purposes; the source will be acknowledged. A copyright notice, as specified below, will have to be visible on any printed or electronic publication using the data downloaded from this page.
gebco/GEBCO_2014_2D.nc
The GEBCO Grid is placed in the public domain and may be used free of charge. Use of the GEBCO Grid indicates that the user accepts the conditions of use and disclaimer information given below.
Users are free to:
Copy, publish, distribute and transmit The GEBCO Grid
Adapt The GEBCO Grid
Commercially exploit The GEBCO Grid, by, for example, combining it with other information, or by including it in their own product or application
Users must:
Acknowledge the source of The GEBCO Grid. A suitable form of attribution is given in the documentation that accompanies The GEBCO Grid.
Not use The GEBCO Grid in a way that suggests any official status or that GEBCO, or the IHO or IOC, endorses any particular application of The GEBCO Grid.
Not mislead others or misrepresent The GEBCO Grid or its source.
je-e-21.03.02.xls
Information on the websites of the Federal Authorities is accessible to the public. Downloading, copying or integrating content (texts, tables, graphics, maps, photos or any other data) does not entail any transfer of rights to the content.
Copyright and any other rights relating to content available on the websites of the Federal Authorities are the exclusive property of the Federal Authorities or of any other expressly mentioned owners.
Any reproduction requires the prior written consent of the copyright holder. The source of the content (statistical results) should always be given.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Building model (LOD0.4). Basic data from the multi-purpose area map (FMZK), the digital city map of Vienna, as vector data: https://www.wien.gv.at/urban development/urban survey/geodaten/fmzk/index.html
More information on the data status in the road area and in the interior of the road blocks can be found in the dataset "Multi-purpose map sheet information 1000 Vienna": https://www.data.gv.at/katalog/dataset/b2d17060-b2f4-4cd7-a2e5-64beccfeb4c1
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Time cost analysis of rendering tiles for the three models (in seconds).
https://spdx.org/licenses/CC0-1.0.html
A necessary component of understanding vector-borne disease risk is the accurate characterization of the distributions of their vectors. Species distribution models have been successfully applied to data-rich species but may produce inaccurate results for sparsely-documented vectors. In light of global change, vectors that are currently not well-documented could become increasingly important, requiring tools to predict their distributions. One way to achieve this could be to leverage data on related species to inform the distribution of a sparsely-documented vector based on the assumption that the environmental niches of related species are not independent. Relatedly, there is a natural dependence of the spatial distribution of a disease on the spatial dependence of its vector. Here, we propose to exploit these correlations by fitting a hierarchical model jointly to data on multiple vector species and their associated human diseases to improve distribution models of sparsely-documented species. To demonstrate this approach, we evaluated the ability of twelve models—which differed in their pooling of data from multiple vector species and inclusion of disease data—to improve distribution estimates of sparsely-documented vectors. We assessed our models on two simulated data sets, which allowed us to generalize our results and examine their mechanisms. We found that when the focal species is sparsely documented, incorporating data on related vector species reduces uncertainty and improves accuracy by reducing overfitting. When data on vector species are already incorporated, disease data only marginally improve model performance. However, when data on other vectors are not available, disease data can improve model accuracy and reduce overfitting and uncertainty. We then assessed the approach on empirical data on ticks and tick-borne diseases in Florida and found that incorporating data on other vector species improved model performance. 
This study illustrates the value of exploiting correlated data via joint modeling to improve distribution models of data-limited species.

Methods

Vector Data
Vector presence data were obtained from VectorMap and iNaturalist. Only iNaturalist data considered “research grade” were included, and we removed duplicates. To obtain absence data, we referenced VectorMap publications and assumed that if a species was not reported at a sampling location, but was included within the study, that the species was absent at that location. To avoid conflating low sampling effort with low vector presence, we based pseudo-absence locations on presence locations from chiggers, fleas, and mites from both databases and the Global Biodiversity Information Facility. We used a 1:1 ratio of presence to absence points, which produces the most accurate predicted distribution for regression techniques (Barbet-Massin et al., 2012). We artificially sparsely sampled one species within our empirical data (A. maculatum) by including 20% of available presence-absence data in our training set and withholding the rest for testing. The artificial sparse sampling allowed for a robust testing data set to evaluate model performances. To ensure spatial independence between our training and testing data, data were split using the blockCV package (Valavi et al., 2018) in R Version 2023.03.0+386 (R Core Team, 2023). To test the limitations of incorporating disease data, we selected a vector species that does not transmit any of the diseases within our model as our focal species. Empirical sample sizes are given in Supp Table 2.

Human Disease Data
We obtained annual incidence data on three human diseases (anaplasmosis, ehrlichiosis, Lyme disease) from 2011 to 2019 for each county from the Florida Department of Health. We translated this into human disease presence data in a given county in a given year based on whether the annual incidence there was greater than zero.

Covariate Data
We modeled vector distributions as a function of environmental covariates, which have been linked to tick presence: land cover (Randolph, 2000), 30-year average maximum temperature (Ogden et al., 2020), 30-year average precipitation (Ogden et al., 2020), regional Palmer hydrological drought index (Jones and Kitron, 2000), normalized differential vegetation index (Randolph, 2000), and distance to the nearest waterbody (Kahl and Alidousti, 1997). We obtained landcover data from the Global Land Cover Characteristics Database (Loveland et al., 2000), 30-year average climate data from WorldClim (Fick and Hijmans, 2017), Palmer Hydrological Drought Index from NOAA (Bushra and Rohli, 2017), and Normalized Difference Vegetation Index data from USGS Landsat (Vermote et al., 2016). Finally, we obtained waterbody data from the World Wildlife Foundation’s Global Lakes and Wetlands database (McGwire and Fisher, 2001). Pathogen circulation was based on Companion Animal Parasitic Council data, which reports the seroprevalence in canines receiving veterinary treatment. To avoid considering imported cases as indicative of endemicity, we considered a threshold of five annual cases to signal transmission. Finally, to account for under-reporting (Madison-Antenucci, et al., 2020), we modeled reporting probability as a function of health insurance coverage and population size. Insurance data were obtained from County Health Rankings (www.countyhealthrankings.org), and population data were obtained from WorldPop (www.worldpop.org).

Simulated Data
Our first simulation simulates data for three well-documented species (A. americanum, A. maculatum, D. variabilis) and a single sparsely-documented species (I. scapularis). “Well-documented” is defined as 500 samples and “sparsely-documented” is defined as 30 samples (Supp Figure 2). Our second simulation simulates all four species as well-documented (Supp Table 3).
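The 1:1 presence-to-absence balancing described in the Methods can be sketched as follows; the function and variable names are illustrative, not taken from the study's actual R code:

```python
import random

def balance_presence_absence(presence, candidate_absences, seed=0):
    """Sample pseudo-absences at a 1:1 ratio with presences, as
    recommended for regression-based SDMs (Barbet-Massin et al., 2012).

    presence: list of (x, y) presence coordinates.
    candidate_absences: list of (x, y) locations where related taxa
        (e.g. chiggers, fleas, mites) were recorded, used as a proxy
        for sampled-but-absent sites.
    Returns a shuffled list of ((x, y), label) pairs with label 1 for
    presence and 0 for absence. If fewer candidates than presences are
    available, as many absences as possible are drawn.
    """
    rng = random.Random(seed)
    n = min(len(presence), len(candidate_absences))
    absences = rng.sample(candidate_absences, n)
    data = [(p, 1) for p in presence] + [(a, 0) for a in absences]
    rng.shuffle(data)
    return data


# Three presences balanced against ten candidate pseudo-absence sites.
pres = [(0, 0), (1, 1), (2, 2)]
cand = [(i, 9) for i in range(10)]
balanced = balance_presence_absence(pres, cand)
```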
The lidar-derived 10m Vector Ruggedness Measure is the primary 10m ruggedness data product produced and distributed by the National Park Service, Great Smoky Mountains National Park.
https://www.archivemarketresearch.com/privacy-policy
The global vector database market is anticipated to reach a value of USD 20.05 billion by 2033, exhibiting a CAGR of 23.7% from 2025 to 2033. The rising adoption of artificial intelligence (AI) and machine learning (ML) technologies, particularly in the BFSI, retail and e-commerce, healthcare and life sciences, and IT and ITeS sectors, is a major driver of market growth. Furthermore, the increasing need for efficient data storage and retrieval in a variety of applications, such as natural language processing (NLP), computer vision, and recommendation systems, is further boosting market expansion. The Asia Pacific region is expected to hold a significant share of the vector database market, with key countries such as China, India, and Japan contributing to its growth. The region's burgeoning IT and ITeS sector, as well as its rapidly growing e-commerce market, are driving the demand for vector databases. Additionally, government initiatives in various countries aimed at promoting AI adoption are creating favorable conditions for market growth. The presence of major technology companies in the region, such as Alibaba Cloud, Pinecone Systems, and Zilliz, is also contributing to the market's expansion. This report provides an in-depth analysis of the Vector Database Market, a rapidly growing segment of the database industry valued at USD 1.5 billion in 2023 and projected to reach USD 10.2 billion by 2028, exhibiting a CAGR of 36.1% during the forecast period.

Recent developments include:
In June 2024, Salesforce, Inc. announced the general availability of the Data Cloud Vector Database, designed to help businesses unify and leverage the 90% of customer data trapped in unstructured formats, such as PDFs, emails, and transcripts. This innovation enables businesses to cost-effectively deliver transformative and integrated customer experiences across service, sales, marketing, AI, automation, and analytics.
In June 2024, Oracle launched HeatWave GenAI, featuring the first in-database large language model, scale-out vector processing, an automated in-database vector store, and contextual natural language conversations informed by unstructured content. These capabilities let customers apply generative AI to enterprise data without moving data to a separate vector database or needing AI expertise.
In April 2024, Vultr partnered with Qdrant, an advanced vector database technology provider, through their Cloud Alliance program to enhance cloud infrastructure and support the growing AI ecosystem. This collaboration combines Qdrant's innovative technology with Vultr's global platform, offering seamless scalability and performance for vector search workloads.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper develops a Bayesian variant of global vector autoregressive (B-GVAR) models to forecast an international set of macroeconomic and financial variables. We propose a set of hierarchical priors and compare the predictive performance of B-GVAR models in terms of point and density forecasts for one-quarter-ahead and four-quarter-ahead forecast horizons. We find that forecasts can be improved by employing a global framework and hierarchical priors which induce country-specific degrees of shrinkage on the coefficients of the GVAR model. Forecasts from various B-GVAR specifications tend to outperform forecasts from a naive univariate model, a global model without shrinkage on the parameters and country-specific vector autoregressions.
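As a rough illustration of shrinkage in this setting, the posterior mean under a zero-centred Gaussian prior on VAR coefficients coincides with a ridge estimator, so a VAR(1) forecast with ridge shrinkage gives the flavour of the approach (the paper's hierarchical priors and global cross-country linkages are considerably richer):

```python
import numpy as np

def var1_shrinkage_forecast(Y, lam=1.0):
    """One-step-ahead forecast from a VAR(1) fit with ridge shrinkage.

    A ridge penalty is a crude stand-in for hierarchical shrinkage
    priors: the posterior mean under an i.i.d. zero-mean Gaussian
    prior on the coefficients equals the ridge estimator.

    Y: (T, k) array of observations, oldest row first.
    lam: shrinkage strength (lam -> 0 approaches OLS).
    """
    X, Z = Y[:-1], Y[1:]                      # regressors, targets
    k = Y.shape[1]
    # Ridge-regularised least squares: A = (X'X + lam I)^-1 X'Z
    A = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Z)
    return Y[-1] @ A                          # forecast of Y at T+1


# A series that halves each step should be forecast to halve again
# when shrinkage is negligible.
Y = np.array([[1.0, 0.0], [0.5, 0.0], [0.25, 0.0]])
forecast = var1_shrinkage_forecast(Y, lam=1e-6)
```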
Inputs and output files for the regional model used for the air quality conformity analysis approved in March 2020. They were developed by the Chicago Metropolitan Agency for Planning and cover the modeled region, including portions of Wisconsin, Illinois and Indiana.
This packaged data collection contains all of the outputs from our primary model, including the following data layers:
Habitat Cores (vector polygons)
Least-cost Paths (vector lines)
Least-cost Corridors (raster)
Least-cost Corridors (vector polygon interpretation)
Modeling Extent (vector polygon)
Please refer to the embedded spatial metadata and the information in our full report for details on the development of these data layers. Packaged data are available in two formats:
Geodatabase (.gdb): A related set of file geodatabase rasters and feature classes, packaged in an ESRI file geodatabase.
ArcGIS Pro Map Package (.mpkx): The same data included in the geodatabase, presented as fully-symbolized layers in a map. Note that you must have ArcGIS Pro version 2.0 or greater to view.
See Cross-References for links to individual datasets, which can be downloaded in shapefile (.shp) or raster GeoTIFF (.tif) formats.
https://academictorrents.com/nolicensespecified
This zip should contain 4 files:
- README.txt (this file)
- doc2Dep20MWU57k_1000concat2000.tab
- doc2Dep20MWU57k_1000concat2000.txt
- doc2Dep20MWU57k_1000concat2000.mat

****doc2Dep20MWU57k_1000concat2000.txt****
This file contains the 54975 word-units with POS tags. The order of the words in this file corresponds to the order of the rows in doc2Dep20MWU57k_1000concat2000.tab.

****doc2Dep20MWU57k_1000concat2000.tab****
This tab-separated-value file contains the concatenated SVD matrices created as described in "Documents and Dependencies: an Exploration of Vector Space Models for Semantic Composition" (Fyshe 2013). The size of the matrix is 54975x2000. The first 1000 dimensions are Document dimensions, the second 1000 (1001-2000) are Dependency dimensions. The rows appear in the same order as the word-units in doc2Dep20MWU57k_1000concat2000.txt.

****doc2Dep20MWU57k_1000concat2000.mat****
For convenience, this is the data contained in doc2Dep20MWU57k_1000concat2000.tab.
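A hedged sketch of how the concatenated matrix might be consumed once loaded (e.g. via numpy.loadtxt on the .tab file): split it into its Document and Dependency halves and compare word vectors by cosine similarity. The helper names are illustrative only:

```python
import numpy as np

def split_doc_dep(matrix, n_doc=1000):
    """Split a concatenated (words x 2000) embedding matrix into its
    Document (columns 0..n_doc-1) and Dependency (remaining columns)
    halves, per the layout described in the README above."""
    return matrix[:, :n_doc], matrix[:, n_doc:]

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


# Toy stand-in for the real 54975x2000 matrix: 3 words, 4 dims,
# with the first 2 columns playing the role of Document dimensions.
toy = np.arange(12, dtype=float).reshape(3, 4)
doc, dep = split_doc_dep(toy, n_doc=2)
```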
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This record contains the YAML files, data files, and some of the plotting scripts for: "Global fits of vector-mediated s-channel simplified models for scalar and fermionic dark matter with GAMBIT". The paper can be found at https://arxiv.org/abs/2209.13266.
Samples have been created using GAMBIT and figures can be reproduced with pippi.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Model transformation languages are special-purpose languages, which are designed to define transformations as comfortably as possible, i.e., often in a declarative way. With the increasing use of transformations in various domains, the complexity and size of input models are also increasing. However, developers often lack suitable models for performance testing. We have therefore conducted experiments in which we predict the performance of model transformations based on characteristics of input models using machine learning approaches. This dataset contains our raw and processed input data, the scripts necessary to repeat our experiments, and the results we obtained.
Our input data consist of time measurements for six different transformations defined in the Atlas Transformation Language (ATL), as well as the collected characteristics of the real-world input models that were transformed. We provide the script that implements our experiments. We predict the execution time of ATL transformations using the machine learning approaches linear regression, random forests, and support vector regression with a radial basis function kernel. We also investigate different sets of characteristics of input models as input for the machine learning approaches. These are described in detail in the provided documentation.pdf. The results of the experiments are provided as raw data in individual csv files. Additionally, we calculated the mean absolute percentage error (in %) and the 95th percentile of the absolute percentage error (in %) for each experiment and provide these results. Furthermore, we provide our Eclipse plugin, which collects the characteristics for a set of given models, the Java projects used to measure the execution time of the transformations, and other supporting scripts, e.g. for the analysis of the results.
A short introduction with a quick start guide can be found in README.md and a detailed documentation in documentation.pdf.
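The two error summaries reported for each experiment (mean absolute percentage error and the 95th percentile of the absolute percentage error, both in %) can be computed as in this sketch; it is an illustration, not the authors' actual analysis script:

```python
import numpy as np

def ape_summary(y_true, y_pred):
    """Return (MAPE in %, 95th percentile of APE in %).

    y_true: measured execution times (must be nonzero).
    y_pred: predicted execution times from a regression model.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ape = np.abs((y_true - y_pred) / y_true) * 100.0
    return float(ape.mean()), float(np.percentile(ape, 95))


# Two predictions, each 10% off, give MAPE = 10% and P95 APE = 10%.
mape, p95 = ape_summary([100.0, 200.0], [110.0, 180.0])
```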
http://inspire.ec.europa.eu/metadata-codelist/ConditionsApplyingToAccessAndUse/noConditionsApply
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the statistical units of the Netherlands according to the INSPIRE data model for Statistical Units, version 3.0. It contains the following SU types: neighborhood, district, municipality, province, part of the country, nuts1, nuts2, nuts3, corop area and gdregion. You can filter them using the StatisticalTessellation attribute, since the SU vector data model has no SU-Type attribute.
We present a novel and general methodology for modeling time-varying vector autoregressive processes, which are widely used in many areas such as modeling of chemical processes, mobile communication channels and biomedical signals. In the literature, most work utilizes multivariate Gaussian models for the mentioned applications, mainly due to the lack of efficient analytical tools for modeling with non-Gaussian distributions. In this paper, we propose a particle filtering approach that can model non-Gaussian autoregressive processes having cross-correlations among them. Moreover, time-varying parameters of the process can be modeled in the most general case by using this sequential Bayesian estimation method. Simulation results justify the performance of the proposed technique, which can also model Gaussian processes as a special case.
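A minimal scalar sketch of the idea, assuming a random-walk model for a single time-varying AR(1) coefficient and Gaussian observation noise; the paper's method generalizes this to vector processes with cross-correlated, possibly non-Gaussian noise:

```python
import numpy as np

def pf_tvar1(y, n_particles=500, q=0.01, r=0.1, seed=0):
    """Bootstrap particle filter for a time-varying AR(1) coefficient.

    Assumed model (an illustration, not the paper's exact setup):
        state:       a_t = a_{t-1} + q * w_t      (random-walk drift)
        observation: y_t = a_t * y_{t-1} + r * v_t
    Returns the filtered (weighted-mean) coefficient estimates for
    t = 1..T-1.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)  # prior over a_0
    estimates = []
    for t in range(1, len(y)):
        # Propagate coefficient particles through the random walk.
        particles = particles + q * rng.standard_normal(n_particles)
        # Weight by the Gaussian observation likelihood.
        resid = y[t] - particles * y[t - 1]
        logw = -0.5 * (resid / r) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Multinomial resampling to fight weight degeneracy.
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)


# Track a fixed coefficient a = 0.8 from simulated observations.
sim = np.random.default_rng(1)
y = [1.0]
for _ in range(200):
    y.append(0.8 * y[-1] + 0.1 * sim.standard_normal())
est = pf_tvar1(np.array(y))
```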
Dobson & Auld AmNat code. All code (in R), including graphing. Provided as a Tinn-R file, but it should open in Notepad, WordPad, etc.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The NASA Scatterometer (NSCAT) Level 2 ocean wind vectors in 50 km wind vector cell (WVC) swaths contain daily data from ascending and descending passes. Wind vectors are accurate to within 2 m/s (vector speed) and 20 degrees (vector direction). Wind vectors are not considered valid in rain-contaminated regions; rain flags and precipitation information are not provided. Data are flagged where measurements are missing, ambiguous, or contaminated by land/sea ice. Wind vectors are processed using the NSCAT-2 geophysical model function. This is the most up-to-date version, representing the final phase of calibration, validation and science data processing, completed in November 1998 on behalf of the JPL NSCAT Project.