Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
The National Transit Map - Routes dataset was compiled on June 02, 2025 from the Bureau of Transportation Statistics (BTS) and is part of the U.S. Department of Transportation (USDOT)/Bureau of Transportation Statistics (BTS) National Transportation Atlas Database (NTAD). The National Transit Map (NTM) is a nationwide catalog of fixed-guideway and fixed-route transit service in America. It is compiled using General Transit Feed Specification (GTFS) Schedule data. The NTM Routes dataset shows transit routes, where a route is a group of trips displayed to riders as a single service. To display the route alignment and trips for each route, this dataset combines the following GTFS files: routes.txt, trips.txt, and shapes.txt. The GTFS Schedule documentation is available at https://gtfs.org/schedule/. To improve the spatial accuracy of the NTM Routes, BTS adjusts transit routes using context from the submitted GTFS source data and/or from other publicly available information about the transit service. A data dictionary, or other source of attribute information, is accessible at https://doi.org/10.21949/1529048
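As a rough illustration of how those three GTFS files fit together, the sketch below (Python with pandas; a hypothetical local feed directory "gtfs/", not the BTS production workflow) joins trips to their routes and orders each shape's points to recover one alignment per route.

import pandas as pd

# Hypothetical local GTFS Schedule feed; column names follow the GTFS spec.
routes = pd.read_csv("gtfs/routes.txt")
trips = pd.read_csv("gtfs/trips.txt")
shapes = pd.read_csv("gtfs/shapes.txt")

# Each trip references a route and a shape; each shape is an ordered
# sequence of points that traces the route alignment.
trip_shapes = (trips.merge(routes[["route_id"]], on="route_id")
                    [["route_id", "shape_id"]]
                    .drop_duplicates())
alignments = (trip_shapes.merge(shapes, on="shape_id")
                         .sort_values(["route_id", "shape_id", "shape_pt_sequence"]))
print(alignments[["route_id", "shape_id", "shape_pt_lat", "shape_pt_lon"]].head())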
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
http://inspire.ec.europa.eu/metadata-codelist/LimitationsOnPublicAccess/noLimitations
AdminVector is the vector data set of Belgian administrative and statistical units. It includes various classes. The first class contains the Belgian statistical sectors as defined by the FPS Economy. The second class contains municipal sections, for which there is no unanimous definition. The following five classes correspond to official administrative units as managed by the FPS Finance. Additional classes, such as border markers and the Belgian maritime zone, complement these. The boundaries of the first seven classes are consolidated together in order to keep the topological coherence of the objects. This data set can be freely downloaded.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The SWOT Level 2 River Single-Pass Vector Data Product (SWOT_L2_HR_RiverSP_D) provides hydrologic measurements for predefined river reaches and nodes, derived from high-resolution radar observations collected by the Ka-band Radar Interferometer (KaRIn) aboard the SWOT satellite. This product reports water surface elevation, slope, width, area, and discharge estimates for each reach, along with corresponding node-level details. All features are defined by the Prior River Database (PRD), which encodes river geometry and topology across global basins.
Each granule covers a single satellite pass over one or more continents and includes two ESRI shapefiles: one for river reaches (as polylines) and one for nodes (as points). Shapefile attributes include both SWOT-derived measurements and metadata from the PRD. Water surface elevations are referenced to the WGS84 ellipsoid and are corrected for geoid height and solid Earth, load, and pole tides. Measurements are aggregated from lower-level pixel detections (PIXC product) assigned to hydrologic features via the auxiliary PIXCVec product. The product also includes consensus and algorithm-specific river discharge estimates, both unconstrained and constrained by historical gauge data.
The RiverSP product provides reach-scale hydrologic variables suitable for analyzing inland water dynamics, estimating discharge, and monitoring river changes over time. It enables direct integration with the PRD-defined river network and supports applications in large-scale hydrologic modeling, basin monitoring, and water resource management. Data are distributed in shapefile format with metadata and attribute definitions aligned to GIS and hydrologic standards.
This collection is a sub-collection of its parent, https://podaac.jpl.nasa.gov/dataset/SWOT_L2_HR_RiverSP_D, and contains only river reaches.
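For users working with these granules, a minimal sketch (assuming the geopandas library and a hypothetical granule filename; actual file naming follows the product documentation) of opening the reach shapefile:

import geopandas as gpd

# Hypothetical granule filename for the reach (polyline) shapefile.
reaches = gpd.read_file("SWOT_L2_HR_RiverSP_Reach_example.shp")
print(reaches.crs)      # geographic coordinates on the WGS84 ellipsoid
print(reaches.columns)  # SWOT-derived measurements plus Prior River Database attributes
print(reaches.head())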
The Means of Transportation to Work dataset was compiled using information from December 31, 2023 and updated December 12, 2024 from the Bureau of Transportation Statistics (BTS) and is part of the U.S. Department of Transportation (USDOT)/Bureau of Transportation Statistics (BTS) National Transportation Atlas Database (NTAD). The Means of Transportation to Work table from the 2023 American Community Survey (ACS) 5-year estimates was joined to 2023 tract-level geographies for all 50 states, the District of Columbia, and Puerto Rico provided by the Census Bureau. A new file was created that combines the demographic variables from the former with the cartographic boundaries of the latter. The national-level census tract layer contains data on the number and percentage of commuters (workers 16 years and over) that used various transportation modes to get to work.
To help close the existing gap in behavioral datasets that model user interactions with individual and multiple devices in a smart office, for the purpose of later authenticating users continuously, we publish the following collection of datasets, generated from five users interacting with their personal computers and mobile devices over 60 days. A brief description of each dataset follows.
Dataset 1 (2.3 GB). Contains 92,975 feature vectors (8,096 per vector) that model the interactions of the five users with their personal computers. Each vector contains aggregated data about keyboard and mouse activity, as well as application usage statistics. More information on the meaning of the features can be found in the readme file. Originally this dataset had 24,065 features, but after filtering out constant features the number was reduced to 8,096. Many features were constant at 0 because every possible digraph (two-key combination) was considered when collecting the data; digraphs the users never typed on their computers were removed from the uploaded dataset.
Dataset 2 (8.9 MB). Contains 61,918 feature vectors (15 per vector) that model the interactions of the five users with their mobile devices. Each vector contains aggregated application usage statistics. More information on the meaning of the features can be found in the readme file.
Dataset 3 (28.9 MB). Contains 133,590 feature vectors (42 per vector) that model the interactions of the five users with their mobile devices. Each vector contains aggregated data from the gyroscope and accelerometer sensors. More information on the meaning of the features can be found in the readme file.
Dataset 4 (162.4 MB). Contains 145,465 feature vectors (241 per vector) that model the interactions of the five users with both personal computers and mobile devices. Each vector aggregates the most relevant features of both devices. More information on the meaning of the features can be found in the readme file.
Dataset 5 (878.7 KB). Composed of 7 datasets. Each contains an aggregation of feature vectors generated from the active/inactive intervals of personal computers and mobile devices, using time windows ranging from 1 h to 24 h:
1h: 4074 vectors
2h: 2149 vectors
3h: 1470 vectors
4h: 1133 vectors
6h: 770 vectors
12h: 440 vectors
24h: 229 vectors
According to our latest research, the global Vector Embedding API market size reached USD 1.42 billion in 2024, reflecting robust adoption across diverse industries. The market is forecasted to expand at a Compound Annual Growth Rate (CAGR) of 27.8% from 2025 to 2033, with the market size expected to reach USD 13.81 billion by 2033. This remarkable growth is primarily driven by the surging demand for advanced artificial intelligence (AI) and machine learning (ML) solutions that rely on vector embeddings to enhance natural language understanding, recommendation systems, and real-time data analytics. As per our latest research, the integration of these APIs into enterprise workflows is accelerating digital transformation and unlocking new business value across sectors.
The primary growth factor for the Vector Embedding API market is the exponential rise in unstructured data generation and the need to derive actionable insights from it. Organizations across industries are increasingly leveraging AI-powered solutions to process, analyze, and interpret vast volumes of text, images, and videos. Vector embedding APIs play a pivotal role in converting this unstructured data into high-dimensional vectors, enabling more sophisticated semantic understanding and search capabilities. The proliferation of digital channels, IoT devices, and social media platforms has further fueled the demand for scalable and efficient vector-based data processing, empowering businesses to enhance customer experiences and optimize operational efficiency. This trend is expected to continue as enterprises seek to gain a competitive edge through intelligent automation and data-driven decision-making.
Another significant driver is the rapid evolution of machine learning and deep learning frameworks, which has led to the development of more powerful and versatile vector embedding algorithms. The adoption of these advanced algorithms in APIs has democratized access to cutting-edge AI capabilities, allowing organizations of all sizes to integrate semantic search, personalized recommendations, and anomaly detection into their applications. The growing ecosystem of open-source tools and cloud-based ML platforms has made it easier for developers to implement vector embeddings without deep expertise in data science. Additionally, the increasing availability of pre-trained models and APIs from leading technology vendors is reducing time-to-market and lowering the barriers to entry for AI adoption. This democratization is expected to drive widespread implementation of vector embedding APIs across both established enterprises and startups.
Furthermore, the rise of industry-specific use cases and regulatory compliance requirements are propelling the adoption of vector embedding APIs. In sectors such as BFSI, healthcare, and e-commerce, these APIs are being utilized to enhance fraud detection, improve patient outcomes, and deliver hyper-personalized shopping experiences. Regulatory mandates around data privacy and security are prompting organizations to seek solutions that can efficiently process sensitive information while maintaining compliance. Vector embedding APIs offer the scalability, flexibility, and security features needed to meet these demands, making them an integral part of modern AI-driven business strategies. As industries continue to embrace digital transformation and AI-driven innovation, the market for vector embedding APIs is poised for sustained growth.
Text Embedding Models are becoming increasingly integral to the development and deployment of vector embedding APIs. These models are designed to convert text into numerical vectors, capturing semantic meaning and context that can be utilized in various AI applications. By leveraging text embedding models, organizations can enhance the performance of natural language processing tasks, such as sentiment analysis and entity recognition. The ability of these models to understand and represent complex linguistic patterns is driving their adoption across industries, enabling more accurate and efficient processing of unstructured text data. As the demand for sophisticated AI solutions grows, the role of text embedding models in powering advanced vector embedding APIs is expected to expand, offering new opportunities for innovation and application.
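To make the mechanism concrete, here is a minimal sketch (assuming the open-source sentence-transformers library and its all-MiniLM-L6-v2 model; any embedding API exposes an equivalent encode-style call) of embedding text and ranking candidate documents by semantic similarity rather than keyword overlap.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
query = model.encode("refund for a damaged order", convert_to_tensor=True)
docs = model.encode(
    ["return policy for broken items", "store opening hours"],
    convert_to_tensor=True,
)
scores = util.cos_sim(query, docs)  # higher cosine similarity = closer in meaning
print(scores)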
Regionally,
According to our latest research, the global vector search platform market size reached USD 1.94 billion in 2024, demonstrating robust momentum driven by the proliferation of unstructured data and the increasing adoption of AI-powered search solutions. The market is forecasted to grow at a CAGR of 25.7% from 2025 to 2033, reaching a projected value of USD 14.77 billion by 2033. The key growth factor underpinning this surge is the widespread integration of vector search technologies across industries seeking enhanced data retrieval, semantic search, and recommendation capabilities in the era of big data and artificial intelligence.
The primary growth driver for the vector search platform market is the exponential increase in unstructured data generated by enterprises, consumers, and IoT devices. Organizations are increasingly challenged to extract actionable insights from vast datasets comprising text, images, audio, and video. Traditional keyword-based search methods are proving inadequate for such complex data, prompting a shift towards vector search platforms that leverage machine learning and deep learning models to understand contextual meaning and semantic relationships. This technological evolution enables businesses to deliver more relevant search results, improve customer experiences, and support advanced applications such as recommendation engines, fraud detection, and personalized content delivery.
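The core retrieval step these platforms optimize can be sketched in a few lines (numpy only, brute force; production systems add approximate-nearest-neighbour indexing on top of the same idea):

import numpy as np

def top_k(query_vec, index_matrix, k=5):
    # Return indices of the k index rows most similar to the query (cosine similarity).
    q = query_vec / np.linalg.norm(query_vec)
    rows = index_matrix / np.linalg.norm(index_matrix, axis=1, keepdims=True)
    sims = rows @ q
    return np.argsort(-sims)[:k]

# Example: 10,000 random 384-dimensional embeddings and one query.
index = np.random.rand(10_000, 384)
print(top_k(np.random.rand(384), index, k=3))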
Another significant factor fueling the expansion of the vector search platform market is the surge in adoption of artificial intelligence and machine learning across verticals. As businesses strive to enhance operational efficiency and gain competitive advantages, AI-powered vector search solutions are becoming indispensable. These platforms facilitate real-time data analysis, semantic search, and natural language processing, which are critical for sectors like healthcare, BFSI, e-commerce, and media. The integration of vector search with cloud infrastructure further accelerates deployment, scalability, and accessibility, allowing organizations of all sizes to harness the power of advanced search capabilities without substantial upfront investment in hardware or specialized expertise.
The market is also benefiting from the growing need for highly personalized user experiences and intelligent automation. In retail and e-commerce, for instance, vector search platforms enable precise product recommendations and improved customer journey mapping by understanding user intent beyond simple keywords. In healthcare, these platforms assist in medical research, diagnostics, and patient data management by enabling semantic search across vast repositories of clinical information. The increasing focus on digital transformation and data-driven decision-making across industries is expected to sustain high demand for vector search platforms throughout the forecast period.
Regionally, North America remains the largest market for vector search platforms, owing to its advanced digital infrastructure, early adoption of AI technologies, and the presence of leading technology providers. However, Asia Pacific is witnessing the fastest growth, propelled by rapid digitalization, expanding internet user base, and increasing investments in AI and cloud computing. Europe also represents a significant share, driven by stringent data regulations and a strong focus on research and innovation. The Middle East & Africa and Latin America are emerging markets, gradually embracing vector search solutions as part of broader digital transformation initiatives. Overall, the global vector search platform market is poised for substantial growth, with regional dynamics shaped by varying levels of technological maturity and industry adoption.
The vector search platform market is segmented by component into software and services. The software segment dominates the market, accounting for the largest share due to the critical role of adva
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Synthetic data for assessing and comparing local post-hoc explanation of detected process shift
DOI: 10.5281/zenodo.15000635
The synthetic dataset contains data used in the experiment described in the article submitted to the Computers in Industry journal, entitled "Assessing and Comparing Local Post-hoc Explanation for Shift Detection in Process Monitoring".
The citation will be updated as soon as the article is accepted.
Individual data.mat files are stored in a subfolder structure that clearly assigns each file to one of the tested cases.
For example, data for experiments with normally distributed data, a known number of shifted variables, and 5 variables are stored in the path normal\known_number\5_vars\rho0.1.
The meaning of particular folders is explained here:
normal - all variables are normally distributed
not-normal - copula based multivariate distribution based on normal and gamma marginal distributions and defined correlation
known_number - known number of shifted variables (the methods use this information, which is not available in the real world)
unknown_number - unknown number of shifted variables, realistic case
2_vars - data with 2 variables (n=2)
...
10_vars - data with 10 variables (n=10)
rho0.1 - correlation among all variables is 0.1
...
rho0.9 - correlation among all variables is 0.9
Each data.mat file contains the following variables:
LIME_res (nval x n) - results of LIME explanation
MYT_res (nval x n) - results of MYT explanation
NN_res (nval x n) - results of ANN explanation
X (p x 11000) - unshifted data
S (n x n) - sigma (covariance) matrix for the unshifted data
mu (1 x n) - mean parameter for the unshifted data
n (1 x 1) - number of variables (dimensionality)
trn_set (n x ntrn x 2) - training set for the ANN explainer; trn_set(:,:,1) are values of variables from the shifted process, trn_set(:,:,2) are labels denoting which variables are shifted; trn_set(i,j,2) is 1 if the ith variable of the jth sample trn_set(:,j,1) is shifted
val_set (n x 95 x 2) - validation set used for testing and for generating LIME_res, MYT_res and NN_res
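A minimal sketch (assuming SciPy) of reading one data.mat file from the folder structure above and inspecting the variables listed:

from scipy.io import loadmat

d = loadmat("normal/known_number/5_vars/rho0.1/data.mat")
X = d["X"]                # unshifted data
S = d["S"]                # covariance (sigma) matrix of the unshifted data
lime_res = d["LIME_res"]  # LIME explanation results (nval x n)
print(X.shape, S.shape, lime_res.shape)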
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset names, number of data samples, data dimension and number of classes of the 6 benchmark datasets.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Word vectorization is an emerging text-as-data method that shows great promise for automating the analysis of semantics – here, the cultural meanings of words – in large volumes of text. Yet successes with this method have largely been confined to massive corpora where the meanings of words are presumed to be fixed. In political science applications, however, many corpora are comparatively small and many interesting questions hinge on the recognition that meaning changes over time. Together, these two facts raise vexing methodological challenges. Can word vectors trace the changing cultural meanings of words in typical small corpora use cases? I test four time-sensitive implementations of word vectors (word2vec) against a gold standard developed from a modest dataset of 161 years of newspaper coverage. I find that one implementation method clearly outperforms the others in matching human assessments of how public dialogues around equality in America have changed over time. In addition, I suggest best practices for using word2vec to study small corpora for time series questions, including bootstrap resampling of documents and pre-training of vectors. I close by showing that word2vec allows granular analysis of the changing meaning of words, an advance over other common text-as-data methods for semantic research questions.
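As a rough illustration of the bootstrap-resampling idea described above (not the article's exact pipeline; it assumes gensim and a list of tokenized documents for one time window), one can train an ensemble of word2vec models on resampled corpora and compare the resulting vectors:

import random
from gensim.models import Word2Vec

def bootstrap_word2vec(docs, n_boot=20, seed=0):
    # Train one Word2Vec model per bootstrap resample (with replacement) of the documents.
    rng = random.Random(seed)
    models = []
    for b in range(n_boot):
        sample = [rng.choice(docs) for _ in range(len(docs))]
        models.append(Word2Vec(sentences=sample, vector_size=100,
                               window=5, min_count=5, workers=4, seed=b))
    return models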
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data set and R script are used in the study entitled "Diversity of vector-dispersed microbes peaks at a landscape-defined intermediate rate of dispersal".
The Core Based Statistical Areas dataset was updated on September 22, 2025 from the U.S. Department of Commerce, U.S. Census Bureau, Geography Division and is part of the U.S. Department of Transportation (USDOT)/Bureau of Transportation Statistics (BTS) National Transportation Atlas Database (NTAD). This resource is a member of a series. The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) System (MTS). The MTS represents a seamless national file with no overlaps or gaps between parts; however, each TIGER/Line shapefile is designed to stand alone as an independent data set, or they can be combined to cover the entire nation. Metropolitan and Micropolitan Statistical Areas are together termed Core Based Statistical Areas (CBSAs). They are defined by the Office of Management and Budget (OMB) and consist of the county or counties or equivalent entities associated with at least one urban core of at least 10,000 population, plus adjacent counties having a high degree of social and economic integration with the core as measured through commuting ties with the counties containing the core. Categories of CBSAs are: Metropolitan Statistical Areas, based on urban areas of 50,000 or more population; and Micropolitan Statistical Areas, based on urban areas of at least 10,000 population but less than 50,000 population. The CBSA boundaries are those defined by OMB based on the 2020 Census and published in 2023. A data dictionary, or other source of attribute information, is accessible at https://doi.org/10.21949/1529014
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
READ ME
Welcome to the Universal Binary Principle (UBP) Dictionary System - Version 2
Author: Euan Craig, New Zealand 2025
Embark on a revolutionary journey with Version 2 of the UBP Dictionary System, a cutting-edge Python notebook that redefines how words are stored, analyzed, and visualized! Built for Kaggle, this system encodes words as multidimensional hexagonal structures in custom .hexubp files, leveraging sophisticated mathematics to integrate binary toggles, resonance frequencies, spatial coordinates, and more, all rooted in the Universal Binary Principle (UBP). This is not just a dictionary—it’s a paradigm shift in linguistic representation.
What is the UBP Dictionary System? The UBP Dictionary System transforms words into rich, vectorized representations stored in custom .hexubp files—a JSON-based format designed to encapsulate a word’s multidimensional UBP properties. Each .hexubp file represents a word as a hexagonal structure with 12 vertices, encoding: * Binary Toggles: 6-bit patterns capturing word characteristics. * Resonance Frequencies: Derived from the Schumann resonance (7.83 Hz) and UBP Pi (~2.427). * Spatial Vectors: 6D coordinates positioning words in a conceptual “Bitfield.” * Cultural and Harmonic Data: Contextual weights, waveforms, and harmonic properties.
These .hexubp files are generated, managed, and visualized through an interactive Tkinter-based interface, making the system a powerful tool for exploring language through a mathematical lens.
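Purely for illustration, a record along these lines might be serialized as follows (field names here are guesses, not the actual .hexubp schema):

import json

word_record = {
    "word": "example",
    "vertices": [
        {
            "toggle": "101100",                        # hypothetical 6-bit pattern
            "frequency_hz": 7.83,                      # near the Schumann resonance
            "coords": [0.1, 0.4, 0.2, 0.0, 0.5, 1.0],  # x, y, z, time, phase, quantum state
        },
        # ... eleven more vertices to complete the hexagonal structure
    ],
}
with open("example.hexubp", "w") as f:
    json.dump(word_record, f, indent=2)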
Unique Mathematical Foundation The UBP Dictionary System is distinguished by its deep reliance on mathematics to model language: * UBP Pi (~2.427): A custom constant derived from hexagonal geometry and resonance alignment (calculated as 6/2 * cos(2π * 7.83 * 0.318309886)), serving as the system’s foundational reference. * Resonance Frequencies: Frequencies are computed using word-specific hashes modulated by UBP Pi, with validation against the Schumann resonance (7.83 Hz ± 0.078 Hz), grounding the system in physical phenomena. * 6D Spatial Vectors: Words are positioned in a 6D Bitfield (x, y, z, time, phase, quantum state) based on toggle sums and frequency offsets, enabling spatial analysis of linguistic relationships. * GLR Validation: A non-corrective validation mechanism flags outliers in binary, frequency, and spatial data, ensuring mathematical integrity without compromising creativity.
This mathematical rigor sets the system apart from traditional dictionaries, offering a framework where words are not just strings but dynamic entities with quantifiable properties. It’s a fusion of linguistics, physics, and computational theory, inviting users to rethink language as a multidimensional phenomenon.
Comparison with Other Data Storage Mechanisms The .hexubp format is uniquely tailored for UBP’s multidimensional model. Here’s how it compares to other storage mechanisms, with metrics to highlight its strengths: CSV/JSON (Traditional Dictionaries): * Structure: Flat key-value pairs (e.g., word:definition). * Storage: ~100 bytes per word for simple text (e.g., “and”:“conjunction”). * Query Speed: O(1) for lookups, but no support for vector operations. * Limitations: Lacks multidimensional data (e.g., spatial vectors, frequencies). * .hexubp Advantage: Stores 12 vertices with vectors (~1-2 KB per word), enabling complex analyses like spatial clustering or frequency drift detection.
Relational Databases (SQL): * Structure: Tabular, with columns for word, definition, etc. * Storage: ~200-500 bytes per word, plus index overhead. * Query Speed: O(log n) for indexed queries, slower for vector computations. * Limitations: Rigid schema, inefficient for 6D vectors or dynamic vertices. * .hexubp Advantage: Lightweight, file-based (~1-2 KB per word), with JSON flexibility for UBP’s hexagonal model, no database server required.
Vector Databases (e.g., Word2Vec): * Structure: Fixed-dimension vectors (e.g., 300D for semantic embeddings). * Storage: ~2.4 KB per word (300 floats at 8 bytes each). * Query Speed: O(n) for similarity searches, optimized with indexing. * Limitations: Generic embeddings lack UBP-specific dimensions (e.g., resonance, toggles). * .hexubp Advantage: Smaller footprint (~1-2 KB), with domain-specific dimensions tailored to UBP’s theoretical framework.
Graph Databases: * Structure: Nodes and edges for word relationships. * Storage: ~500 bytes per word, plus edge overhead. * Query Speed: O(k) for traversals, where k is edge count. * Limitations: Overkill for dictionary tasks, complex setup. * .hexubp Advantage: Self-contained hexagonal structure per word, simpler for UBP’s needs, with comparable storage (~1-2 KB).
The .hexubp format balances storage efficiency, flexibility, and UBP-s...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data dictionary for the dataset used to conduct the quantitative analysis comparing BIVA and anthropometry measurements
Monthly and annual U and V vectors were summarized for 14 unique depth levels from daily means using the HYCOM and NCODA Global 1/12-degree Reanalysis. The U vector (m/s) is to the East and the V vector (m/s) is to the North. Current magnitude (m/s) was calculated using the daily U and V vectors. Descriptive statistics of mean, variance, standard deviation, minimum, and maximum were calculated for each month from the twenty years of data using the daily means (1992-2012). Mean, variance, and standard deviation were calculated for the annual summary period (1992-2012). The mean direction in degrees (with 0 = North) was calculated from the summarized U and V vector means and represents the direction that the current is moving toward. The 1/12-degree global HYCOM+NCODA Ocean Reanalysis was funded by the U.S. Navy and the Modeling and Simulation Coordination Office. Computer time was made available by the DoD High Performance Computing Modernization Program. The output is publicly available at http://hycom.org.
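For reference, the magnitude and direction follow directly from the eastward (U) and northward (V) components; a minimal sketch (numpy only, not the original processing code):

import numpy as np

def current_magnitude(u, v):
    # Current speed in m/s from eastward (u) and northward (v) components.
    return np.hypot(u, v)

def current_direction(u, v):
    # Direction the current is moving toward, in degrees clockwise from North (0 = North).
    return np.degrees(np.arctan2(u, v)) % 360.0

print(current_magnitude(0.3, 0.4), current_direction(0.3, 0.4))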
Attribution-NoDerivs 4.0 (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/
License information was derived automatically
This work is an attempt to describe various branches of mathematics and the analogies between them. Namely: 1) Symbolic Analogic 2) Lateral Algebraic Expressions 3) Calculus of Infinity Tensors Energy Number Synthesis 4) Perturbations in Waves of Calculus Structures (Group Theory of Calculus) 5) Algorithmic Formation of Symbols (Encoding Algorithms). The analogies between each of the branches (and most certainly other branches) of mathematics form "logic vectors." Forming vector statements of logical analogies and semantic connections between the differentiated branches of mathematics is useful, because it gives us a linguistic notation from which we can derive other insights. These combined insights from the logical vector space connections yield a combination of Numeric Energy and the logic space. Thus, I have derived and notated many of the most useful tangent ideas from which even more correlations and connections can be drawn. Using AI, these branches can be used to form even more connections through training of language engines on the derived models. Through the vector logic space and the discovery of a new sheaf (Limbertwig), vast combinations of novel mathematical statements are derived. This paves the way for an AGI that is not rigid, but flexible, like a Limbertwig. The Limbertwig sheaf is open, meaning it can receive other mathematical logic vectors with different designated meanings (of infinite or finite indicated elements). Furthermore, the articulation of these syntax forms evolves language away from imperative statements into a mathematically emotive space. Indeed, shown within, we see how the supramanifold of logic is shared with the supramanifold of space-time mathematically.
Developing clean mathematical spaces can help meditation, thought process, acknowledgment of ideas spoken into that cognitive-spacetime and, in turn, methods by which paradoxes can be resolved linguistically. This toolkit should be useful to all in the sciences as well as those bridging the humanities to mathematics. Using our memories as a toolkit to aggregate these ideas breaks down boundaries between them in a new, exciting way. Merging philosophy and Quantum Mechanics together through the lens of symbolic analogies gives the tools to unravel this mystery of all mysteries. Mathematics thus exists as a bridge, albeit a complex one, between the two disciplines, giving life to a composite art of problem-solving.
Furthermore, mathematics yields millions of other applications that are potentially limited only by our imagination. From massive data sets used for predictive analytics to emerging fields in medicine, mathematics is an energy and force at the center of possibilities. The power of mathematics to help manage life exists in its ability to shape and model the world in which we live and interact with one another.
In conclusion, mathematics is a powerful tool that creates bridges and connections between many disciplines and serves as a powerful form of analytical data consumption. It provides language-rich bridges from which to assemble vast fields of theoretical investigations and create groundbreaking innovations. As we approach new horizons in the technology timeline, mathematics will continue to be a powerful driver of creativity and progress.
Limbertwig
First Table of Contents:
Introduction P. 3
Generalized Double Forward Derivatives P. 5
Generalized Reverse Double Integration P. 7
Real Analysis of Phenomenological Velocity P. 9
Symbolic Analogic P. 20
Anterolateral Algebra 1 P. 22
AnteroLateral Algebra 2 P. 24
Energy Numbers: Numeric Energy Quanta from Apriori Symbolic Analogical Quasi-Quanta P. 30
Semantics In Tensor Calculus: Applications to Set Theory P. 46
Meta-Spatial Calculus (Theory of Group Integration) P. 58
A New Function of Homological Topology P. 69
Quantum Algebraic Homologies P. 74
Gravity Waves and Angular Momentum P. 77
Energy Numbers on the Infinity Tensor P. 80
Star, Circ and Tor Relations P. 83
Algorithm Input Code to Symbolic Representation P. 95
Logic Vectors: A Geometry of Logic P. 99
Oneness to Logic Vectors P. 117
Escapades in Lateral Functors Non-Linear Equations P. 122
Annihilation Logic Mappings P. 136
Infinity Tensors the Riemann Hypothesis P. 142
Green's Functions of Tensor Calculus for Generalized Strange Attractors Satisfying Riemann's Hypothesis P. 154
Universal Translator P. 165
Pro-Étale P. 167
Fractal Morphisms, Topological Counting, N-Waves, Congruent Integral Methods, Etc. P. 177
Monte Carlo methods P. 221
Handy Functor Cheat Sheet P. 230
Logic Vector Version 8 P. 235
Universal Laws; Actual k-theory P. 238
Real Numbers A Projective Scheme P. 246
Energy Numbers 2 P. 250
Patternizing Psilocybin in Logic Space P. 259
MescalinE in Logic Space P. 282
Deprogramming Zero V2 P. 290
Novel Branching on Integrals P. 29...
https://spdx.org/licenses/CC0-1.0.html
Effective management of vector-borne plant pathogens often relies on disease-resistant cultivars. While heterogeneity in host resistance and in pathogen population density at the host population level play important and well-recognized roles in epidemiology, the effects of resistance traits on pathogen distribution at the individual host level, and the epidemiological consequences in turn, are poorly understood. Transgenic disease-resistant plants that produce bacterial Diffusible Signaling Factor (DSF) could provide resistance to the vector-borne bacterium Xylella fastidiosa by impeding plant colonization and reducing virulence. However, the effects of constitutive in planta production of DSF on insect vector transmission has remained unresolved. We investigated the transmission biology of X. fastidiosa in DSF and wild-type (WT) grapevines with the efficient vector Graphocephala atropunctata. We also developed a novel Bayesian hierarchical model to improve statistical inference on the multiple components of the vector transmission process. We found that insect vectors had a greater colonization efficiency on DSF plants—meaning they acquired a greater population size of X. fastidiosa—than on WT plants. However, DSF plants also maintained much lower X. fastidiosa populations. These apparently conflicting processes resulted in a lower but highly variable probability of transmission from DSF plants compared to WT plants. Our Bayesian model improved statistical inference compared to widely used frequentist statistics in part by estimating and correcting for imperfect detection of X. fastidiosa in plant and insect tissues. Overall, our results support current models on the roles that DSF plays in vector transmission of X. fastidiosa. In line with our hypothesis, DSF production reduced mean X. fastidiosa population density but increased heterogeneity within host plants. While DSF-producing plants could potentially improve disease management, our results suggest that they could, under some conditions, facilitate X. fastidiosa spread.
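The detection-correction idea can be sketched with a simple, much-reduced hierarchical model; this is an assumption-laden toy using PyMC and made-up counts, not the authors' model:

import pymc as pm

n_plants = 30   # hypothetical number of recipient plants tested
k_positive = 9  # hypothetical number testing positive

with pm.Model():
    p_transmit = pm.Beta("p_transmit", alpha=1.0, beta=1.0)  # true transmission probability
    p_detect = pm.Beta("p_detect", alpha=8.0, beta=2.0)      # assumed prior on assay sensitivity
    # A plant tests positive only if it was infected AND the assay detects the bacterium.
    pm.Binomial("positives", n=n_plants, p=p_transmit * p_detect, observed=k_positive)
    idata = pm.sample(2000, tune=1000, chains=4, random_seed=1)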
This is a restricted dataset; the download is available to NIMA users only.
A mid-scale vector product containing the names of rivers, streams, lakes, ponds and reservoirs.
Users outside of the Spatial NI Portal should use Resource Locator 2.
The Port Statistical Areas dataset was updated on June 05, 2025 from the United States Army Corps of Engineers (USACE) and is part of the U.S. Department of Transportation (USDOT)/Bureau of Transportation Statistics (BTS) National Transportation Atlas Database (NTAD). USACE works with port authorities from across the United States to develop the statistical port boundaries through an iterative and collaborative process. Port boundary information is prepared by USACE to increase transparency in public waterborne commerce statistics reporting, as well as to modernize how the data type is stored, analyzed, and reported. A Port Statistical Area (PSA) is a region with formally justified shared economic interests and collective reliance on infrastructure related to waterborne movements of commodities that is formally recognized by legislative enactments of state, county, or city governments. PSAs generally contain groups of county legislation for the sole purpose of statistical reporting. Through GIS mapping, legislative boundaries, and stakeholder collaboration, PSAs often serve as the primary unit for aggregating and reporting commerce statistics for broader geographical areas. Per Engineering Regulation 1130-2-520, the U.S. Army Corps of Engineers' Navigation Data Center is responsible for collecting, compiling, publishing, and disseminating waterborne commerce statistics. This task has subsequently been charged to the Waterborne Commerce Statistics Center to perform. Performance of this work is in accordance with the Rivers and Harbors Appropriation Act of 1922. Included in this work is the definition of a port area. A port area is defined in Engineering Pamphlet 1130-2-520 as: (1) port limits defined by legislative enactments of state, county, or city governments; (2) the corporate limits of a municipality. The USACE enterprise-wide port and port statistical area feature classes per EP 1130-2-520 are organized in SDSFIE 4.0.2 format. A data dictionary, or other source of attribute information, is accessible at https://doi.org/10.21949/2ngc-4984
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This study proposes an experimental method to trace the historical evolution of media discourse as a means to investigate the construction of collective meaning. Based on distributional semantics theory (Harris, 1954; Firth, 1957) and critical discourse theory (Wodak and Fairclough, 1997), it explores the value of merging two techniques widely employed to investigate language and meaning in two separate fields: neural word embeddings (computational linguistics) and the discourse-historical approach (DHA; Reisigl and Wodak, 2001) (applied linguistics). As a use case, we investigate the historical changes in the semantic space of public discourse of migration in the United Kingdom, and we use the Times Digital Archive (TDA) from 1900 to 2000 as dataset. For the computational part, we use the publicly available TDA word2vec models (Kenter et al., 2015; Martinez-Ortiz et al., 2016); these models have been trained according to sliding time windows with the specific intention to map conceptual change. We then use DHA to triangulate the results generated by the word vector models with social and historical data to identify plausible explanations for the changes in the public debate. By bringing the focus of the analysis to the level of discourse, with this method, we aim to go beyond mapping different senses expressed by single words and to add the currently missing sociohistorical and sociolinguistic depth to the computational results. The study rests on the foundation that social changes will be reflected in changes in public discourse (Couldry, 2008). Although correlation does not prove direct causation, we argue that historical events, language, and meaning should be considered as a mutually reinforcing cycle in which the language used to describe events shapes explicit meanings, which in turn trigger other events, which again will be reflected in the public discourse.
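In practice, the computational step amounts to querying each time-sliced model for a target word's nearest neighbours and comparing how they shift; a minimal sketch (assuming gensim KeyedVectors and hypothetical per-window filenames, not the released TDA model format):

from gensim.models import KeyedVectors

windows = ["1900_1920", "1940_1960", "1980_2000"]   # hypothetical time slices
for w in windows:
    kv = KeyedVectors.load(f"tda_word2vec_{w}.kv")  # hypothetical filenames
    neighbours = kv.most_similar("migration", topn=10)
    print(w, [word for word, _ in neighbours])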