CC0 1.0 Universal Public Domain Dedication
https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This service provides data implemented for the INSPIRE theme Transport Networks from the OKSTRA data model: a classification of roads based on their function within the road network.
Success.ai’s Firmographic Data API empowers organizations to make data-driven decisions with on-demand access to detailed insights on over 70 million companies worldwide. Covering key firmographic attributes like industry classifications, revenue size, and employee count, this API ensures your market analysis, strategic planning, and competitive benchmarking efforts are backed by continuously updated, AI-validated information.
Whether you’re exploring new markets, refining your product offerings, or optimizing partner relationships, Success.ai’s Firmographic Data API delivers the intelligence you need. Supported by our Best Price Guarantee, this solution helps you confidently navigate the global business landscape.
Why Choose Success.ai’s Firmographic Data API?
Detailed, Verified Firmographic Data
Extensive Global Coverage
Continuous Data Updates
Ethical and Compliant
Data Highlights:
Key Features of the Firmographic Data API:
Real-Time Company Enrichment
Advanced Filtering and Query Capabilities
Scalability and Flexibility
AI-Validated Accuracy and Reliability
Strategic Use Cases:
Market Analysis and Competitive Benchmarking
Strategic Partnering and M&A Efforts
Sales and Account-Based Marketing
Product Roadmapping and Portfolio Management
Why Choose Success.ai?
Best Price Guarantee
Seamless Integration
Data Accuracy with AI Validation
Customizable and Scalable Solutions
Additional APIs for Enhanced Functionality:
DomainIQ is a comprehensive global Domain Name dataset for organizations that want to build cyber security, data cleaning and email marketing applications. The dataset consists of the DNS records for over 267 million domains, updated daily, representing more than 90% of all public domains in the world.
The data is enriched with over thirty unique data points, including the mailbox provider for each domain and AI-based predictive analytics that identify elevated-risk domains from both a cyber security and an email sending reputation perspective.
DomainIQ from Datazag offers layered intelligence through a highly flexible API and as a dataset, available for both cloud and on-premises applications. Standard formats include CSV, JSON, Parquet, and DuckDB.
Custom options are available for any other file or database format. With daily updates and constant research from Datazag, organizations can develop their own market-leading cyber security, data cleaning and email marketing applications supported by comprehensive and accurate data. Dataset updates are available on a daily, weekly or monthly basis; API data is updated daily.
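As an illustration of working with a delivery locally, here is a minimal DuckDB sketch; the file name and the column names (domain, mailbox_provider, risk_score) are hypothetical stand-ins for the schema documented with the actual Datazag delivery.

```python
# Minimal sketch, assuming a Parquet delivery of the DomainIQ dataset.
# The file name and the column names (domain, mailbox_provider, risk_score)
# are hypothetical placeholders; the real schema is documented with the
# Datazag delivery.
import duckdb

con = duckdb.connect()
risky = con.sql("""
    SELECT domain, mailbox_provider, risk_score   -- hypothetical columns
    FROM read_parquet('domainiq_daily.parquet')   -- hypothetical file name
    WHERE risk_score >= 0.8                       -- hypothetical threshold
    ORDER BY risk_score DESC
    LIMIT 100
""").df()
print(risky.head())
```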
Roadway Administrative Classification (State Classifications) data consists of linear geometric features which specifically show State highways included in the State Primary and State Secondary systems throughout Maryland. Roadway Administrative Classification is primarily used for general planning and funding purposes by showcasing the State Primary vs. State Secondary systems. The Maryland Department of Transportation State Highway Administration (MDOT SHA) currently reports this data only on the inventory direction (generally North or East) side of the roadway. Roadway Administrative Classification is not a complete representation of all roadway geometry.

Roadway Administrative Classification data is developed and maintained by MDOT SHA, under the Office of Planning and Preliminary Engineering (OPPE) Data Services Division (DSD). It is used by various business units throughout MDOT, as well as many other Federal, State and local government agencies, and is key to understanding which State highways are included in the State Primary and State Secondary systems throughout Maryland.

Roadway Administrative Classification data is updated and published on an annual basis for the prior year. This data is for the year 2017.

For additional information, contact the MDOT SHA Geospatial Technologies Team: Email: GIS@mdot.state.md.us
For additional MDOT information, visit the Maryland Department of Transportation (MDOT): Website: https://www.mdot.maryland.gov/
For additional MDOT SHA information, visit the Maryland Department of Transportation State Highway Administration (MDOT SHA): Website: https://www.roads.maryland.gov/Home.aspx

MDOT SHA Geospatial Data Legal Disclaimer: The Maryland Department of Transportation State Highway Administration (MDOT SHA) makes no warranty, expressed or implied, as to the use or appropriateness of geospatial data, and there are no warranties of merchantability or fitness for a particular purpose or use. The information contained in geospatial data is from publicly available sources, but no representation is made as to the accuracy or completeness of geospatial data. MDOT SHA shall not be subject to liability for human error, error due to software conversion, defect, or failure of machines, or any material used in connection with the machines, including tapes, disks, CD-ROMs or DVD-ROMs, and energy. MDOT SHA shall not be liable for any lost profits, consequential damages, or claims against MDOT SHA by third parties.

This is an MD iMAP hosted service. Find more information at https://imap.maryland.gov.
Feature Service Link: https://mdgeodata.md.gov/imap/rest/services/Transportation/MD_RoadwayAdministrativeClassification/FeatureServer/0
The Census data API provides access to the most comprehensive set of data on current month and cumulative year-to-date imports using the North American Industry Classification System (NAICS). The NAICS endpoint in the Census data API also provides value, shipping weight, and method of transportation totals at the district level for all U.S. trading partners. The Census data API will help users research new markets for their products, establish pricing structures for potential export markets, and conduct economic planning. If you have any questions regarding U.S. international trade data, please call us at 1(800)549-0595 option #4 or email us at eid.international.trade.data@census.gov.
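For orientation, a minimal Python sketch of a query against this endpoint follows; the endpoint path follows the documented Census data API pattern, while the variable names and the NAICS filter are assumptions to verify against the endpoint's own variable listing.

```python
# Minimal sketch of a query against the Census data API's NAICS imports
# endpoint. The variable names (CTY_CODE, CTY_NAME, GEN_VAL_MO) and the
# NAICS filter are assumptions to verify against the endpoint's own
# variable listing.
import json
import urllib.parse
import urllib.request

BASE = "https://api.census.gov/data/timeseries/intltrade/imports/naics"
params = {
    "get": "CTY_CODE,CTY_NAME,GEN_VAL_MO",  # assumed variable names
    "time": "2023-01",
    "NAICS": "3361",  # assumed filter: one example industry code
}
url = BASE + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    rows = json.load(resp)  # JSON array; the first row is the header
header, data = rows[0], rows[1:]
for row in data[:5]:
    print(dict(zip(header, row)))
```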
Esri ArcGIS Online (AGOL) Hosted Feature Layer which provides access to the MDOT SHA Roadway Functional Classification data product. MDOT SHA Roadway Functional Classification data consists of linear geometric features which showcase the functional classification of roadways throughout the State of Maryland. Roadway Functional Classification is defined as the role each roadway plays in moving vehicles throughout a network of highways. MDOT SHA Roadway Functional Classification data is primarily used for general planning purposes, and for Federal Highway Administration (FHWA) Highway Performance Monitoring System (HPMS) annual submission & coordination. The Maryland Department of Transportation State Highway Administration (MDOT SHA) currently reports this data only on the inventory direction (generally North or East) side of the roadway. MDOT SHA Roadway Functional Classification data is not a complete representation of all roadway geometry.

The State of Maryland's roadway system is a vast network that connects places and people within and across county borders. Planners and engineers have developed elements of this network with particular travel objectives in mind. These objectives range from serving long-distance passenger and freight needs to serving neighborhood travel from residential developments to nearby shopping centers. The functional classification of roadways defines the role each element of the roadway network plays in serving these travel needs. Over the years, functional classification has come to assume additional significance beyond its purpose as a framework for identifying the particular role of a roadway in moving vehicles through a network of highways. Functional classification carries with it expectations about roadway design, including its speed, capacity and relationship to existing and future land use development. Federal legislation continues to use functional classification in determining eligibility for funding under the Federal-aid program. Transportation agencies describe roadway system performance, benchmarks and targets by functional classification. As agencies continue to move towards a more performance-based management approach, functional classification will be an increasingly important consideration in setting expectations and measuring outcomes for preservation, mobility and safety.

MDOT SHA Roadway Functional Classification data is developed as part of the Highway Performance Monitoring System (HPMS), which maintains and reports transportation-related information to the Federal Highway Administration (FHWA) on an annual basis. HPMS is maintained by MDOT SHA, under the Office of Planning & Preliminary Engineering (OPPE) Data Services Division (DSD). This data is used by various business units throughout MDOT, as well as many other Federal, State and local government agencies. Roadway Functional Classification data is key to understanding the role each roadway plays in moving vehicles throughout the State of Maryland's network of highways.

MDOT SHA Roadway Functional Classification data is owned & maintained by the MDOT SHA Office of Planning & Preliminary Engineering (OPPE). This data product is updated & published on an annual basis for the prior year. This data product is for the year 2023.
For more information related to the data, contact MDOT SHA OPPE Data Services Division (DSD): Email: DSD@mdot.maryland.gov
For more information, contact MDOT SHA OIT Enterprise Information Services: Email: GIS@mdot.maryland.gov
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
HSL OpenMaaS is an open-for-all ticket sales interface for acquiring HSL mobile tickets. The goal is to have all HSL’s mobile ticket products available via this API. The OpenMaaS API is being continuously developed with new technical features, ticket types and payment options.
As an OpenMaaS operator you can integrate with the HSL OpenMaaS API and create a platform through which you can make HSL mobile tickets available to your own customers. In the portal you can sign up and log in to manage your organization's API keys and payment details. The API provides endpoints for fetching available ticket types, passenger types and validity regions, as well as for purchasing a ticket. Two options are available for displaying the purchased tickets.
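As an orientation only, here is a minimal Python sketch of that integration flow; the base URL, endpoint paths (/ticket-types, /tickets), payload fields and bearer-token authentication are hypothetical placeholders, since the real ones are defined in the HSL OpenMaaS documentation and the operator portal.

```python
# Minimal sketch of the OpenMaaS integration flow, with hypothetical
# endpoint paths, base URL, payload fields and authentication; the real
# ones come from the HSL OpenMaaS documentation and the operator portal.
import json
import urllib.request

BASE = "https://openmaas.example.hsl.fi/api"  # placeholder base URL
API_KEY = "your-api-key"                      # issued via the portal

def call(path, payload=None):
    # POST JSON when a payload is given, otherwise GET.
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        BASE + path,
        data=data,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

ticket_types = call("/ticket-types")  # hypothetical endpoint
order = call("/tickets", {            # hypothetical purchase payload
    "ticketTypeId": ticket_types[0]["id"],
    "passengerType": "adult",
    "validityRegion": "AB",
})
print(order)
```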
Base Data: Name/Brand, Address, Geocoordinates, Opening Hours, Phone, ...
25+ Fuel Types: Super E5, Super 98, Diesel, AdBlue, LPG, CNG, ...
60+ Services and Characteristics: Car Wash, Shop, Restaurant, Toilet, ATM, Toll, ...
300+ Payment Options: Cash, Visa, MasterCard, Fuel Cards, ...
Xavvy is the leading source for gas station location and petrol price data worldwide, specializing in data quality and enrichment. We provide high-quality POI data for gas stations across all European countries, integrated with energy data, places data, automotive data, commodity data, market research data, oil and gas data, and brand data.
Our gas station location data is delivered country by country, with customizable information levels. We offer one-time or regular data delivery, push or pull services, and any data format to meet customer needs.
Our data answers critical questions such as the total number of stations per country or region, market share distribution, and optimal locations for new gas stations, charging stations, or hydrogen dispensers. This information provides a solid foundation for in-depth analyses, enabling various industries to gain valuable insights into the fuel market and its trends. Our data supports strategic decisions in business development, competitive approaches, and expansion.
Additionally, our data enhances the consistency and quality of existing datasets. Users can easily map data to check for accuracy and correct errors.
With over 200 sources, including governments, petroleum companies, fuel card providers, and crowdsourcing, Xavvy offers comprehensive information. Alongside base data like name/brand, address, geo-coordinates, and opening hours, we provide detailed insights into available fuel types, accessibility, special services, and payment options for each station.
High data quality is crucial for delivering an excellent customer experience, especially when displaying gas station information on maps or in applications. We continuously enhance our processing procedures to improve data quality.
Explore our other data offerings and gain valuable market insights on gas stations directly from the experts!
Open Government Licence - Canada 2.0
https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
The Reference Data as a Service (RDaaS) API provides a list of codesets, classifications, and concordances that are used within Statistics Canada. These resources are shared to help harmonize data, enabling better interdepartmental data integration and analysis. This dataset provides an updated version of the StatCan RDaaS API specification, originally part of the Government of Canada's GC API Store, which permanently closed on September 29th, 2023. The archived version of the original API specification can be accessed via the Wayback Machine. The specification has been updated to the OpenAPI 3.0 (Swagger 3) standard, enabling use of current tools and features for API exploration and integration. Key interactive features of the updated specification include:
* Try-It-Out Functionality: Allows a user to interact with API endpoints directly from the documentation in their browser, submitting test requests and viewing live responses.
* Interactive Parameter Input: Simplifies experimentation with filters and parameters to explore API behavior.
* Schema Visualization: Provides clear representations of request and response structures.
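For a rough illustration of consuming the API from code rather than the Try-It-Out UI, here is a minimal Python sketch; the base URL and the /codesets path are hypothetical placeholders to be replaced with the server URL and paths given in the OpenAPI specification.

```python
# Minimal sketch of calling the RDaaS API from code. The base URL and the
# /codesets path are hypothetical placeholders; the real server URL and
# paths are listed in the updated OpenAPI 3.0 specification.
import json
import urllib.request

BASE = "https://rdaas.example.statcan.gc.ca/v1"  # placeholder from the spec

with urllib.request.urlopen(BASE + "/codesets") as resp:  # hypothetical path
    codesets = json.load(resp)

for codeset in codesets[:10]:
    print(codeset)
```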
CC0 1.0 Universal Public Domain Dedication
https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Elementary services according to ISCED-2011 (International Standard Classification of Education, 2011) Level 0. This topic provides information about all children’s day care facilities in NRW.
The Census data API provides access to the most comprehensive set of data on current month and cumulative year-to-date imports using the Standard International Trade Classification (SITC) system. The SITC endpoint in the Census data API also provides value, shipping weight, and method of transportation totals at the district level for all U.S. trading partners. The Census data API will help users research new markets for their products, establish pricing structures for potential export markets, and conduct economic planning. If you have any questions regarding U.S. international trade data, please call us at 1(800)549-0595 option #4 or email us at eid.international.trade.data@census.gov.
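For orientation, a minimal Python sketch of a query against this endpoint follows, here for cumulative year-to-date values; the endpoint path follows the documented Census data API pattern, while the variable names are assumptions to verify against the endpoint's own variable listing.

```python
# Minimal sketch of a query against the Census data API's SITC imports
# endpoint for cumulative year-to-date values. The variable names
# (CTY_CODE, CTY_NAME, GEN_VAL_YR) are assumptions to verify against the
# endpoint's own variable listing.
import json
import urllib.parse
import urllib.request

BASE = "https://api.census.gov/data/timeseries/intltrade/imports/sitc"
params = {
    "get": "CTY_CODE,CTY_NAME,GEN_VAL_YR",  # assumed variable names
    "time": "2023-06",
}
with urllib.request.urlopen(BASE + "?" + urllib.parse.urlencode(params)) as resp:
    header, *rows = json.load(resp)  # first element is the header row
for row in rows[:5]:
    print(dict(zip(header, row)))
```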
CC0 1.0 Universal Public Domain Dedication
https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
In the context of the implementation of the EU Water Framework Directive, the first and further description of the groundwater bodies of Rhineland-Palatinate represents an inventory of the subsoil of the river basins in Rhineland-Palatinate, with the aim of recording those groundwater bodies at risk of not achieving the environmental objectives under Article 4 of the EU Water Framework Directive. The description is based on the Hydrogeological Overview Map of Germany (HÜK 200), established in 2001 by the State Geological Services of Germany (SGD) and the Federal Institute for Geosciences and Natural Resources (BGR) at a scale of 1:200,000 in the sheet cuts of the TK 200. The classification of the upper aquifer into aquifer types was based on the cavity type and the geochemical nature of the flowing aquifer (a combination of attributes). Geochemical conditions in the leachate zone are not taken into account. The contents correspond to the HÜK 200 of the BGR.
Attribution 4.0 (CC BY 4.0)
https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In recent years, with the development of the Internet, attribution classification of APT malware has remained an important problem. Existing methods do not consider the DLL link libraries and hidden file addresses used during execution, and they fall short in capturing the local and global correlation of event behaviors. Compared to the structural features of binary code, opcode features reflect the runtime instructions but do not account for the repeated reuse of local operation behaviors within the same APT organization, and attribution classification based on a single feature is more easily defeated by obfuscation techniques. To address these issues, (1) an event behavior graph based on API instructions and related operations is constructed to capture execution traces on the host using a GNN model; (2) ImageCNTM captures the local spatial correlation and continuous long-term dependencies of opcode images; (3) word-frequency and behavior features are concatenated and fused in a multi-feature, multi-input deep learning model. We collected a publicly available dataset of APT malware to evaluate our method. The attribution classification results of the single-feature models reached 89.24% and 91.91%, and the multi-feature fusion model achieved better classification performance than the single-feature classifiers.
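To make the fusion step concrete, here is a generic PyTorch sketch of a multi-input model that concatenates the three feature vectors before classification; it is not the paper's implementation, and all dimensions and layer choices are illustrative.

```python
# Generic sketch of the multi-input fusion idea, not the paper's
# implementation: three feature vectors (a graph embedding of event
# behaviors, a CNN embedding of opcode images, and word-frequency
# features) are concatenated and classified. All dimensions and layer
# choices are illustrative.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, graph_dim=128, image_dim=256, freq_dim=300, n_groups=12):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(graph_dim + image_dim + freq_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, n_groups),  # one logit per APT organization
        )

    def forward(self, graph_emb, image_emb, freq_feats):
        fused = torch.cat([graph_emb, image_emb, freq_feats], dim=-1)
        return self.head(fused)

model = FusionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 256), torch.randn(4, 300))
print(logits.shape)  # torch.Size([4, 12])
```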
Schools, including school types, contact details, further factual information and, if applicable, school districts (see specifications at https://www.gdi-suedhessen.de/fachthemen/pflichtenhefte/). Provided via the platform www.gdi-inspireumsetzer.de, a service of the GDI South Hesse.
Overview
This dataset of medical misinformation was collected and is published by the Kempelen Institute of Intelligent Technologies (KInIT). It consists of approx. 317k news articles and blog posts on medical topics published between January 1, 1998 and February 1, 2022 from a total of 207 reliable and unreliable sources. The dataset contains full-texts of the articles, their original source URL and other extracted metadata. If a source has a credibility score available (e.g., from Media Bias/Fact Check), it is also included in the form of an annotation. Besides the articles, the dataset contains around 3.5k fact-checks and extracted verified medical claims with their unified veracity ratings published by fact-checking organisations such as Snopes or FullFact. Lastly and most importantly, the dataset contains 573 manually and more than 51k automatically labelled mappings between previously verified claims and the articles; each mapping consists of two values: claim presence (i.e., whether a claim is contained in the given article) and article stance (i.e., whether the given article supports or rejects the claim or provides both sides of the argument).
The dataset is primarily intended to be used as a training and evaluation set for machine learning methods for claim presence detection and article stance classification, but it enables a range of other misinformation-related tasks, such as misinformation characterisation or analyses of misinformation spreading.
Its novelty and our main contributions lie in (1) the focus on medical news articles and blog posts, as opposed to social media posts or political discussions; (2) providing multiple modalities (besides full-texts of the articles, there are also images and videos), thus enabling research of multimodal approaches; (3) the mapping of articles to fact-checked claims (with manual as well as predicted labels); and (4) providing source credibility labels for 95% of all articles and other potential sources of weak labels that can be mined from the articles' content and metadata.
The dataset is associated with the research paper "Monant Medical Misinformation Dataset: Mapping Articles to Fact-Checked Claims" accepted and presented at ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '22).
The accompanying Github repository provides a small static sample of the dataset and the dataset's descriptive analysis in the form of Jupyter notebooks.
Options to access the dataset
There are two ways to access the dataset: a full static dump or the REST API.
To obtain access (either to the full static dump or to the REST API), please request it by following the instructions provided below.
References
If you use this dataset in any publication, project, tool or in any other form, please, cite the following papers:
@inproceedings{SrbaMonantPlatform,
  author    = {Srba, Ivan and Moro, Robert and Simko, Jakub and Sevcech, Jakub and Chuda, Daniela and Navrat, Pavol and Bielikova, Maria},
  booktitle = {Proceedings of Workshop on Reducing Online Misinformation Exposure (ROME 2019)},
  pages     = {1--7},
  title     = {Monant: Universal and Extensible Platform for Monitoring, Detection and Mitigation of Antisocial Behavior},
  year      = {2019}
}
@inproceedings{SrbaMonantMedicalDataset,
  author    = {Srba, Ivan and Pecher, Branislav and Tomlein, Matus and Moro, Robert and Stefancova, Elena and Simko, Jakub and Bielikova, Maria},
  booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '22)},
  numpages  = {11},
  title     = {Monant Medical Misinformation Dataset: Mapping Articles to Fact-Checked Claims},
  year      = {2022},
  doi       = {10.1145/3477495.3531726},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3477495.3531726}
}
Dataset creation process
In order to create this dataset (and to continuously obtain new data), we used our research platform Monant. The Monant platform provides so-called data providers to extract news articles/blogs from news/blog sites as well as fact-checking articles from fact-checking sites. General parsers (for RSS feeds, Wordpress sites, the Google Fact Check Tool, etc.) as well as custom crawlers and parsers were implemented (e.g., for the fact-checking site Snopes.com). All data is stored in a unified format in a central data storage.
Ethical considerations
The dataset was collected and is published for research purposes only. We collected only publicly available content of news/blog articles. The dataset contains identities of authors of the articles if they were stated in the original source; we left this information, since the presence of an author's name can be a strong credibility indicator. However, we anonymised the identities of the authors of discussion posts included in the dataset.
The main identified ethical issue related to the presented dataset lies in the risk of mislabelling of an article as supporting a false fact-checked claim and, to a lesser extent, in mislabelling an article as not containing a false claim or not supporting it when it actually does. To minimise these risks, we developed a labelling methodology and require an agreement of at least two independent annotators to assign a claim presence or article stance label to an article. It is also worth noting that we do not label an article as a whole as false or true. Nevertheless, we provide partial article-claim pair veracities based on the combination of claim presence and article stance labels.
As to the veracity labels of the fact-checked claims and the credibility (reliability) labels of the articles' sources, we take these from the fact-checking sites and external listings such as Media Bias/Fact Check as they are and refer to their methodologies for more details on how they were established.
Lastly, the dataset also contains automatically predicted labels of claim presence and article stance using our baselines described in the next section. These methods have their limitations and work with certain accuracy as reported in this paper. This should be taken into account when interpreting them.
Reporting mistakes in the dataset
The way to report considerable mistakes in raw collected data or in manual annotations is to create a new issue in the accompanying Github repository. Alternatively, general enquiries or requests can be sent to info [at] kinit.sk.
Dataset structure
Raw data
At first, the dataset contains so-called raw data (i.e., data extracted by the Web monitoring module of the Monant platform and stored in exactly the same form as they appear at the original websites). Raw data consist of articles from news sites and blogs (e.g., naturalnews.com), discussions attached to such articles, and fact-checking articles from fact-checking portals (e.g., snopes.com). In addition, the dataset contains feedback (number of likes, shares, and comments) provided by users on the social network Facebook, which is regularly extracted for all news/blog articles.
Raw data are contained in these CSV files (and corresponding REST API endpoints):
sources.csv
articles.csv
article_media.csv
article_authors.csv
discussion_posts.csv
discussion_post_authors.csv
fact_checking_articles.csv
fact_checking_article_media.csv
claims.csv
feedback_facebook.csv
Note: Personal information about discussion posts' authors (name, website, gravatar) is anonymised.
Annotations
Secondly, the dataset contains so-called annotations. Entity annotations describe individual raw-data entities (e.g., article, source). Relation annotations describe a relation between two such entities.
Each annotation is described by the following attributes:
category of annotation (annotation_category). Possible values: label (the annotation corresponds to ground truth determined by human experts) and prediction (the annotation was created by means of an AI method).
type of annotation (annotation_type_id). Example values: Source reliability (binary), Claim presence. The list of possible values can be obtained from the enumeration in annotation_types.csv.
method which created the annotation (method_id). Example values: Expert-based source reliability evaluation, Fact-checking article to claim transformation method. The list of possible values can be obtained from the enumeration in methods.csv.
its value (value). The value is stored in JSON format and its structure differs according to the particular annotation type.
At the same time, annotations are associated with a particular object identified by:
entity type (parameter entity_type in case of entity annotations, or source_entity_type and target_entity_type in case of relation annotations). Possible values: sources, articles, fact-checking-articles.
entity id (parameter entity_id in case of entity annotations, or source_entity_id and target_entity_id in case of relation annotations).
The dataset provides specifically these entity annotations:
Source reliability (binary). Determines the validity of a source (website) on a binary scale with two options: reliable source and unreliable source.
Article veracity. Aggregated information about veracity from article-claim pairs.
The dataset provides specifically these relation annotations:
Fact-checking article to claim mapping. Determines the mapping between a fact-checking article and a claim.
Claim presence. Determines the presence of a claim in an article.
Claim stance. Determines the stance of an article towards a claim.
Annotations are contained in these CSV files (and corresponding REST API endpoints):
entity_annotations.csv
relation_annotations.csv
Note: Identification of human annotators (email provided in the annotation app) is anonymised.
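As a starting point for working with these files, here is a minimal pandas sketch that pulls expert-assigned source annotations and joins them to the sources table; the column names follow the attribute descriptions above, while the id column assumed on sources.csv should be checked against the actual headers.

```python
# Minimal sketch for working with the annotation CSVs named above using
# pandas. Column names follow the attribute descriptions in this section
# (annotation_category, entity_type, entity_id, value); the id column
# assumed on sources.csv should be checked against the actual headers.
import json
import pandas as pd

entity_ann = pd.read_csv("entity_annotations.csv")
sources = pd.read_csv("sources.csv")

# Keep only human-expert ground truth ("label"), not model predictions.
labels = entity_ann[entity_ann["annotation_category"] == "label"]

# Source-level annotations attach to entities of type "sources".
src_ann = labels[labels["entity_type"] == "sources"].copy()
src_ann["value"] = src_ann["value"].apply(json.loads)  # value is stored as JSON

merged = src_ann.merge(sources, left_on="entity_id", right_on="id", how="left")
print(merged.head())
```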
Success.ai’s B2B Marketing Data API empowers marketing and sales teams to execute highly targeted and effective outreach campaigns. By providing on-demand access to over 70 million detailed business profiles worldwide, this API ensures your strategies are always guided by accurate, up-to-date information. From industry classifications and employee counts to firmographic and demographic insights, Success.ai’s B2B Marketing Data API enables you to zero in on the right businesses and decision-makers.
With robust filtering capabilities, continuously updated datasets, and AI-validated accuracy, you can confidently refine segments, tailor messaging, and drive higher engagement rates. Backed by our Best Price Guarantee, this solution is essential for achieving meaningful ROI in a competitive global marketplace.
Why Choose Success.ai’s B2B Marketing Data API?
Extensive Global Coverage
AI-Validated Accuracy
Robust Filtering Capabilities
Ethical and Compliant
Data Highlights:
Key Features of the B2B Marketing Data API:
On-Demand Data Enrichment
Flexible Integration Options
Granular Segmentation and Targeting
Real-Time Validation and Reliability
Strategic Use Cases:
Account-Based Marketing (ABM)
Market Expansion and Product Launches
Partnership Development and Channel Sales
Competitive Benchmarking and Market Research
Why Choose Success.ai?
Best Price Guarantee
Seamless Integration
Data Accuracy with AI Validation
Customizable and Scalable Solutions
CC0 1.0 Universal Public Domain Dedication
https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The flood hazard map HQ Extreme shows the extent of floods (flood areas and water depths) for events that, on statistical average, occur much less frequently than once every 100 years, i.e. a low-probability flood scenario. The water depth is shown in the hazard maps in five stages with different shades of blue. The same stages in shades of yellow characterise areas behind flood protection systems; this is intended to draw attention to the residual risk behind dams and dikes.
Attributes:
CLASS: depth class (1-5; 10-15 in protected areas)
DEPTH: depth class (text description)
WATER: water name
GEWKZ: water identification number according to LAWA
Scale limitation: Min not applicable, Max 1:3000.
CC0 1.0 Universal Public Domain Dedication
https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Map Service (WFS Group) provides the map bases of the Land Development Plan Environment (2004) and Settlement (2006) of the Saarland: a strongly generalised representation of the spatial categories core zone of the conurbation, fringe zone of the conurbation, and rural area, within the framework of the LEP Settlement 2006.
https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.7910/DVN/SOZSJA
Harvard Catalyst Profiles is a Semantic Web application, which means its content can be read and understood by other computer programs. This enables the data in profiles, such as addresses and publications, to be shared with other institutions and appear on other websites. If you click the "Export RDF" link on the left sidebar of a profile page, you can see what computer programs see when visiting a profile. The section below describes the technical details for building a computer program that can export data from Harvard Catalyst Profiles. There are four types of application programming interfaces (APIs) in Harvard Catalyst Profiles:
RDF crawl. Because Harvard Catalyst Profiles is a Semantic Web application, every profile has both an HTML page and a corresponding RDF document, which contains the data for that page in RDF/XML format. Web crawlers can follow the links embedded within the RDF/XML to access additional content.
SPARQL endpoint. SPARQL is a programming language that enables arbitrary queries against RDF data. This provides the most flexibility in accessing data; however, the downsides are the complexity of coding SPARQL queries and performance. In general, the XML Search API (see below) is better to use than SPARQL. However, if you require access to the SPARQL endpoint, please contact Griffin Weber.
XML Search API. This is a web service that provides support for the most common types of queries. It is designed to be easier to use and to offer better performance than SPARQL, but at the expense of fewer options. It enables full-text search across all entity types, faceting, pagination, and sorting options. The request message to the web service is in XML format, but the output is in RDF/XML format. The URL of the XML Search API is https://connects.catalyst.harvard.edu/API/Profiles/Public/Search.
Old XML-based web services. This provides backwards compatibility for institutions that built applications using the older version of Harvard Catalyst Profiles. These web services do not take advantage of many of the new features of Harvard Catalyst Profiles, and users are encouraged to switch to one of the new APIs. The URL of the old XML web service is https://connects.catalyst.harvard.edu/ProfilesAPI.
For more information about the APIs, please see the documentation and example files.
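As a rough illustration, here is a minimal Python sketch of a call to the XML Search API; the endpoint URL is the one given above, while the XML request body is a hypothetical placeholder, since the real schema is described in the documentation and example files.

```python
# Minimal sketch of a call to the XML Search API. The XML request body is
# a hypothetical placeholder; the real schema is described in the
# documentation and example files mentioned above.
import urllib.request

URL = "https://connects.catalyst.harvard.edu/API/Profiles/Public/Search"

# Hypothetical request body: a full-text search for "diabetes".
body = b"""<SearchOptions>
  <MatchOptions>
    <MatchText>diabetes</MatchText>
  </MatchOptions>
</SearchOptions>"""

req = urllib.request.Request(
    URL, data=body, headers={"Content-Type": "application/xml"}
)
with urllib.request.urlopen(req) as resp:
    rdf_xml = resp.read().decode("utf-8")  # output is RDF/XML
print(rdf_xml[:500])
```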
Vineyard soils (rigosols) are soils that have been significantly altered by human activity. The soils, which are highly differentiated according to structure and properties, were combined and systematised into guide soil forms. For a clear map presentation, the guide soil forms are divided into groups of uniform formation type or rock type. The fine-soil-type method is based on basic data from the Rhineland-Palatinate vineyard soil mapping and the classification of the soil mapping instructions (KA5). The fine soil types of the vineyard soil map have been reinterpreted by experts and adapted to the soil types of the KA5 that are common today. The fine soil types are shown in a separate map for the Rigol horizon layer or the subsoil layer. Variability of the characteristic expression at the surface or within a layer is indicated by hatching.