Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This list contains the government API cases collected, cleaned and analysed in the APIs4DGov study "Web API landscape: relevant general purpose ICT standards, technical specifications and terms".
The list does not represent a complete list of all government cases in Europe, as it is built to support the goals of the study and is limited to the analysis and data gathered from the following sources:
The EU open data portal
The European data portal
The INSPIRE catalogue
JoinUp: The API cases collected from the European Commission JoinUp platform
Literature-document review: the API cases gathered from the research activities of the study performed until the end of 2019
ProgrammableWeb: the ProgrammableWeb API directory
Smart 2015/0041: the database of 395 cases created by the study ‘The project Towards faster implementation and uptake of open government’ (SMART 2015/0041).
Workshops/meetings/interviews: a list of API cases collected in the workshops, surveys and interviews organised within the APIs4DGov study
Each API case is classified according to the following rationale:
Unique id: a unique key of each case, obtained by concatenating the following fields: (Country Code) + (Governmental level) + (Name Id) + (Type of API)
API Country or type of provider: the country in which the API case has been published
API provider: the specific provider that published and maintains the API case
Name Id: an acronym of the name of the API case (it may not be unique)
Short description
Type of API: (i) API registry: a set, catalogue, registry or directory of APIs; (ii) API platform: a platform that supports the use of APIs; (iii) API tool: a tool used to manage APIs; (iv) API standard: a set of standards related to government APIs; (v) Data catalogue: an API published to access metadata of datasets, normally published by a data catalogue; (vi) Specific API: a unique API (possibly with many endpoints) built for a specific purpose
Number of APIs: normally one; in the case of an API registry, the number of APIs published by the registry as of 31/12/2019
Theme: list of domains related to the API case (controlled vocabulary)
Governmental level: the geographical scope of the API (city, regional, national or international)
Country code: the country's two-letter code
Source: the source (among those listed above) from which the API case was gathered
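The Unique id rule above can be sketched as a small helper. The field order is taken from the description; the separator and the sample values are assumptions, since the dataset description does not fix a delimiter:

```python
def build_unique_id(country_code, governmental_level, name_id, type_of_api):
    """Concatenate the four classification fields into a case key.

    The '-' separator is an assumption; the source only specifies the
    concatenation order: (Country Code) + (Governmental level) +
    (Name Id) + (Type of API).
    """
    return "-".join([country_code, governmental_level, name_id, type_of_api])

# Hypothetical example case:
uid = build_unique_id("LU", "national", "dcat", "Data catalogue")
```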
This dataset lists all software in use by NASA.
WONDER online databases include:
county-level Compressed Mortality (death certificates) since 1979
county-level Multiple Cause of Death (death certificates) since 1999
county-level Natality (birth certificates) since 1995
county-level Linked Birth / Death records (linked birth-death certificates) since 1995
state & large metro-level United States Cancer Statistics mortality (death certificates) since 1999
state & large metro-level United States Cancer Statistics incidence (cancer registry cases) since 1999
state and metro-level Online Tuberculosis Information System (TB case reports) since 1993
state-level Sexually Transmitted Disease Morbidity (case reports) since 1984
state-level Vaccine Adverse Event Reporting System (adverse reaction case reports) since 1990
county-level population estimates since 1970
The WONDER web server also hosts:
the Data2010 system with state-level data for compliance with Healthy People 2010 goals since 1998
the National Notifiable Disease Surveillance System weekly provisional case reports since 1996
the 122 Cities Mortality Reporting System weekly death reports since 1996
the Prevention Guidelines database (book in electronic format) published 1998
the Scientific Data Archives (public use data sets and documentation)
links to other online data sources on the "Topics" page
Information and links for developers to work with real-time and static transportation data.
The ITA Taxonomies API gives developers direct access to the exporting, trade, and investment terms that ITA uses to tag the content and data in its other APIs. Currently, ITA has three taxonomies: Geographic Regions, Industries, and Topics. This API includes all terms in their proper hierarchy in the relevant taxonomy. ITA imports data for its other APIs from many sources. If the source data is already tagged, ITA does the following:
- Imports those tags (terms) along with the data
- Maps the terms to ITA's taxonomies
- Publishes both the original terms and the ITA terms with the data in the API
The output format for this API is JSON. This data set is updated hourly.
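The import/map/publish steps above can be illustrated with a toy sketch. The source terms and the mapping table here are invented for illustration; only the overall flow (keep original tags, map them, publish both) comes from the description:

```python
# Hypothetical mapping from source-data tags to ITA taxonomy terms.
ita_term_map = {"EU": "Europe", "Autos": "Automotive"}

def publish_terms(source_terms):
    """Return both the original terms and the mapped ITA terms,
    mirroring the publish step described above. Unmapped terms are
    passed through unchanged (an assumption, not documented)."""
    return {
        "original_terms": source_terms,
        "ita_terms": [ita_term_map.get(t, t) for t in source_terms],
    }

record = publish_terms(["EU", "Autos"])
```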
OpenWeb Ninja's Google Images Data (Google SERP Data) API provides real-time image search capabilities for images sourced from all public sources on the web.
The API enables you to search and access more than 100 billion images from across the web including advanced filtering capabilities as supported by Google Advanced Image Search. The API provides Google Images Data (Google SERP Data) including details such as image URL, title, size information, thumbnail, source information, and more data points. The API supports advanced filtering and options such as file type, image color, usage rights, creation time, and more. In addition, any Advanced Google Search operators can be used with the API.
OpenWeb Ninja's Google Images Data & Google SERP Data API common use cases:
Creative Media Production: Enhance digital content with a vast array of real-time images, ensuring engaging and brand-aligned visuals for blogs, social media, and advertising.
AI Model Enhancement: Train and refine AI models with diverse, annotated images, improving object recognition and image classification accuracy.
Trend Analysis: Identify emerging market trends and consumer preferences through real-time visual data, enabling proactive business decisions.
Innovative Product Design: Inspire product innovation by exploring current design trends and competitor products, ensuring market-relevant offerings.
Advanced Search Optimization: Improve search engines and applications with enriched image datasets, providing users with accurate, relevant, and visually appealing search results.
OpenWeb Ninja's Google Images Data & Google SERP Data API stats and capabilities:
100B+ Images: Access an extensive database of over 100 billion images.
Images Data from all Public Sources (Google SERP Data): Benefit from a comprehensive aggregation of image data from various public websites, ensuring a wide range of sources and perspectives.
Extensive Search and Filtering Capabilities: Utilize advanced search operators and filters to refine image searches by file type, color, usage rights, creation time, and more, making it easy to find exactly what you need.
Rich Data Points: Each image comes with more than 10 data points, including URL, title (annotation), size information, thumbnail, and source information, providing a detailed context for each image.
BatchData is used by lead generation, product, operations, and acquisitions teams to power websites, fuel applications, build lists, enrich data, and improve data governance. A suite of APIs and self-service list building platforms provide access to 150M+ residential properties.
Residential Real Estate Data includes:
- Property Address Information
- Assessment Details
- Building Characteristics
- Demographics
- Foreclosure
- Occupancy/Vacancy
- Involuntary Liens
- MLS & Agent Arrays
- Owner Names & Mailing Address
- Property Owner Profiles
- Current & Prior Sales
- Tax Information
- Valuation & Equity
Real Estate Data APIs include:
- Residential Property Search
- Residential Property Lookup
- Residential Address Verification
- Residential Property Skip Trace
- Geocoding
BatchData's robust data science team curates over a dozen primary and secondary tier 1 data sources to offer unparalleled database depth, accuracy, and completeness.
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘List of government APIs’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from http://data.europa.eu/88u/dataset/45ca8d82-ac31-4360-b3a1-ba43b0b07377 on 11 January 2022.
This API provides international data on energy sources (e.g., coal, electricity, natural gas, petroleum, renewables) and activities (e.g., consumption, imports, exports, carbon emissions, prices, production). Users of the EIA API are required to obtain an API Key via this registration form: http://www.eia.gov/beta/api/register.cfm
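Once registered, the key is typically passed as a query parameter. A minimal sketch of building such a request URL follows; the route and the series id are assumptions for illustration and should be checked against the EIA API documentation:

```python
from urllib.parse import urlencode

def eia_request_url(api_key, series_id):
    """Build a key-in-query request URL for the EIA API.

    The '/series/' route and the series id below are hypothetical
    examples of the pattern, not taken from this dataset description;
    consult the EIA docs after obtaining a key.
    """
    base = "https://api.eia.gov/series/"
    return base + "?" + urlencode({"api_key": api_key, "series_id": series_id})

url = eia_request_url("YOUR_API_KEY", "EXAMPLE.SERIES.ID")
```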
Xavvy fuel is the leading source for location data and market insights worldwide. We specialize in data quality and enrichment, providing high-quality POI data for restaurants and quick-service establishments in the United States.
Base data • Name/Brand • Address • Geocoordinates • Opening Hours • Phone • ...
30+ Services • Delivery • Wifi • ChargePoints • …
10+ Payment options • Visa • MasterCard • Google Pay • individual Apps • ...
Our data offering is highly customizable and flexible in delivery – whether one-time or regular data delivery, push or pull services, and various data formats – we adapt to our customers' needs.
Brands included: • McDonald's • Burger King • Subway • KFC • Wendy's • ...
The total number of restaurants per region, market share distribution among competitors, or the ideal location for new branches – our restaurant data provides valuable insights into the food service market and serves as the perfect foundation for in-depth analyses and statistics. Our data helps businesses across various industries make informed decisions regarding market development, expansion, and competitive strategies. Additionally, our data contributes to the consistency and quality of existing datasets. A simple data mapping allows for accuracy verification and correction of erroneous entries.
Especially when displaying information about restaurants and fast-food chains on maps or in applications, high data quality is crucial for an optimal customer experience. Therefore, we continuously optimize our data processing procedures: • Regular quality controls • Geocoding systems to refine location data • Cleaning and standardization of datasets • Consideration of current developments and mergers • Continuous expansion and cross-checking of various data sources
Integrate the most comprehensive database of restaurant locations in the USA into your business. Explore our additional data offerings and gain valuable market insights directly from the experts!
Given a known property address as input, BatchData's Property Data Lookup API instantly returns information on the property, its ownership, property listings data, and transactional history.
BatchData's robust data science team curates over a dozen primary and secondary tier 1 data sources to offer unparalleled database depth, accuracy, and completeness.
https://www.datainsightsmarket.com/privacy-policy
The global Sports Data API Interface market is experiencing robust growth, driven by the increasing popularity of sports betting, fantasy sports, and the broader digitalization of the sports industry. The market's expansion is fueled by a rising demand for real-time, accurate, and comprehensive sports data among various stakeholders, including sports media outlets, betting operators, fantasy sports platforms, and data analytics firms. Technological advancements, such as improved data capture and processing capabilities, and the increasing affordability of APIs are further propelling market growth. Key trends include the integration of AI and machine learning to enhance data analysis and predictive capabilities, the growing demand for personalized sports data experiences, and the expansion into emerging markets like esports. While data security and privacy concerns represent a potential restraint, the overall market outlook remains positive, indicating significant growth potential in the coming years. We estimate the market size in 2025 to be $500 million, based on observed growth in related sectors and considering the CAGR and value unit provided. Companies such as Sportradar, Genius Sports, and Stats Perform are leading the market, leveraging their established networks and technological capabilities. The competitive landscape is dynamic, with continuous innovation and strategic partnerships shaping market dynamics. Further segmentation by sports type (e.g., football, basketball, baseball) and data type (e.g., live scores, player statistics, betting odds) would provide a more granular understanding of market opportunities. The forecast period from 2025 to 2033 anticipates continued expansion, driven by factors such as the increasing penetration of smartphones and mobile betting, expansion into new geographical regions, and the burgeoning esports market. 
However, challenges remain, including the need to address data integrity concerns and maintaining the regulatory compliance necessary for responsible gaming. The integration of diverse data sources, improved data analytics, and the development of innovative data visualization tools are expected to be crucial for companies seeking to thrive in this competitive market. Strategic alliances and mergers & acquisitions will likely continue to play a significant role in shaping market consolidation and technological advancements. Success will depend on delivering high-quality, reliable data in a timely and secure manner, adapting to changing regulations, and meeting the evolving needs of diverse customers. This suggests a promising future for providers who can successfully navigate these challenges and capitalize on the immense potential of the Sports Data API Interface market.
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset contains realistic traces of topics released by Chrome's Topics API across 4 weeks. Traces are simulated for 10 million fake users to match differentially private statistics computed on real browsing behavior. Full details of the dataset generation can be found in [1]. The code that generated the dataset can be found here.
[1] Differentially Private Synthetic Data Release for Topics API Outputs, Travis Dick et al. Proceedings of KDD 2025. Toronto, Canada.
This is an application programming interface (API) that opens up core EU legislative data for further use. The interface uses JSON, meaning that you have easy-to-use, machine-readable access to metadata on European Union legislation. It will be useful if you want to use or analyse European Union legislative data in a way that the official databases were not originally built for. The API extracts, organises and connects data from various official sources.
The API is based on the most important official EU-databases (EUR-Lex, PreLex and Council public votes).
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Data.public.lu provides all its metadata in the DCAT and DCAT-AP formats, i.e. all data about the data stored or referenced on data.public.lu. DCAT (Data Catalog Vocabulary) is a specification designed to facilitate interoperability between data catalogs published on the Web. This specification has been extended via the DCAT-AP (DCAT Application Profile for data portals in Europe) standard, specifically for data portals in Europe. The serialisation of those vocabularies is mainly done in RDF (Resource Description Framework). The implementation of data.public.lu is based on that of the open source udata platform. This API enables the federation of multiple data portals; for example, all the datasets published on data.public.lu are also published on data.europa.eu. The DCAT API from data.public.lu is used by the European data portal to federate its metadata. The DCAT standard is thus very important to guarantee interoperability between all data portals in Europe.
Usage
Full catalog
You can find here a few examples using the curl command line tool. To get all the metadata from the whole catalog hosted on data.public.lu:
curl https://data.public.lu/catalog.rdf
Metadata for an organization
To get the metadata of a specific organization, you first need to find its ID. The ID of an organization is the last part of its URL. For the organization "Open data Lëtzebuerg", its URL is https://data.public.lu/fr/organizations/open-data-letzebuerg/ and its ID is open-data-letzebuerg. To get all the metadata for a given organization, call the following URL, where {id} has been replaced by the correct ID: https://data.public.lu/api/1/organizations/{id}/catalog.rdf
Example: curl https://data.public.lu/api/1/organizations/open-data-letzebuerg/catalog.rdf
Metadata for a dataset
To get the metadata of a specific dataset, you first need to find its ID. The ID of a dataset is the last part of its URL. For the dataset "Digital accessibility monitoring report - 2020-2021", its URL is https://data.public.lu/fr/datasets/digital-accessibility-monitoring-report-2020-2021/ and its ID is digital-accessibility-monitoring-report-2020-2021. To get all the metadata for a given dataset, call the following URL, where {id} has been replaced by the correct ID: https://data.public.lu/api/1/datasets/{id}/rdf
Example: curl https://data.public.lu/api/1/datasets/digital-accessibility-monitoring-report-2020-2021/rdf
Compatibility with DCAT-AP 2.1.1
The DCAT-AP standard is in constant evolution, so the compatibility of the implementation should be regularly compared with the standard and adapted accordingly. In May 2023, we performed this comparison, and the result is available in the resources below (see the document named "udata 6 dcat-ap implementation status"). In the DCAT-AP model, classes and properties have a priority level which should be respected in every implementation: mandatory, recommended and optional. Our goal is to implement all mandatory classes and properties, and if possible all recommended classes and properties which make sense in the context of our open data portal.
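The documented URL patterns can be wrapped in small helpers; the fetching itself is left to curl or any HTTP client. Only the URL templates come from the description above:

```python
def organization_catalog_url(org_id):
    """URL for an organization's full DCAT catalog, per the
    documented pattern with {id} replaced by the organization ID."""
    return f"https://data.public.lu/api/1/organizations/{org_id}/catalog.rdf"

def dataset_rdf_url(dataset_id):
    """URL for a single dataset's RDF metadata, per the documented
    pattern with {id} replaced by the dataset ID."""
    return f"https://data.public.lu/api/1/datasets/{dataset_id}/rdf"

url = organization_catalog_url("open-data-letzebuerg")
```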
ETH Library operates an API platform that facilitates access to our extensive data resources for developers. The Developer Portal provides developers with more efficient and effective ways to access the rich data in our library applications, develop innovative projects, and collaborate with partners both inside and outside ETH Zurich. The Developer Portal is the gateway to the growing number of APIs of the ETH Library; it is open to all interested persons and provides access and documentation according to the OpenAPI 3.0 specification. After a one-time registration, the portal provides users with one or more API Keys, which give immediate access to the APIs of the API platform.
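Key-based access of the kind described can be sketched as follows. The header name is a common convention, not taken from the ETH Library documentation; the portal's OpenAPI spec defines the actual scheme:

```python
def auth_headers(api_key):
    """Headers for an API-key-authenticated request.

    'X-API-Key' is a widespread convention for key-in-header APIs,
    assumed here for illustration; check the Developer Portal's
    OpenAPI 3.0 documentation for the real header or parameter name.
    """
    return {"X-API-Key": api_key, "Accept": "application/json"}

headers = auth_headers("YOUR_API_KEY")
```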
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This table provides information about data sources related to two use cases from urban and construction research, including information on: use case, name of the data source, publisher, type of providing institution, type of source, paywalls, download/API availability, data format, and whether the data is structured or unstructured. The data was collected and analyzed in the context of the DFG-funded "Fachinformationsdienst BAUdigital" and used for an analysis of the data sources used in the research fields of urban and construction research. The sources used for this work can be found in the section "References". The written analysis is currently under review and will be published at the "Netzwerk Architekturwissenschaft" 8. Forum Architekturwissenschaft "The Power of Sources" (https://architekturwissenschaft.net/).
Attribution 2.5 Generic (CC BY 2.5): https://creativecommons.org/licenses/by/2.5/
License information was derived automatically
Search API for looking up addresses and roads within the catchment. The API can search for both address and road, or either. This dataset is updated weekly from VicMap Roads and Addresses, sourced via www.data.vic.gov.au.
Use
The Search API uses a data.gov.au datastore and allows a user to take full advantage of full text search functionality. An sql attribute is passed to the URL to define the query against the API. Please note that the attribute must be URL encoded. The sql statement takes the form below:
SELECT distinct display, x, y FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a" WHERE _full_text @@ to_tsquery(replace('[term]', ' ', ' %26 ')) LIMIT 10
The above will select the top 10 results from the API matching the input 'term', and return the display name as well as an x and y coordinate. The full URL for the above query would be:
https://data.gov.au/api/3/action/datastore_search_sql?sql=SELECT display, x, y FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a" WHERE _full_text @@ to_tsquery(replace('[term]', ' ', ' %26 ')) LIMIT 10
Fields
Any field in the source dataset can be returned via the API. Display, x and y are used in the example above, but any other field can be returned by altering the select component of the sql statement. See examples below.
Filters
Search data sources and LGA can also be used to filter results. When not using a filter, the API defaults to using all records. See examples below.
Source Dataset
A filter can be applied to select for a particular source dataset using the 'src' field. The currently available datasets are as follows:
1 for Roads
2 for Address
3 for Localities
4 for Parcels (CREF and SPI)
5 for Localities (Propnum)
Local Government Area
Filters can be applied to select for a specific local government area using the 'lga_code' field. LGA codes are derived from Vicmap LGA datasets. Wimmera LGAs include:
332 Horsham Rural City Council
330 Hindmarsh Shire Council
357 Northern Grampians Shire Council
371 West Wimmera Shire Council
378 Yarriambiack Shire Council
Examples
Search for the top 10 addresses and roads with the word 'darlot' in their names:
SELECT distinct display, x, y FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a" WHERE _full_text @@ to_tsquery(replace('darlot', ' ', ' & ')) LIMIT 10
Search for all roads with the word 'perkins' in their names:
SELECT distinct display, x, y FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a" WHERE _full_text @@ to_tsquery(replace('perkins', ' ', ' %26 ')) AND src=1
Search for all addresses with the word 'kalimna' in their names, within Horsham Rural City Council:
SELECT distinct display, x, y FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a" WHERE _full_text @@ to_tsquery(replace('kalimna', ' ', ' %26 ')) AND src=2 AND lga_code=332
Search for the top 10 addresses and roads with the word 'green' in their names, returning just their display name, locality, x and y:
SELECT distinct display, locality, x, y FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a" WHERE _full_text @@ to_tsquery(replace('green', ' ', ' %26 ')) LIMIT 10
Search all addresses in Hindmarsh Shire:
SELECT distinct display, locality, x, y FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a" WHERE lga_code=330
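The query construction above can be sketched in Python. This variant joins multi-word terms with '&' in Python rather than via SQL replace(), and lets urlencode handle the URL escaping (the '%26' in the documented examples is simply the pre-encoded form of '&'); the resource ID and endpoint come from the description:

```python
from urllib.parse import urlencode

DATASTORE = "https://data.gov.au/api/3/action/datastore_search_sql"
RESOURCE = "4bf30358-6dc6-412c-91ee-a6f15aaee62a"

def search_sql(term, limit=10):
    """Build the documented full-text search SQL for a search term."""
    ts_term = term.replace(" ", " & ")  # to_tsquery expects '&' between words
    return (
        f'SELECT distinct display, x, y FROM "{RESOURCE}" '
        f"WHERE _full_text @@ to_tsquery('{ts_term}') LIMIT {limit}"
    )

def search_url(term, limit=10):
    """URL-encode the sql attribute as the API requires."""
    return DATASTORE + "?" + urlencode({"sql": search_sql(term, limit)})

url = search_url("darlot")
```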
The US Census Bureau conducts the American Community Survey (ACS) 1-year and 5-year surveys, which record various demographics and provide public access through APIs. I have attempted to call the APIs through the Python environment using the requests library, then clean and organize the data into a usable format.
ACS Subject data [2011-2019] was accessed using Python by following the below API Link:
https://api.census.gov/data/2011/acs/acs1?get=group(B08301)&for=county:*
The data was obtained in JSON format by calling the above API, then imported as a Python Pandas Dataframe. The 84 variables returned have 21 Estimate values for various metrics, 21 pairs of respective Margin of Error values, and respective Annotation values for the Estimate and Margin of Error values. This data then went through various cleaning processes in Python, where excess variables were removed and the column names were renamed. Web scraping was carried out to extract the variables' names and replace the codes in the column names in the raw data.
The above step was carried out for multiple ACS/ACS-1 datasets spanning 2011-2019, which were then merged into a single Python Pandas Dataframe. The columns were rearranged, and the "NAME" column was split into two columns, 'StateName' and 'CountyName'. Counties for which no data was available were removed from the Dataframe. Once the Dataframe was ready, it was separated into two new dataframes, for State and County data respectively, and exported into '.csv' format.
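The NAME-splitting step above can be sketched without the full ACS download. ACS county-level NAME values take the 'County, State' form; the sample value below is illustrative, not taken from this dataset:

```python
def split_name(name):
    """Split an ACS 'NAME' value into CountyName and StateName.

    Splits on the first comma only, so county names containing
    commas are not mangled. The sample row is hypothetical.
    """
    county, state = [part.strip() for part in name.split(",", 1)]
    return {"CountyName": county, "StateName": state}

row = split_name("Autauga County, Alabama")
```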
More information about the source of Data can be found at the URL below:
US Census Bureau. (n.d.). About: Census Bureau API. Retrieved from Census.gov
https://www.census.gov/data/developers/about.html
I hope this data helps you to create something beautiful and awesome. I will be posting a lot more databases shortly if I get more time from assignments, submissions, and semester projects 🧙🏼♂️. Good luck.
https://dataintelo.com/privacy-and-policy
The global API management market size was USD 4.93 Billion in 2023 and is likely to reach USD 55.10 Billion by 2032, expanding at a CAGR of 30.76% during 2024–2032. The market growth is attributed to the surging adoption of microservices architecture and increasing digitalization.
Increasing digitalization is expected to boost the demand for API management. Businesses often need to integrate various systems, applications, and data sources as they become digital. APIs are a key tool for this integration, and API management solutions help ensure these APIs are secure, reliable, and efficient. Therefore, the rising digitalization is propelling the market.
API management is widely used by large and small enterprises as it simplifies the process of integrating various software applications, allowing different systems to communicate and share data seamlessly. Additionally, API management solutions provide features such as encryption, identity verification, and threat protection to ensure the secure exchange of data, which increases their adoption in large and small enterprises.
Artificial Intelligence (AI) is revolutionizing the API management market by introducing automation and predictive analytics into the mix. AI-powered API management tools automate routine tasks, such as monitoring and managing API traffic, thereby reducing the workload on IT teams. These tools predict potential issues before they occur, allowing businesses to proactively address them and ensure uninterrupted service. Furthermore, AI enhances API security by detecting unusual patterns and potential threats, thereby protecting sensitive data from breaches. AI's ability to analyze large volumes of data further helps businesses gain valuable insights into API performance and user behavior, which are used to improve the API's functionality and the overall user experience. Thus, AI is making API management not only efficient but also intelligent and proactive.