DataForSEO Labs API offers three powerful keyword research algorithms and historical keyword data:
• Related Keywords from the “searches related to” element of Google SERP.
• Keyword Suggestions that match the specified seed keyword with additional words before, after, or within the seed key phrase.
• Keyword Ideas that fall into the same category as the specified seed keywords.
• Historical Search Volume with current cost-per-click and competition values.
Based on in-market categories of Google Ads, you can get keyword ideas from the relevant Categories For Domain and discover relevant Keywords For Categories. You can also obtain Top Google Searches with AdWords and Bing Ads metrics, product categories, and Google SERP data.
You will find well-rounded ways to scout the competitors:
• Domain Whois Overview with ranking and traffic info from organic and paid search.
• Ranked Keywords that any domain or URL has positions for in SERP.
• SERP Competitors and the rankings they hold for the keywords you specify.
• Competitors Domain with a full overview of its rankings and traffic from organic and paid search.
• Domain Intersection keywords for which both specified domains rank within the same SERPs.
• Subdomains for the target domain you specify, along with the ranking distribution across organic and paid search.
• Relevant Pages of the specified domain with rankings and traffic data.
• Domain Rank Overview with ranking and traffic data from organic and paid search.
• Historical Rank Overview with historical data on rankings and traffic of the specified domain from organic and paid search.
• Page Intersection keywords for which the specified pages rank within the same SERP.
All DataForSEO Labs API endpoints function in Live mode: results are returned in the response immediately after you send the necessary parameters in a POST request.
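For illustration, here is a minimal Python sketch of a Live-mode call; the related_keywords endpoint path and the task-array payload format follow DataForSEO's v3 conventions, but the exact field names should be verified against the current documentation.

```python
# Minimal sketch of a Live-mode call to DataForSEO Labs.
# The endpoint path and payload fields below are assumptions based on
# the v3 API conventions; verify them against the current docs.
import requests

API_URL = "https://api.dataforseo.com/v3/dataforseo_labs/google/related_keywords/live"

payload = [{
    "keyword": "keyword research",      # seed keyword
    "location_name": "United States",
    "language_name": "English",
}]

# DataForSEO uses HTTP Basic auth with your account login and password.
response = requests.post(API_URL, auth=("login", "password"), json=payload)
response.raise_for_status()

# Live mode returns results directly in the response body.
for task in response.json().get("tasks", []):
    print(task.get("status_code"), task.get("result"))
```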
The limit is 2000 API calls per minute; however, you can contact our support team if your project requires higher rates.
We offer well-rounded API documentation, GUI for API usage control, comprehensive client libraries for different programming languages, free sandbox API testing, ad hoc integration, and deployment support.
We have a pay-as-you-go pricing model. You simply add funds to your account and use them to get data. The account balance doesn't expire.
https://developer.spotify.com/terms/
A dataset providing detailed information about Spotify's Web API endpoints including metadata access, playback control, and playlist management.
This repository contains the datasets and evaluation results of our study. For a detailed overview regarding the provided materials, please refer to README.md.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This list contains the government API cases collected, cleaned and analysed in the APIs4DGov study "Web API landscape: relevant general purpose ICT standards, technical specifications and terms".
The list does not represent a complete list of all government cases in Europe, as it is built to support the goals of the study and is limited to the analysis and data gathered from the following sources:
The EU open data portal
The European data portal
The INSPIRE catalogue
JoinUp: The API cases collected from the European Commission JoinUp platform
Literature-document review: the API cases gathered from the research activities of the study performed until the end of 2019
ProgrammableWeb: the ProgrammableWeb API directory
Smart 2015/0041: the database of 395 cases created by the study ‘The project Towards faster implementation and uptake of open government’ (SMART 2015/0041).
Workshops/meetings/interviews: a list of API cases collected in the workshops, surveys and interviews organised within the APIs4DGov study
Each API case is classified according to the following rationale:
Unique id: a unique key for each case, obtained by concatenating the following fields: (Country Code) + (Governmental level) + (Name Id) + (Type of API); see the sketch after this list
API Country or type of provider: the country in which the API case has been published
API provider: the specific provider that published and maintains the API case
Name Id: an acronym of the name of the API case (it may not be unique)
Short description
Type of API: (i) API registry: a set, catalogue, registry or directory of APIs; (ii) API platform: a platform that supports the use of APIs; (iii) API tool: a tool used to manage APIs; (iv) API standard: a set of standards related to government APIs; (v) Data catalogue: an API published to access metadata of datasets, normally published by a data catalogue; (vi) Specific API: a unique API (which can have many endpoints) built for a specific purpose
Number of APIs: normally one; in the case of an API registry, the number of APIs published by the registry as of 31/12/2019
Theme: list of domains related to the API case (controlled vocabulary)
Governmental level: the geographical scope of the API (city, regional, national or international)
Country code: the two-letter country code
Source: the source (among those listed above) from which the API case was gathered
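As a rough illustration of the Unique id rationale above, here is a Python sketch; the separator and the example field values are assumptions for illustration, not taken from the study.

```python
# Hypothetical sketch of composing the "Unique id" field from the
# other fields described above; separator and values are illustrative.
def unique_id(country_code: str, gov_level: str, name_id: str, api_type: str) -> str:
    # (Country Code) + (Governmental level) + (Name Id) + (Type of API)
    return "-".join([country_code, gov_level, name_id, api_type])

# Example: a hypothetical national "Specific API" case from Italy.
print(unique_id("IT", "national", "ANPR", "Specific API"))
```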
CIMIS data is available to the public free of charge via a web Application Programming Interface (API). The CIMIS Web API delivers data over the REST protocol from an enterprise production platform. The system provides reference evapotranspiration (ETo) and weather data from the CIMIS Weather Station Network and the Spatial CIMIS System. Spatial CIMIS provides daily maps of ETo and solar radiation (Rs) data on a 2-km grid by coupling remotely sensed satellite data with point measurements from the CIMIS weather stations. In summary, the data provided through the CIMIS Web API comprises a) weather and ETo data registered at the CIMIS Weather Station Network (more than 150 stations located throughout the state of California) and b) Spatial CIMIS System data that provides statewide ETo and solar radiation (Rs) data as well as averaged ETo by zip code. The RESTful HTTP services reach a broader range of clients, including Wi-Fi aware irrigation smart controllers as well as browser and mobile applications, all while expanding the delivery options by providing data in either JSON or XML formats.
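As a hedged illustration, a daily-data request might look like the following in Python; the /api/data path and the parameter names shown (appKey, targets, startDate, endDate) should be confirmed against the CIMIS Web API documentation.

```python
# Sketch of a CIMIS Web API request for daily station data.
# Endpoint path and parameter names are assumptions to verify
# against the official CIMIS documentation.
import requests

resp = requests.get(
    "https://et.water.ca.gov/api/data",
    params={
        "appKey": "YOUR_APP_KEY",   # key issued on (free) registration
        "targets": "2,8,127",       # station numbers or zip codes
        "startDate": "2023-01-01",
        "endDate": "2023-01-07",
    },
    headers={"Accept": "application/json"},  # XML is also supported
)
resp.raise_for_status()
print(resp.json())
```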
https://www.archivemarketresearch.com/privacy-policy
The market for online software documentation tools is experiencing robust growth, driven by the increasing adoption of cloud-based solutions, the need for improved collaboration among development teams, and the rising demand for accessible and user-friendly documentation. The market size in 2025 is estimated at $5 billion, exhibiting a Compound Annual Growth Rate (CAGR) of 15% from 2025 to 2033. This substantial growth reflects a significant shift towards digital documentation solutions, replacing traditional methods. Factors such as enhanced version control, simplified knowledge management, and seamless integration with other software development tools contribute significantly to this upward trajectory.

The competitive landscape is diverse, with established players like Atlassian and newcomers alike vying for market share. The segment encompassing collaborative platforms is anticipated to dominate, given the rising importance of real-time co-authoring and streamlined feedback mechanisms within agile development methodologies. Furthermore, the increasing adoption of API documentation tools reflects the growth of microservices architectures and the importance of efficient API integration.

This growth is further fueled by trends such as the increasing adoption of DevOps practices, which necessitates efficient documentation workflows, and the rising demand for self-service support portals to reduce reliance on technical support teams. While the market faces restraints such as the initial investment costs associated with migrating to new platforms and the need for adequate training, these hurdles are progressively being overcome by user-friendly interfaces and readily available resources. The continued expansion into emerging markets and the integration of advanced features like AI-powered search and automated content generation will further contribute to sustained market growth in the coming years. The market's fragmentation, however, presents both opportunities and challenges for competitors, requiring strategic differentiation and targeted marketing efforts.
Eminem is one of the most influential hip-hop artists of all time, and the Rap God. I acquired this data using Spotify APIs and supplemented it with other research to add to my own analysis. You can find my original analysis here: https://kaivalyapowale.com/2020/01/25/eminems-album-trends-and-music-to-be-murdered-by-2020/
My analysis was also published by top hip-hop websites:
HipHop 24x7 - Data analysis reveals M2BMB is the most negative album
Eminem Pro - Album's data analysis
Eminem Pro - Eminem's albums are getting shorter
You can also check out visualizations on Tableau Public for some ideas: https://public.tableau.com/profile/kaivalya.powale#!/
I have primarily used data from Spotify’s API using multiple endpoints for albums and tracks. I supplemented the data with stats from Billboard and calculations from this post.
Here's the explanation for all the audio features provided by Spotify!
I have researched data about album sales from multiple sources online. They are cited in my original analysis.
Here are Spotify's Album endpoints. Charts data from Billboard. Swear data from this source.
I'd love to see new visualizations using this data, or analyses using the sales, swear, or duration data. It would be wonderful if someone compared this with other hip-hop greats.
API Management Market Size 2025-2029
The API management market size is forecast to increase by USD 3.75 billion at a CAGR of 12.3% between 2024 and 2029.
The market is experiencing significant growth, driven by the increasing adoption of digital payment solutions and the proliferation of digital wallets. However, challenges persist, including poor internet connectivity in developing countries, which can hinder the adoption and effective implementation of API management solutions. Companies must navigate these challenges to capitalize on the market's potential. Strategies such as investing in offline solutions and partnering with local providers can help overcome connectivity issues and expand market reach.
Additionally, focusing on security and scalability will be crucial, as businesses demand reliable and secure API management solutions to support their digital initiatives. These trends reflect the digital transformation underway in various industries, as businesses seek to enhance customer experience and streamline operations. Overall, the market presents opportunities for innovation and growth, with companies that address the unique challenges of this dynamic landscape poised to succeed.
What will be the Size of the API Management Market during the forecast period?
Explore in-depth regional segment analysis with market size data - historical 2019-2023 and forecasts 2025-2029 - in the full report.
The market is experiencing significant innovation, with a focus on enhancing API Return on Investment (ROI) through multi-cloud API adoption and API-driven development. API maturity is on the rise, driving the need for advanced API logging, performance benchmarking, and usage analytics. API interoperability and standardization are crucial to addressing integration challenges in complex API ecosystems. API observability and developer experience are becoming key differentiators, with the emergence of API documentation generators and debugging tools. API adoption rates continue to grow, fueled by the increasing use of composite and hybrid cloud APIs, serverless functions, and microservices orchestration.
API platform comparisons and compliance are essential for businesses navigating the diverse landscape of API offerings. API monetization strategies, such as API-led connectivity and edge computing APIs, are gaining traction. API evolution is ongoing, with a shift towards API-first design and headless CMS integration. API usage patterns are evolving, requiring new testing frameworks and security measures to address API performance optimization and vulnerabilities. Ultimately, API governance policies and discovery tools are essential for managing the complexities of API consumption and ensuring compliance in the dynamic API market.
How is this API Management Industry segmented?
The API management industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023, for the following segments.
Deployment
  Cloud
  On-premises
Solution
  API gateways
  API lifecycle management
  API security
  API analytics and monitoring
  API developer portals
End-user
  Large enterprises
  SMEs
Geography
  North America: US, Canada
  Europe: France, Germany, UK
  Middle East and Africa: UAE
  APAC: China, India, Japan
  South America: Brazil
  Rest of World (ROW)
By Deployment Insights
The cloud segment is estimated to witness significant growth during the forecast period. The market is experiencing significant growth, driven by the digital transformation sweeping across industries. Cloud-based API solutions dominate the market, enabling seamless communication and data transfer between applications and the cloud. This segment's dominance is attributed to the proliferation of IoT and Big Data, which enhance application interfaces for superior customer experiences. Additionally, the increasing awareness of security vulnerabilities and the demand for automation have fueled the market's expansion in sectors like BFSI, e-commerce, healthcare and life sciences, education, and retail. Cloud APIs facilitate the integration of various cloud and on-premises applications, simplifying API provisioning, activation, setup, monitoring, and troubleshooting for developers and administrators.
Agile development methodologies, such as DevOps and CI/CD, have further accelerated the adoption of cloud APIs. APIs have become essential components of modern application architectures, including microservices, event-driven, and real-time systems. GraphQL APIs and service meshes have emerged as popular building blocks in these architectures.
DataForSEO will land you accurate data for a SERP monitoring solution. In particular, our SERP API provides data from:
For each of the search engines, we support all possible locations. You can set any keyword, location, and language, as well as define additional parameters, e.g. time frame, category, number of results.
You can set the device and the OS that you want to obtain SERP results for. We support Android/iOS for mobile and Windows/macOS for desktop.
We can supply you with all organic, paid, and extra Google SERP elements, including featured snippet, answer box, knowledge graph, local pack, map, people also ask, people also search, and more.
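For illustration, a Live request for Google organic results might look like the following Python sketch; the endpoint path and field names are assumptions based on DataForSEO's v3 conventions and should be verified in the docs.

```python
# Sketch of a Live SERP request. Endpoint path and payload fields are
# assumptions following DataForSEO's v3 conventions; verify in the docs.
import requests

API_URL = "https://api.dataforseo.com/v3/serp/google/organic/live/advanced"

payload = [{
    "keyword": "api documentation tools",
    "location_name": "United Kingdom",
    "language_name": "English",
    "device": "mobile",   # "desktop" or "mobile"
    "os": "android",      # android/ios for mobile, windows/macos for desktop
}]

response = requests.post(API_URL, auth=("login", "password"), json=payload)
response.raise_for_status()

# Each result item carries a "type" such as organic, featured_snippet,
# knowledge_graph, local_pack, or people_also_ask.
for task in response.json().get("tasks", []):
    for result in task.get("result") or []:
        for item in result.get("items") or []:
            print(item.get("type"), item.get("url"))
```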
We offer well-rounded API documentation, GUI for API usage control, comprehensive client libraries for different programming languages, free sandbox API testing, ad hoc integration, and deployment support.
We have a pay-as-you-go pricing model. You simply add funds to your account and use them to get data. The account balance doesn't expire.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This list contains the standards and technical specifications analysed in the APIs4DGov study "Web API landscape: relevant general purpose ICT standards, technical specifications and terms". Note that the list does not include documents that are considered (i) general-purpose for the Web and (ii) consolidated background knowledge for the reader (e.g. HTTP, JSON, XML, URI, SOA, ROA, RDF, etc.).
Each document is classified according to the following rationale:
Name: extended name (with acronym, if available).
TS/S: we distinguish the documents into two main categories: (1) “Technical Specification” and (2) “Standard”. The definitions of the two terms are in use in official and technical documents, including the ones of CEN, IEC, ISO and Open Geospatial Consortium (OGC). For the purposes of this dataset, we choose the definitions proposed by the OGC: (1) "Specification" or "Technical Specification" (TS): "a document written by a consortium, vendor, or user that specifies a technological area with a well-defined scope, primarily for use by developers as a guide to implementation. A specification is not necessarily a formal standard"; (2) "Standard" (S): "a document that specifies a technological area with a well-defined scope, usually by a formal standardisation body and process".
Category: Functional specification (Resource Representation, Protocol), Security (Authentication, Authorisation), Usability (Documentation, Design), Test, Performance, or Licence. See section 2.2 for a description of each category.
Short Description: a short description of the TS/S.
Link: URL of online document describing the TS/S.
API Type: RPC or REST, both if not specified.
Initial Release: the year when the TS/S was first proposed (where not available, the most probable year, determined by additional desk research, is given).
By: the organisation (i.e. standard body, consortium, vendor) or individual that proposes the standard.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the context of the EU-funded project PILOTING (No. 871542), a versatile Data Management System (DMS) was deployed, facilitating the straightforward integration of nine different robotic systems and various payloads, and the storage of all data observations produced during the inspections. The DMS is designed under the DMS-Data Model (DMS-DM) and operates on top of a representational state transfer (REST) application programming interface (API) that provides a simplified way to exchange data through HTTP(S) requests from a client to the server. One of its key advantages is flexibility: the model can accommodate extensions if needed. Data is not tied to resources or methods, so REST can handle multiple types of calls, return different data formats, and even change structurally with the correct implementation of hypermedia.
It supports CRUD operations via GET, POST, PUT, PATCH, and DELETE HTTP methods and stores the data on a PostgreSQL object-relational database system. The REST API has been created with Python's Django (web framework) and Django REST Framework, a powerful and flexible toolkit for building Web APIs.
The constructed document presents the communication endpoints (DMS API’s Uniform Resource Identifiers (URIs)) under which an authorized user can have access to the DMS API and subsequently to the collected data.
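As a rough illustration of the CRUD pattern described above, here is a minimal Django REST Framework resource; the Observation model and its fields are hypothetical placeholders, not taken from the PILOTING DMS-DM, and the classes would live inside a configured Django app.

```python
# Hypothetical DRF resource illustrating the CRUD pattern described
# above; the model and field names are placeholders, not the DMS-DM.
from django.db import models
from rest_framework import serializers, viewsets

class Observation(models.Model):
    robot_id = models.CharField(max_length=64)
    payload = models.CharField(max_length=64)
    recorded_at = models.DateTimeField()
    value = models.JSONField()

class ObservationSerializer(serializers.ModelSerializer):
    class Meta:
        model = Observation
        fields = "__all__"

# A ModelViewSet maps GET/POST/PUT/PATCH/DELETE requests onto
# list/create/retrieve/update/partial_update/destroy actions.
class ObservationViewSet(viewsets.ModelViewSet):
    queryset = Observation.objects.all()
    serializer_class = ObservationSerializer
```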
https://creativecommons.org/publicdomain/zero/1.0/
Data relating to songs that feature on the studio and live albums of the Canadian progressive rock band Rush.
Data was collected on 14/12/2022 and includes all Rush songs that feature on albums starting from the self-titled Rush (1974) to Moving Pictures 40th Anniversary Super Deluxe (2022).
Ideas for data analysis:
- Which album has the longest songs?
- Has the mood of songs gotten happier, sadder, or remained the same throughout Rush's career?
- Is there any correlation between popularity and key, energy or danceability?
- Does the mode of a song (major or minor) have a significant impact on popularity?
Data was scraped from Spotify using the Spotify Web API: https://developer.spotify.com/documentation/web-api/quick-start/
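For anyone reproducing the collection, a Spotipy sketch along these lines could pull album tracks and their audio features; the artist ID is a placeholder to be looked up via search, and API credentials are assumed.

```python
# Sketch of collecting Rush tracks and audio features via Spotipy.
# The artist ID below is a placeholder; look it up with sp.search().
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID", client_secret="YOUR_CLIENT_SECRET"))

artist_id = "ARTIST_ID_FOR_RUSH"  # placeholder
albums = sp.artist_albums(artist_id, album_type="album")["items"]

for album in albums:
    tracks = sp.album_tracks(album["id"])["items"]
    track_ids = [t["id"] for t in tracks]
    # audio_features accepts up to 100 track IDs per call.
    for feats in sp.audio_features(track_ids):
        if feats:
            print(album["name"], feats["valence"], feats["energy"], feats["key"])
```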
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘K-Pop Hits Through The Years’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/sberj127/kpop-hits-through-the-years on 12 November 2021.
--- Dataset description provided by original source is as follows ---
The datasets contain the top songs from the era or year indicated in each dataset's name. Note that only the KPopHits90s dataset represents an era (1989-2001). Although there is a lack of easily available and reliable sources showing the actual K-Pop hits per year during the 90s, this era was still included, as this period was when the first generation of K-Pop stars appeared. Each of the other datasets represents a specific year after the 90s.
A song is considered to be a K-Pop hit during that era or year if it is included in the annual series of K-Pop Hits playlists, which is created officially by Apple Music. Note that for the dataset that represents the 90s, the playlist 90s K-Pop Essentials was used as the reference.
As someone who has a particular curiosity about the field of data science and a genuine love for the musicality of the K-Pop scene, I created this dataset to make something out of my strong interest in these separate subjects.
I would like to express my sincere gratitude to Apple Music for creating the annual K-Pop playlists, Spotify for making their API very accessible, Spotipy for making it easier to get the desired data from the Spotify Web API, Tune My Music for automating the process of transferring one's library into another service's library and, of course, all those involved in the making of these songs and artists included in these datasets for creating such high quality music and concepts digestible even for the general public.
--- Original source retains full ownership of the source dataset ---
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data from European cities with the results of tests for an address data quality algorithm (ISPRS IJGI paper). Address data used in this paper are available from these websites:
A) OpenAddresses: https://batch.openaddresses.io/data
B) OpenStreetMap: https://wiki.openstreetmap.org/wiki/Downloading_data
C) Google Places: https://developers.google.com/maps/documentation/places/web-service/overview
D) Bing: https://learn.microsoft.com/en-us/bingmaps/rest-services/locations/
E) Here: https://developer.here.com/documentation/geocoding-search-api/
NOTE: Due to rights and property reasons, we cannot distribute the commercial and authoritative address data used in this study.
https://spdx.org/licenses/CC0-1.0.html
The Repository Analytics and Metrics Portal (RAMP) is a web service that aggregates use and performance data of institutional repositories. The data are a subset of data from RAMP, the Repository Analytics and Metrics Portal (http://rampanalytics.org), consisting of data from all participating repositories for the calendar year 2018. For a description of the data collection, processing, and output methods, please see the "methods" section below. Note that the RAMP data model changed in August 2018, and two sets of documentation are provided to describe data collection and processing before and after the change.
Methods
RAMP Data Documentation – January 1, 2017 through August 18, 2018
Data Collection
RAMP data were downloaded for participating IR from Google Search Console (GSC) via the Search Console API. The data consist of aggregated information about IR pages which appeared in search result pages (SERP) within Google properties (including web search and Google Scholar).
Data from January 1, 2017 through August 18, 2018 were downloaded in one dataset per participating IR. The following fields were downloaded for each URL, with one row per URL:
url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
impressions: The number of times the URL appears within the SERP.
clicks: The number of clicks on a URL which took users to a page outside of the SERP.
clickThrough: Calculated as the number of clicks divided by the number of impressions.
position: The position of the URL within the SERP.
country: The country from which the corresponding search originated.
device: The device used for the search.
date: The date of the search.
Following the data processing described below, on ingest into RAMP an additional field, citableContent, is added to the page level data.
Note that no personally identifiable information is downloaded by RAMP. Google does not make such information available.
More information about click-through rates, impressions, and position is available from Google's Search Console API documentation: https://developers.google.com/webmaster-tools/search-console-api-original/v3/searchanalytics/query and https://support.google.com/webmasters/answer/7042828?hl=en
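As a hedged sketch of the kind of query involved, the Search Console API can be called from Python roughly as follows, assuming service-account or OAuth credentials authorized for the repository's verified property; the site URL is a placeholder.

```python
# Sketch of a Search Console query similar to RAMP's harvest; the
# site URL is a placeholder and credentials are assumed to exist.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"])
service = build("searchconsole", "v1", credentials=creds)

request = {
    "startDate": "2018-01-01",
    "endDate": "2018-01-31",
    "dimensions": ["page", "country", "device", "date"],
    "rowLimit": 25000,
}
response = service.searchanalytics().query(
    siteUrl="https://repository.example.edu/", body=request).execute()

for row in response.get("rows", []):
    page, country, device, date = row["keys"]
    print(page, row["clicks"], row["impressions"], row["ctr"], row["position"])
```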
Data Processing
Upon download from GSC, data are processed to identify URLs that point to citable content. Citable content is defined within RAMP as any URL which points to any type of non-HTML content file (PDF, CSV, etc.). As part of the daily download of statistics from Google Search Console (GSC), URLs are analyzed to determine whether they point to HTML pages or actual content files. URLs that point to content files are flagged as "citable content." In addition to the fields downloaded from GSC described above, following this brief analysis one more field, citableContent, is added to the data which records whether each URL in the GSC data points to citable content. Possible values for the citableContent field are "Yes" and "No."
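An illustrative version of that extension check is sketched below; the extension list is an assumption for demonstration, not RAMP's actual implementation.

```python
# Flag URLs that point to non-HTML content files ("citable content").
# The extension list is illustrative, not RAMP's production logic.
from urllib.parse import urlparse

NON_HTML_EXTENSIONS = {".pdf", ".csv", ".doc", ".docx", ".xls", ".xlsx", ".zip"}

def citable_content(url: str) -> str:
    path = urlparse(url).path.lower()
    return "Yes" if any(path.endswith(ext) for ext in NON_HTML_EXTENSIONS) else "No"

print(citable_content("https://ir.example.edu/bitstream/123/thesis.pdf"))  # Yes
print(citable_content("https://ir.example.edu/handle/123"))                # No
```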
Processed data are then saved in a series of Elasticsearch indices. From January 1, 2017, through August 18, 2018, RAMP stored data in one index per participating IR.
About Citable Content Downloads
Data visualizations and aggregations in RAMP dashboards present information about citable content downloads, or CCD. As a measure of use of institutional repository content, CCD represent click activity on IR content that may correspond to research use.
CCD information is summary data calculated on the fly within the RAMP web application. As noted above, data provided by GSC include whether and how many times a URL was clicked by users. Within RAMP, a "click" is counted as a potential download, so a CCD is calculated as the sum of clicks on pages/URLs that are determined to point to citable content (as defined above).
For any specified date range, the steps to calculate CCD are:
Filter data to only include rows where "citableContent" is set to "Yes."
Sum the value of the "clicks" field on these rows.
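Expressed over the exported CSV described in the next section, the two steps are a short pandas computation; the filename follows the published format.

```python
# CCD for one month of RAMP data: filter citable content, sum clicks.
import pandas as pd

df = pd.read_csv("2018-01_RAMP_all.csv")
ccd = df.loc[df["citableContent"] == "Yes", "clicks"].sum()
print(f"Citable content downloads: {ccd}")
```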
Output to CSV
Published RAMP data are exported from the production Elasticsearch instance and converted to CSV format. The CSV data consist of one "row" for each page or URL from a specific IR which appeared in search result pages (SERP) within Google properties as described above.
The data in these CSV files include the following fields:
url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
impressions: The number of times the URL appears within the SERP.
clicks: The number of clicks on a URL which took users to a page outside of the SERP.
clickThrough: Calculated as the number of clicks divided by the number of impressions.
position: The position of the URL within the SERP.
country: The country from which the corresponding search originated.
device: The device used for the search.
date: The date of the search.
citableContent: Whether or not the URL points to a content file (ending with pdf, csv, etc.) rather than HTML wrapper pages. Possible values are Yes or No.
index: The Elasticsearch index corresponding to page click data for a single IR.
repository_id: This is a human readable alias for the index and identifies the participating repository corresponding to each row. As RAMP has undergone platform and version migrations over time, index names as defined for the index field have not remained consistent. That is, a single participating repository may have multiple corresponding Elasticsearch index names over time. The repository_id is a canonical identifier that has been added to the data to provide an identifier that can be used to reference a single participating repository across all datasets. Filtering and aggregation for individual repositories or groups of repositories should be done using this field.
Filenames for files containing these data follow the format 2018-01_RAMP_all.csv. Using this example, the file 2018-01_RAMP_all.csv contains all data for all RAMP participating IR for the month of January, 2018.
Data Collection from August 19, 2018 Onward
RAMP data are downloaded for participating IR from Google Search Console (GSC) via the Search Console API. The data consist of aggregated information about IR pages which appeared in search result pages (SERP) within Google properties (including web search and Google Scholar).
Data are downloaded in two sets per participating IR. The first set includes page level statistics about URLs pointing to IR pages and content files. The following fields are downloaded for each URL, with one row per URL:
url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
impressions: The number of times the URL appears within the SERP.
clicks: The number of clicks on a URL which took users to a page outside of the SERP.
clickThrough: Calculated as the number of clicks divided by the number of impressions.
position: The position of the URL within the SERP.
date: The date of the search.
Following the data processing described below, on ingest into RAMP an additional field, citableContent, is added to the page level data.
The second set includes similar information, but instead of being aggregated at the page level, the data are grouped by the country from which the user submitted the corresponding search and the type of device used. The following fields are downloaded for each combination of country and device, with one row per country/device combination:
country: The country from which the corresponding search originated.
device: The device used for the search.
impressions: The number of times the URL appears within the SERP.
clicks: The number of clicks on a URL which took users to a page outside of the SERP.
clickThrough: Calculated as the number of clicks divided by the number of impressions.
position: The position of the URL within the SERP.
date: The date of the search.
Note that no personally identifiable information is downloaded by RAMP. Google does not make such information available.
More information about click-through rates, impressions, and position is available from Google's Search Console API documentation: https://developers.google.com/webmaster-tools/search-console-api-original/v3/searchanalytics/query and https://support.google.com/webmasters/answer/7042828?hl=en
Data Processing
Upon download from GSC, the page level data described above are processed to identify URLs that point to citable content. Citable content is defined within RAMP as any URL which points to any type of non-HTML content file (PDF, CSV, etc.). As part of the daily download of page level statistics from Google Search Console (GSC), URLs are analyzed to determine whether they point to HTML pages or actual content files. URLs that point to content files are flagged as "citable content." In addition to the fields downloaded from GSC described above, following this brief analysis one more field, citableContent, is added to the page level data which records whether each page/URL in the GSC data points to citable content. Possible values for the citableContent field are "Yes" and "No."
The data aggregated by the search country of origin and device type do not include URLs. No additional processing is done on these data. Harvested data are passed directly into Elasticsearch.
Processed data are then saved in a series of Elasticsearch indices. Currently, RAMP stores data in two indices per participating IR. One index includes the page level data, the second index includes the country of origin and device type data.
About Citable Content Downloads
Data visualizations and aggregations in RAMP dashboards present information about citable content downloads, or CCD. As a measure of use of institutional repository content, CCD represent click activity on IR content that may correspond to research use.
The Reference Data as a Service (RDaaS) API provides a list of codesets, classifications, and concordances that are used within Statistics Canada. These resources are shared to help harmonize data, enabling better interdepartmental data integration and analysis. This dataset provides an updated version of the StatCan RDaaS API specification, originally part of the Government of Canada’s GC API Store, which permanently closed on September 29th, 2023. The archived version of the original API specification can be accessed via the Wayback Machine. The specification has been updated to the OpenAPI 3.0 (Swagger 3) standard, enabling use of current tools and features for API exploration and integration. Key interactive features of the updated specification include:
* Try-It-Out Functionality: allows a user to interact with API endpoints directly from the documentation in their browser, submitting test requests and viewing live responses.
* Interactive Parameter Input: simplifies experimentation with filters and parameters to explore API behavior.
* Schema Visualization: provides clear representations of request and response structures.
Currently, users can view this data directly in a web browser by accessing the OGC SensorThings API endpoints, though this can be confusing for users who do not understand the SensorThings API structure (https://newmexicowaterdata.org/faq/#sensorthingsapi), which organizes sensor data through interconnected entities like Things, Locations, Datastreams, and Observations. Users who have some programming knowledge can also query this standardized sensor data with the Python programming language following this tutorial (https://developer.newmexicowaterdata.org/help), or perform CRUD operations using the comprehensive API documentation (https://developers.sensorup.com/docs/). Development is currently underway for applications that more easily allow general users to query and visualize this environmental monitoring data from the New Mexico Bureau of Geology and Mineral Resources without requiring technical knowledge of the underlying API structure.
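For a flavor of the entity model, a SensorThings service can be queried with plain HTTP as sketched below; the base URL is a placeholder (use the endpoint listed in the FAQ above), while the entity paths and OData-style query options are part of the OGC standard.

```python
# Sketch of browsing a SensorThings service: Things -> Datastreams ->
# Observations. BASE is a placeholder; substitute the real endpoint.
import requests

BASE = "https://example.newmexicowaterdata.org/v1.1"  # placeholder

# List a few monitored Things with their Locations ($top/$expand are
# standard SensorThings query options).
things = requests.get(f"{BASE}/Things",
                      params={"$top": 5, "$expand": "Locations"}).json()
for thing in things.get("value", []):
    print(thing["name"], [loc["name"] for loc in thing.get("Locations", [])])

# Fetch the latest observations of one datastream.
obs = requests.get(f"{BASE}/Datastreams(1)/Observations",
                   params={"$top": 3, "$orderby": "phenomenonTime desc"}).json()
for o in obs.get("value", []):
    print(o["phenomenonTime"], o["result"])
```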
https://spdx.org/licenses/CC0-1.0.html
The Repository Analytics and Metrics Portal (RAMP) is a web service that aggregates use and performance data of institutional repositories. The data are a subset of data from RAMP, the Repository Analytics and Metrics Portal (http://rampanalytics.org), consisting of data from all participating repositories for the calendar year 2017. For a description of the data collection, processing, and output methods, please see the "methods" section below.
Methods
RAMP Data Documentation – January 1, 2017 through August 18, 2018
Data Collection
RAMP data are downloaded for participating IR from Google Search Console (GSC) via the Search Console API. The data consist of aggregated information about IR pages which appeared in search result pages (SERP) within Google properties (including web search and Google Scholar).
Data from January 1, 2017 through August 18, 2018 were downloaded in one dataset per participating IR. The following fields were downloaded for each URL, with one row per URL:
url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
impressions: The number of times the URL appears within the SERP.
clicks: The number of clicks on a URL which took users to a page outside of the SERP.
clickThrough: Calculated as the number of clicks divided by the number of impressions.
position: The position of the URL within the SERP.
country: The country from which the corresponding search originated.
device: The device used for the search.
date: The date of the search.
Following the data processing described below, on ingest into RAMP an additional field, citableContent, is added to the page level data.
Note that no personally identifiable information is downloaded by RAMP. Google does not make such information available.
More information about click-through rates, impressions, and position is available from Google's Search Console API documentation: https://developers.google.com/webmaster-tools/search-console-api-original/v3/searchanalytics/query and https://support.google.com/webmasters/answer/7042828?hl=en
Data Processing
Upon download from GSC, data are processed to identify URLs that point to citable content. Citable content is defined within RAMP as any URL which points to any type of non-HTML content file (PDF, CSV, etc.). As part of the daily download of statistics from Google Search Console (GSC), URLs are analyzed to determine whether they point to HTML pages or actual content files. URLs that point to content files are flagged as "citable content." In addition to the fields downloaded from GSC described above, following this brief analysis one more field, citableContent, is added to the data which records whether each URL in the GSC data points to citable content. Possible values for the citableContent field are "Yes" and "No."
Processed data are then saved in a series of Elasticsearch indices. From January 1, 2017, through August 18, 2018, RAMP stored data in one index per participating IR.
About Citable Content Downloads
Data visualizations and aggregations in RAMP dashboards present information about citable content downloads, or CCD. As a measure of use of institutional repository content, CCD represent click activity on IR content that may correspond to research use.
CCD information is summary data calculated on the fly within the RAMP web application. As noted above, data provided by GSC include whether and how many times a URL was clicked by users. Within RAMP, a "click" is counted as a potential download, so a CCD is calculated as the sum of clicks on pages/URLs that are determined to point to citable content (as defined above).
For any specified date range, the steps to calculate CCD are:
Filter data to only include rows where "citableContent" is set to "Yes."
Sum the value of the "clicks" field on these rows.
Output to CSV
Published RAMP data are exported from the production Elasticsearch instance and converted to CSV format. The CSV data consist of one "row" for each page or URL from a specific IR which appeared in search result pages (SERP) within Google properties as described above.
The data in these CSV files include the following fields:
url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
impressions: The number of times the URL appears within the SERP.
clicks: The number of clicks on a URL which took users to a page outside of the SERP.
clickThrough: Calculated as the number of clicks divided by the number of impressions.
position: The position of the URL within the SERP.
country: The country from which the corresponding search originated.
device: The device used for the search.
date: The date of the search.
citableContent: Whether or not the URL points to a content file (ending with pdf, csv, etc.) rather than HTML wrapper pages. Possible values are Yes or No.
index: The Elasticsearch index corresponding to page click data for a single IR.
repository_id: This is a human readable alias for the index and identifies the participating repository corresponding to each row. As RAMP has undergone platform and version migrations over time, index names as defined for the index field have not remained consistent. That is, a single participating repository may have multiple corresponding Elasticsearch index names over time. The repository_id is a canonical identifier that has been added to the data to provide an identifier that can be used to reference a single participating repository across all datasets. Filtering and aggregation for individual repositories or groups of repositories should be done using this field.
Filenames for files containing these data follow the format 2017-01_RAMP_all.csv. Using this example, the file 2017-01_RAMP_all.csv contains all data for all RAMP participating IR for the month of January, 2017.
References
Google, Inc. (2021). Search Console APIs. Retrieved from https://developers.google.com/webmaster-tools/search-console-api-original.
WONDER online databases include county-level Compressed Mortality (death certificates) since 1979; county-level Multiple Cause of Death (death certificates) since 1999; county-level Natality (birth certificates) since 1995; county-level Linked Birth / Death records (linked birth-death certificates) since 1995; state & large metro-level United States Cancer Statistics mortality (death certificates) since 1999; state & large metro-level United States Cancer Statistics incidence (cancer registry cases) since 1999; state and metro-level Online Tuberculosis Information System (TB case reports) since 1993; state-level Sexually Transmitted Disease Morbidity (case reports) since 1984; state-level Vaccine Adverse Event Reporting system (adverse reaction case reports) since 1990; county-level population estimates since 1970. The WONDER web server also hosts the Data2010 system with state-level data for compliance with Healthy People 2010 goals since 1998; the National Notifiable Disease Surveillance System weekly provisional case reports since 1996; the 122 Cities Mortality Reporting System weekly death reports since 1996; the Prevention Guidelines database (book in electronic format) published 1998; the Scientific Data Archives (public use data sets and documentation); and links to other online data sources on the "Topics" page.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This is a crawl of the Web API provided by the Brazilian Federal Government for open budget data under the transparency portal (http://www.portaltransparencia.gov.br). The datasets included in this crawl are the procurement (licitações), contracts (contratos) and government organizations (Organizações SIAFI) datasets. An introduction to the APIs is provided (http://www.portaltransparencia.gov.br/api-de-dados) and there is a Swagger documentation available at http://www.transparencia.gov.br/swagger-ui.html. Two additional undocumented APIs were also crawled:
- (/criterios/contratos/fornecedor/autocomplete) Surrogate IDs from CNPJ (a fiscal organization identifier);
- (/pessoa-juridica/{id}/participante-licitacao/resultado) Participation of contractors in procurements (contractors are identified by their surrogate ID, not their CNPJ).
These undocumented APIs were only crawled for contractors that had contracts with organization number 26246 (Federal University of Santa Catarina). The crawl includes data up to January 31st, 2020. The aforementioned datasets are updated monthly. Software used to perform this crawl can be found at https://bitbucket.org/alexishuf/compsac-2020-experiments. The crawl-all.sh script does the full crawl (this requires 4 hours or more). More details of the crawling procedures can be found in the EXPERIMENTS.md file.
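A hedged sketch of a call to the documented API follows; the query parameter names and the API-key header shown are assumptions to be checked against the Swagger documentation linked above.

```python
# Sketch of querying the transparency portal's documented API for
# contracts of organization 26246 (UFSC). Parameter names and the
# API-key header are assumptions; check the Swagger docs.
import requests

resp = requests.get(
    "http://www.portaltransparencia.gov.br/api-de-dados/contratos",
    params={"codigoOrgao": "26246", "pagina": 1},
    headers={"chave-api-dados": "YOUR_API_KEY"},
)
resp.raise_for_status()
for contract in resp.json():
    print(contract)
```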