This dataset lists out all software in use by NASA.
The data.gov catalog is powered by CKAN, a powerful open source data platform that includes a robust API. Please be aware that data.gov and the data.gov CKAN API only contain metadata about datasets. This metadata includes URLs and descriptions of datasets, but it does not include the actual data within each dataset.
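Because the catalog is a standard CKAN instance, its metadata can be searched through the CKAN Action API. A minimal sketch (Python standard library only; the query string is just an example):

```python
import json
import urllib.request

# Search data.gov's CKAN catalog for dataset *metadata* (not the data itself).
# package_search is part of the standard CKAN Action API.
url = "https://catalog.data.gov/api/3/action/package_search?q=NASA+software&rows=5"

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)

# Each hit is a metadata record: title, description and resource URLs.
for pkg in result["result"]["results"]:
    print(pkg["title"])
    for res in pkg.get("resources", []):
        print("   ", res.get("format"), res.get("url"))
```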
https://www.verifiedmarketresearch.com/privacy-policy/
Open API Market size was valued at USD 3 Billion in 2024 and is projected to reach USD 15.53 Billion by 2031, growing at a CAGR of 25.15% during the forecast period 2024-2031.

Global Open API Market Definition

An open Application Programming Interface (API), also called an external API, is an API that is freely available to third-party developers, or available with limited restrictions. Open APIs let developers integrate open data and related services into their own applications. Prominent end users of open APIs include the IT and telecommunications industry, the banking, financial services and insurance (BFSI) industry, and healthcare; open APIs are also gaining popularity in travel and tourism, government and education, media and entertainment, and energy and utilities. Open APIs are designed to be easily accessible to third-party developers.

Open APIs are attracting strong developer interest because they provide a deeper understanding of how different software programs communicate. They reduce the effort developers spend writing new code, leaving more time for building unique and useful software. Because an open API is freely available to the public, it enables better compatibility between applications along with timely updates. The main advantage of an open API is that it allows third-party developers to build complementary services on top of the primary application. This gives companies an opportunity to place their products and services inside external applications, extending their reach and promoting purchases.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository holds the testing and demonstration data for DataRig, an open-source software program for downloading datasets from data repositories using RESTful APIs. This repository contains 5 sample datasets.
annotations_001.txt
This dataset is a tab-separated text file containing 6 columns that start on line number 7. The column headers are:
'Number' 'Start Time' 'End Time' 'Time From Start' 'Channel' 'Annotation'
There are 13 rows of data under these column headers representing the start and end times of annotated events from an EEG recording file in this repository called recording_001.edf. The events describe the behavior of a mouse in 5-second increments, with each behavior being one of 'exploring', 'grooming' or 'rest'.
recording_001.edf
A European Data Format file consisting of 4 channels of EEG data lasting approximately 1 hour. The times in the annotations_001.txt file are referenced against this file.
sample_arr.npy
A numpy array of shape (4, 250) with values sequentially running from 0 to 1000.
sample_excel.xls
An Excel file with a single column of 10 numbers, from 0 to 9 sequentially.
sample_text.txt
A text file with 4 rows containing 250 values per row. The values in the file run from 0 to 1000 sequentially.
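The sample files can be inspected with standard Python tooling. A minimal sketch (file names from the list above; numpy and pandas assumed installed, and the exact delimiters of the text samples are assumptions):

```python
import numpy as np
import pandas as pd

# Load the numpy sample array; expected shape is (4, 250).
arr = np.load("sample_arr.npy")
print(arr.shape)

# The annotations file is tab-separated with its header on line 7,
# so skip the first 6 lines before parsing.
annotations = pd.read_csv("annotations_001.txt", sep="\t", skiprows=6)
print(annotations.columns.tolist())

# The plain-text sample: 4 rows of 250 values per row
# (whitespace-separated values assumed here).
txt = np.loadtxt("sample_text.txt")
print(txt.shape)
```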
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains 70,427 cross-linked Twitter-[GHTorrent](http://ghtorrent.org) user pairs identified as likely belonging to the same users. The dataset accompanies our research paper:
@inproceedings{fang2020tweet,
author = {Fang, Hongbo and Klug, Daniel and Lamba, Hemank and Herbsleb, James and Vasilescu, Bogdan},
title = {Need for Tweet: How Open Source Developers Talk About Their GitHub Work on Twitter},
booktitle = {International Conference on Mining Software Repositories (MSR)},
year = {2020},
pages = {to appear},
publisher = {ACM},
}
The data cannot be used for any purpose other than conducting research.
Due to privacy concerns, we only release the user IDs in Twitter and GHTorrent, respectively. We expect that users of this dataset will be able to collect other data using the Twitter API and GHTorrent, as needed. Please see below for an example.
To query the Twitter API for a given user_id, you can:
Apply for a Twitter developer account.
Create an app with your Twitter developer account, and generate an "API key" and "API secret key".
Obtain an access token. Given the API key and API secret key from the previous step, run the standard OAuth2 app-only token request (substitute your own credentials for the placeholders):
curl -u "<API key>:<API secret key>" --data "grant_type=client_credentials" "https://api.twitter.com/oauth2/token"
The response looks like this: {"token_type":"bearer","access_token":"<...>"}
Copy the "access_token".
Given the previous access token, run:
curl --request GET --url "https://api.twitter.com/1.1/users/show.json?user_id=<user_id>" --header "Authorization: Bearer <access_token>"
The GHTorrent user ids map to the users table in the MySQL version of GHTorrent. To use GHTorrent, please follow instructions on the GHTorrent website.
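As an illustration, once you have a local restore of the GHTorrent MySQL database, the record for a given GHTorrent user id can be fetched like this (a sketch: it assumes the pymysql driver and the standard GHTorrent schema, where the users table carries id and login columns; credentials are placeholders):

```python
import pymysql

# Connect to a local GHTorrent MySQL restore (placeholder credentials).
conn = pymysql.connect(host="localhost", user="ghtorrent",
                       password="...", database="ghtorrent")

ghtorrent_user_id = 12345  # example id; take real ids from the dataset

with conn.cursor() as cur:
    # The users table in the GHTorrent schema maps ids to GitHub logins.
    cur.execute("SELECT id, login FROM users WHERE id = %s",
                (ghtorrent_user_id,))
    print(cur.fetchone())
```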
Open Source Outsourcing Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
Norvegiana is a tool for making cultural heritage information more accessible as open data. Norvegiana is an open pool of cultural data from archives, museums and other cultural institutions; in total, about 300 organisations. Norvegiana contains 7.4 million records in total (as of August 2016), of which 1.9 million are images, 16,000 audio clips and 1,400 videos. Norvegiana aggregates data from various databases and information sources and harmonises it in a common metadata model, ABM Semantic Elements. Data from Norvegiana is accessible via an open search API and a simple website at www.norvegiana.no; the API delivers data on individual objects in XML, JSON or KML format. About 3.1 million objects are placed with coordinates. The API is open and no key is required.
The following data sources are available in Norvegiana (overview as of August 2016):
- DigitaltMuseum: historical photographs, objects, art (1.8 million objects)
- Digitally told: digital stories (4,700 stories)
- Cultural history encyclopedia Sogn og Fjordane (1,900 articles)
- MUSIT archaeology: university museums' archaeology data (900,000 objects)
- Coastal journey: coastal maritime cultural history (2,045 objects)
- Photo SF: county photo archives Sogn og Fjordane (66,000 photos)
- Photo MR: county photo archives Møre og Romsdal (158,000 photos)
- Stadnamn SF: place names Sogn og Fjordane (185,000 place names)
- Stadnamn MR: place names Møre og Romsdal (same database as Stadnamn Sogn og Fjordane; 130,000 place names)
- Kildenett: historical source and knowledge base for Trøndelag (1,500 articles)
- Music archives: traditional music Sogn og Fjordane (14,000 entries)
- Archive portal: archive catalogues from state, municipal and private archives (3.4 million documents)
Norvegiana and the individual datasets are described in the note "Norvegiana: data model, mapping, content and databases". The license (i.e. rights to reuse) is set at the record level and will thus vary within the individual dataset.
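Since the search API is open and requires no key, querying it amounts to a plain HTTP GET. The sketch below is illustrative only: the endpoint path, parameter names and host are assumptions, not taken from the API documentation.

```python
import urllib.request

# Hypothetical endpoint and parameters: check the API documentation via
# www.norvegiana.no for the actual search URL, query syntax and formats.
url = "https://www.norvegiana.no/api/search?query=stavkirke&format=json"

with urllib.request.urlopen(url) as resp:
    print(resp.read()[:500])  # first bytes of the JSON response
```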
https://dataintelo.com/privacy-and-policy
According to our latest research, the Financial Data Exchange API Integration market size reached USD 3.42 billion globally in 2024. The market is experiencing a robust expansion, registering a CAGR of 23.1% from 2025 to 2033. By the end of 2033, the market is forecasted to attain a value of USD 25.09 billion. This remarkable growth trajectory is propelled by the increasing adoption of open banking, regulatory mandates for data transparency, and the growing demand for seamless connectivity between financial institutions, fintech firms, and third-party service providers.
One of the most significant growth factors driving the Financial Data Exchange API Integration market is the widespread adoption of open banking initiatives across the globe. Regulatory frameworks such as PSD2 in Europe, the Consumer Data Right in Australia, and similar policies in North America are compelling banks and financial institutions to provide secure, standardized API access to customer data. This not only enhances customer experience by enabling personalized financial services but also fosters innovation by allowing third-party developers to build novel financial products. As a result, the market is witnessing a surge in demand for robust, scalable, and secure API integration solutions that can handle complex data exchange requirements while ensuring compliance with evolving regulatory standards.
Another pivotal driver fueling the market’s expansion is the rapid digital transformation within the financial services sector. Financial institutions are increasingly leveraging APIs to enhance operational efficiency, streamline workflows, and deliver real-time services such as instant payments, automated wealth management, and digital lending. The proliferation of fintech startups and the entry of technology giants into the financial domain have further intensified the need for seamless data connectivity and interoperability. This has led to a significant uptick in investments in API integration platforms and services, as organizations seek to modernize legacy systems, reduce integration complexities, and accelerate time-to-market for new digital offerings.
The growing emphasis on customer-centricity and data-driven decision-making is also contributing to the robust growth of the Financial Data Exchange API Integration market. Financial institutions are increasingly harnessing APIs to aggregate and analyze vast volumes of customer data from multiple sources, enabling them to deliver hyper-personalized products, improve risk assessment, and enhance fraud detection capabilities. The integration of advanced technologies such as artificial intelligence, machine learning, and blockchain with financial data exchange APIs is opening up new avenues for innovation, further amplifying the market’s growth potential. Moreover, the shift towards cloud-based API integration solutions is enabling organizations to achieve greater scalability, flexibility, and cost-efficiency, thereby accelerating the adoption of API-driven architectures across the financial ecosystem.
From a regional perspective, North America currently dominates the Financial Data Exchange API Integration market, accounting for the largest share in 2024, followed closely by Europe and Asia Pacific. The presence of a highly developed financial services infrastructure, early adoption of open banking regulations, and a vibrant fintech ecosystem are key factors contributing to North America’s leadership. However, the Asia Pacific region is expected to exhibit the fastest growth during the forecast period, driven by rapid digitalization, increasing smartphone penetration, and supportive government policies promoting financial inclusion. Europe remains a significant market due to its stringent regulatory environment and proactive stance on data privacy and security. Meanwhile, Latin America and the Middle East & Africa are gradually emerging as promising markets, fueled by rising investments in fintech and digital banking initiatives.
The Component segment of the Financial Data Exchange API Integration market is categorized into Software, Services, and Platforms. Software solutions form the backbone of API integration, providing the essential tools and frameworks required to establish secure, scalable, and co
https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html
Replication pack, FSE2018 submission #164
------------------------------------------
**Working title:** Ecosystem-Level Factors Affecting the Survival of Open-Source Projects: A Case Study of the PyPI Ecosystem

**Note:** link to data artifacts is already included in the paper. Link to the code will be included in the Camera Ready version as well.

Content description
===================

- **ghd-0.1.0.zip** - the code archive. This code produces the dataset files described below
- **settings.py** - settings template for the code archive.
- **dataset_minimal_Jan_2018.zip** - the minimally sufficient version of the dataset. This dataset only includes stats aggregated by the ecosystem (PyPI)
- **dataset_full_Jan_2018.tgz** - full version of the dataset, including project-level statistics. It is ~34Gb unpacked. This dataset still doesn't include PyPI packages themselves, which take around 2TB.
- **build_model.r, helpers.r** - R files to process the survival data (`survival_data.csv` in **dataset_minimal_Jan_2018.zip**, `common.cache/survival_data.pypi_2008_2017-12_6.csv` in **dataset_full_Jan_2018.tgz**)
- **Interview protocol.pdf** - approximate protocol used for semistructured interviews.
- LICENSE - text of GPL v3, under which this dataset is published
- INSTALL.md - replication guide (~2 pages)
Replication guide
=================

Step 0 - prerequisites
----------------------

- Unix-compatible OS (Linux or OS X)
- Python interpreter (2.7 was used; Python 3 compatibility is highly likely)
- R 3.4 or higher (3.4.4 was used, 3.2 is known to be incompatible)

Depending on detalization level (see Step 2 for more details):

- up to 2Tb of disk space (see Step 2 detalization levels)
- at least 16Gb of RAM (64 preferable)
- a few hours to a few months of processing time

Step 1 - software
----------------

- unpack **ghd-0.1.0.zip**, or clone from gitlab:

      git clone https://gitlab.com/user2589/ghd.git
      git checkout 0.1.0

  `cd` into the extracted folder. All commands below assume it as a current directory.
- copy `settings.py` into the extracted folder. Edit the file:
    * set `DATASET_PATH` to some newly created folder path
    * add at least one GitHub API token to `SCRAPER_GITHUB_API_TOKENS`
- install docker. For Ubuntu Linux, the command is `sudo apt-get install docker-compose`
- install libarchive and headers: `sudo apt-get install libarchive-dev`
- (optional) to replicate on NPM, install yajl: `sudo apt-get install yajl-tools`. Without this dependency, you might get an error on the next step, but it's safe to ignore.
- install Python libraries: `pip install --user -r requirements.txt`
- disable all APIs except GitHub (Bitbucket and Gitlab support were not yet implemented when this study was in progress): edit `scraper/init.py`, comment out everything except GitHub support in `PROVIDERS`.

Step 2 - obtaining the dataset
-----------------------------

The ultimate goal of this step is to get the output of the Python function `common.utils.survival_data()` and save it into a CSV file:

    # copy and paste into a Python console
    from common import utils
    survival_data = utils.survival_data('pypi', '2008', smoothing=6)
    survival_data.to_csv('survival_data.csv')

Since full replication will take several months, here are some ways to speed up the process:

#### Option 2.a, difficulty level: easiest

Just use the precomputed data. Step 1 is not necessary under this scenario.

- extract **dataset_minimal_Jan_2018.zip**
- get `survival_data.csv`, go to the next step

#### Option 2.b, difficulty level: easy

Use precomputed longitudinal feature values to build the final table. The whole process will take 15..30 minutes.

- create a folder `
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Victorian Government Open Data Directory uses CKAN to surface thousands of datasets from across the Victorian Government. This API is based on version 2.7.3 of the CKAN API documentation and provides access to CKAN functionality for the purpose of integrating with other CKAN instances. It can also be used as a data source for other applications to search and download the datasets provided by Data.Vic. This API is the updated version that implements the Whole of Victorian Government API Design Standards for RESTful APIs developed by the Victorian Government.
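As with any CKAN 2.7 instance, dataset metadata can be queried through the Action API. A minimal sketch (the base URL below is an assumption; verify the actual host against Data.Vic's API documentation):

```python
import json
import urllib.request

# Assumed CKAN base URL for Data.Vic; verify against the official docs.
base = "https://discover.data.vic.gov.au/api/3/action"

# package_search is standard CKAN: free-text search over dataset metadata.
with urllib.request.urlopen(base + "/package_search?q=transport&rows=3") as resp:
    for pkg in json.load(resp)["result"]["results"]:
        print(pkg["title"])
```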
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset comprises detailed information about GitHub repositories, issues, and pull requests, collected using the GitHub API. The data includes repository metadata (such as stars, forks, and open issues), along with historical data on issues and pull requests (PRs), including their creation, closure, and merging timelines.
This dataset contains information about GitHub repositories, including metadata such as stars, forks, and activity status.
| Column Name | Data Type | Description |
|---|---|---|
| id | object | Unique identifier for the repository. |
| name | object | Name of the repository (e.g., "docker"). |
| full_name | object | Full name of the repository (e.g., "prometheus/alertmanager"). |
| description | object | Description of the repository; may be empty. |
| stars | int64 | Number of stars the repository has. |
| forks | int64 | Number of times the repository has been forked. |
| open_issues | int64 | Number of open issues in the repository. |
| created_at | datetime | Date and time when the repository was created. |
| updated_at | datetime | Date and time when the repository was last updated. |
| size_category | object | Categorization of the repository based on the number of stars (micro, small, medium, large, mega). |
| stale | bool | Boolean flag indicating if the repository is "stale" (hasn't been updated in over 6 months). |
| stars_per_fork | float64 | Number of stars per fork (calculated). |
| stars_per_issue | float64 | Number of stars per open issue (calculated). |
| contributor_per_star | float64 | Number of contributors per star (calculated). |
| total_contributors | int64 | Total number of contributors from issues and pull requests. |
This dataset contains details of issues raised in the repositories, including information about their creation, closing, and state.
| Column Name | Data Type | Description |
|---|---|---|
| id | object | Unique identifier for the issue. |
| created_at | datetime | Date and time when the issue was created. |
| updated_at | datetime | Date and time when the issue was last updated. |
| closed_at | datetime | Date and time when the issue was closed (optional, null if open). |
| number | int64 | Issue number in the GitHub repository. |
| repository | object | The repository that the issue belongs to (name). |
| state | object | Current state of the issue (either "open" or "closed"). |
| title | object | Title of the issue. |
| resolution_time_days | float64 | Number of days taken to resolve the issue (calculated, -1 for unresolved issues). |
This dataset contains information about pull requests (PRs) in the repositories, including metadata such as their state, creation, closing, and merging time.
| Column Name | Data Type | Description |
|---|---|---|
| id | object | Unique identifier for the pull request. |
| created_at | datetime | Date and time when the pull request was created. |
| updated_at | datetime | Date and time when the pull request was last updated. |
| closed_at | datetime | Date and time when the pull request was closed (optional, null if open). |
| merged_at | datetime | Date and time when the pull request was merged (optional, null if not merged). |
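Under the assumption that the three tables are distributed as flat files with the column names above (the file names below are illustrative), the calculated columns can be reproduced like this:

```python
import pandas as pd

# Illustrative file names; the actual distribution format may differ.
repos = pd.read_csv("repositories.csv")
issues = pd.read_csv("issues.csv", parse_dates=["created_at", "closed_at"])

# Derived repository metrics, as described in the schema above.
repos["stars_per_fork"] = repos["stars"] / repos["forks"]
repos["stars_per_issue"] = repos["stars"] / repos["open_issues"]

# resolution_time_days: days to close an issue, -1 when still unresolved
# (closed_at is null/NaT for open issues).
delta = (issues["closed_at"] - issues["created_at"]).dt.total_seconds() / 86400
issues["resolution_time_days"] = delta.fillna(-1)
```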
The National API Directory Search API in the National data catalogue contains all API descriptions published in the National data catalogue.
An API description can also contain information about whether everyone can access the API, whether the license is open, whether the API is free to use, whether the API draws on an authoritative source, the price of using the API, limitations on the number of calls to the API, response time, uptime, service type (default), the status of the API, and associated dataset descriptions.
Objective: to make all API descriptions in the National API Directory Search API in the National data catalogue available for download as Distributions.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Glycoinformatics is a critical resource for the study of glycobiology, and glycobiology is a necessary component for understanding the complex interface between intra- and extracellular spaces. Despite this, there is limited software available to scientists studying these topics, requiring each to create fundamental data structures and representations anew for each of their applications. This leads to poor uptake of standardization and loss of focus on the real problems. We present glypy, a library written in Python for reading, writing, manipulating, and transforming glycans at several levels of precision. In addition to understanding several common formats for textual representation of glycans, the library also provides application programming interfaces (APIs) for major community databases, including GlyTouCan and UnicarbKB. The library is freely available under the Apache 2 common license with source code available at https://github.com/mobiusklein/ and documentation at https://glypy.readthedocs.io/.
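A small taste of the library's object model, sketched from the project documentation (the registry and method names below are assumptions drawn from glypy's docs and should be verified against the installed version):

```python
# Sketch based on glypy's documented API (https://glypy.readthedocs.io/);
# names here are assumptions to verify against the installed version.
from glypy import monosaccharides

# Look up a monosaccharide by name from the built-in registry.
glc = monosaccharides["Glc"]
print(glc.mass())  # monoisotopic mass of glucose

# Parsers and writers for textual glycan formats live under glypy.io,
# e.g. from glypy.io import glycoct, iupac
```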
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
Satellite design encompasses a multitude of steps from concept to flight. Mission specification to flight can take several years, depending on the scope, requirements and budget of the mission. The process also requires a wide range of design and management tools, with limited data interchange capability between them and a lack of coherency. Detailing the relationships between the satellite configuration, inventory control systems, life cycle management, design, analysis and test data is difficult at best. No tool exists that meets these needs for the general satellite design, system engineering and integration process. Sci_Zone proposes its innovative Satellite Design Automation architecture, SatBuilder Designer, in conjunction with the OpenSAT open database architecture, to meet this need. OpenSAT seamlessly integrates existing detail design tools with SatBuilder Designer, as well as databases tracking requirements, components and inventory, with the final configuration of the satellite. SatBuilder Designer, an AI-based toolset, provides for rapid design via design wizards and integration with existing design tools; it provides coherency between a range of applications and data sets. OpenSAT stores and distributes supporting satellite design, configuration, mission and test data from a centralized database server and can distribute the data across multiple platforms and via the internet.
https://straitsresearch.com/privacy-policy
The global open API market size is projected to grow from USD 5.61 billion in 2025 to USD 31.03 billion by 2033, exhibiting a CAGR of 23.83%.
Report Scope:
| Report Metric | Details |
|---|---|
| Market Size in 2024 | USD 4.53 Billion |
| Market Size in 2025 | USD 5.61 Billion |
| Market Size in 2033 | USD 31.03 Billion |
| CAGR | 23.83% (2025-2033) |
| Base Year for Estimation | 2024 |
| Historical Data | 2021-2023 |
| Forecast Period | 2025-2033 |
| Report Coverage | Revenue Forecast, Competitive Landscape, Growth Factors, Environment & Regulatory Landscape and Trends |
| Segments Covered | By Type, By Application, By Region |
| Geographies Covered | North America, Europe, APAC, Middle East and Africa, LATAM |
| Countries Covered | U.S., Canada, U.K., Germany, France, Spain, Italy, Russia, Nordic, Benelux, China, Korea, Japan, India, Australia, Taiwan, South East Asia, UAE, Turkey, Saudi Arabia, South Africa, Egypt, Nigeria, Brazil, Mexico, Argentina, Chile, Colombia |
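As a quick sanity check, the stated CAGR follows from the 2025 and 2033 figures in the table (2025 to 2033 spans eight compounding years):

```python
# CAGR = (end_value / start_value) ** (1 / years) - 1
start, end, years = 5.61, 31.03, 8  # USD billion, 2025 -> 2033
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.2%}")  # ~23.84%, matching the stated 23.83% up to rounding
```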
This dataset is retired as of December 5, 2021 and will not be updated. The Open311 API system is open source data and contains service request information relating to potholes and graffiti reports (5 types of graffiti requests). This dataset does not contain any personal identifying information. Data will be entered by citizens with their smart phones, entered online or reported by phone to a Customer Service Representative (CSR). Graffiti reported on private property will be triaged by a 311 CSR to ensure no personal information is disclosed further. For more information on Open311 and the collaborative effort to create an open standard for 311 services, visit the Open311 website. Open311 API Toronto information sheet: http://www.toronto.ca/311/open311.htm Conforms to the Open311 API specification: http://wiki.open311.org/GeoReport_v2 This data is collected through a variety of channels available to 311. The City is creating a channel to allow citizens to use 3rd party mobile applications to report 311 service requests related to graffiti and potholes.
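The GeoReport v2 specification linked above defines a small REST surface. A sketch of reading open service requests (the base URL is a placeholder; the city's actual endpoint is listed on its Open311 information page):

```python
import json
import urllib.request

# Placeholder host: substitute the city's actual Open311 base URL.
# GeoReport v2 defines standard query parameters such as service_code,
# start_date, end_date and status.
url = "https://<open311-endpoint>/requests.json?status=open"

with urllib.request.urlopen(url) as resp:
    for req in json.load(resp):
        print(req.get("service_name"), req.get("status"))
```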
All of the ERS mapping applications, such as the Food Environment Atlas and the Food Access Research Atlas, use map services developed and hosted by ERS as the source for their map content. These map services are open and freely available for use outside of the ERS map applications. Developers can include ERS maps in applications through the use of the map service REST API, and desktop GIS users can use the maps by connecting to the map server directly.
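The map services are exposed through a REST API; if they follow the common ArcGIS REST conventions (an assumption here, as is the placeholder service URL), a service can be inspected like this:

```python
import json
import urllib.request

# Placeholder: an ERS-hosted map service URL goes here; service paths are
# discoverable from the ERS atlas applications.
service = "https://<ers-map-server>/arcgis/rest/services/<name>/MapServer"

# Standard ArcGIS REST: appending f=pjson returns the service description
# (layers, extent, spatial reference) as JSON.
with urllib.request.urlopen(service + "?f=pjson") as resp:
    info = json.load(resp)
    for layer in info.get("layers", []):
        print(layer["id"], layer["name"])
```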
Jiaxing Open Source Shengshi Cn Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.
Global Shares Data: reference data on more than 80K stocks worldwide. Historical data from 2000 onwards. Pay only for the parameters you need. Flexible in customizing our product to the customer's needs. Free test access for as long as you need for integration. Reliable sources: issuer documents, disclosure websites, global depositories data and other open sources. The cost depends on the number of required parameters and re-distribution rights.
https://en.wikipedia.org/wiki/Public_domain
Open Data Inception is a project that compiles a comprehensive list of open data portals worldwide. It provides a geotagged, searchable map and list of these portals, making it easier for users to find clean, usable open data by country or topic. The initiative aims to address the challenge of locating reliable data sources, offering a user-friendly resource with an API for data enthusiasts and researchers. The project also explores standardizing metadata to improve data discoverability. Open Data Inception relies on crowdsourcing, and anyone can suggest the addition of a portal via this form.