81 datasets found
  1. Geo Open - IP address geolocation per country in MMDB format

    • data.public.lu
    mmdb
    Updated Nov 5, 2025
    Cite
    Computer Incident Response Center Luxembourg (2025). Geo Open - IP address geolocation per country in MMDB format [Dataset]. https://data.public.lu/en/datasets/61f12bb8a2a4fae49573cbbc/?resources=all
    Explore at:
    mmdb (multiple historical snapshots, roughly 9-80 MB each)
    Available download formats
    Dataset updated
    Nov 5, 2025
    Dataset authored and provided by
    Computer Incident Response Center Luxembourg
    License

    Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
    License information was derived automatically

    Description

    Geo Open is an IP address geolocation database at country-level precision, distributed in MMDB format. It can be used as a drop-in replacement by any software that reads the MMDB format. The database is automatically generated from public BGP announcements matched to country codes, so precision is at the country level.

    • Information about the MMDB format: https://maxmind.github.io/MaxMind-DB/
    • Open source server using Geo Open: https://github.com/adulau/mmdb-server
    • Open source library to read MMDB files: https://github.com/maxmind/MaxMind-DB-Reader-python
    • Historical dataset: https://cra.circl.lu/opendata/geo-open/
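As a rough illustration of what a country-level lookup over BGP-derived prefixes involves, here is a minimal sketch using only Python's standard ipaddress module. The prefixes and country codes below are hypothetical; real lookups should read the MMDB file with an MMDB reader library.

```python
import ipaddress

# Hypothetical prefix-to-country table standing in for the mapping the
# MMDB file encodes (built from BGP announcements in the real dataset).
PREFIXES = [
    (ipaddress.ip_network("192.0.2.0/24"), "LU"),
    (ipaddress.ip_network("198.51.100.0/24"), "FR"),
    (ipaddress.ip_network("2001:db8::/32"), "LU"),
]

def country_for(ip):
    """Return the country code of the most specific prefix containing ip, or None."""
    addr = ipaddress.ip_address(ip)
    best = None
    for net, cc in PREFIXES:
        if addr.version == net.version and addr in net:
            # Prefer the longest (most specific) matching prefix.
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, cc)
    return best[1] if best else None
```

An MMDB reader performs the same longest-prefix match over a binary search tree, which is far faster than this linear scan.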

  2. Open Context Database SQL Dump: Legacy Schema Tables and New Schema Tables

    • data-staging.niaid.nih.gov
    • zenodo.org
    Updated Jul 12, 2024
    Cite
    Eric C. Kansa (2024). Open Context Database SQL Dump: Legacy Schema Tables and New Schema Tables [Dataset]. https://data-staging.niaid.nih.gov/resources?id=zenodo_7783356
    Explore at:
    Dataset updated
    Jul 12, 2024
    Dataset provided by
    Open Context
    Authors
    Eric C. Kansa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Open Context (https://opencontext.org) publishes free and open access research data for archaeology and related disciplines. An open source (but bespoke) Django (Python) application supports these data publishing services. The software repository is here: https://github.com/ekansa/open-context-py

    The Open Context team runs ETL (extract, transform, load) workflows to import data contributed by researchers from various source relational databases and spreadsheets. Open Context uses a PostgreSQL (https://www.postgresql.org) relational database to manage these imported data in a graph-style schema. The Open Context Python application interacts with the PostgreSQL database via the Django Object-Relational Mapper (ORM).

    In 2023, the Open Context team finished migrating from a legacy database schema to a revised and refactored schema with stricter referential integrity and better consistency across tables. During this process, the team de-duplicated records, cleaned some metadata, and redacted attribute data left over from records that had been incompletely deleted in the legacy schema.

    This database dump includes all Open Context data organized with the legacy schema (table names that start with the 'oc_' or 'link_' prefixes) along with all Open Context data after cleanup and migration to the new database schema (table names that start with 'oc_all_'). The binary media files referenced by these structured data records are stored elsewhere. Binary media files for some projects, still in preparation, are not yet archived with long term digital repositories.

    These data comprehensively reflect the structured data currently published and publicly available on Open Context. Other data (such as user and group information) used to run the Website are not included.

    IMPORTANT

    This database dump contains data from roughly 180 different projects. Each project dataset has its own metadata and citation expectations. If you use these data, you must cite each data contributor appropriately, not just this Zenodo archived database dump.

  3. Github Advisory Dataset

    • kaggle.com
    zip
    Updated Sep 30, 2023
    Cite
    Aswin Jose (2023). Github Advisory Dataset [Dataset]. https://www.kaggle.com/datasets/aswinjose/github-advisory-dataset
    Explore at:
    zip (5782544 bytes)
    Available download formats
    Dataset updated
    Sep 30, 2023
    Authors
    Aswin Jose
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    All advisories acknowledged by GitHub are stored as individual files in this repository. They are formatted in the Open Source Vulnerability (OSV) format.

    You can submit a pull request to this database (see Contributions) to change or update the information in each advisory.

    Pull requests will be reviewed and either merged or closed by our internal security advisory curation team. If the advisory originated from a GitHub repository, we will also @mention the original publisher for optional commentary.

    We add advisories to the GitHub Advisory Database from the following sources:

    • Security advisories reported on GitHub
    • The National Vulnerability Database
    • The npm Security Advisories Database
    • The FriendsOfPHP Database
    • The Go Vulnerability Database
    • The Python Packaging Advisory Database
    • The Ruby Advisory Database
    • The RustSec Advisory Database
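Since each advisory is a JSON file in the OSV format, consuming them is a matter of parsing JSON. The snippet below parses a minimal, hypothetical OSV record with Python's standard json module (the record is abridged and its values are invented; the field names follow the OSV schema):

```python
import json

# A minimal, hypothetical advisory in the OSV format (abridged).
record = json.loads("""
{
  "id": "GHSA-xxxx-xxxx-xxxx",
  "summary": "Example vulnerability summary",
  "affected": [
    {
      "package": {"ecosystem": "npm", "name": "example-pkg"},
      "ranges": [
        {"type": "SEMVER",
         "events": [{"introduced": "0"}, {"fixed": "1.2.3"}]}
      ]
    }
  ]
}
""")

# Extract the affected packages, one (ecosystem, name) pair per entry.
packages = [(a["package"]["ecosystem"], a["package"]["name"])
            for a in record["affected"]]
```

Real advisories carry additional fields (severity, references, aliases); the OSV schema documentation describes the full structure.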

  4. Open Context Database SQL Dump

    • nde-dev.biothings.io
    • data.niaid.nih.gov
    • +2more
    Updated Jan 23, 2025
    Cite
    Kansa, Eric (2025). Open Context Database SQL Dump [Dataset]. https://nde-dev.biothings.io/resources?id=zenodo_14728228
    Explore at:
    Dataset updated
    Jan 23, 2025
    Dataset provided by
    Kansa, Sarah Whitcher
    Kansa, Eric
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Open Context (https://opencontext.org) publishes free and open access research data for archaeology and related disciplines. An open source (but bespoke) Django (Python) application supports these data publishing services. The software repository is here: https://github.com/ekansa/open-context-py

    The Open Context team runs ETL (extract, transform, load) workflows to import data contributed by researchers from various source relational databases and spreadsheets. Open Context uses a PostgreSQL (https://www.postgresql.org) relational database to manage these imported data in a graph-style schema. The Open Context Python application interacts with the PostgreSQL database via the Django Object-Relational Mapper (ORM).

    This database dump includes all published structured data used by Open Context, organized in the current schema (table names that start with 'oc_all_'). The binary media files referenced by these structured data records are stored elsewhere. Binary media files for some projects, still in preparation, are not yet archived with long term digital repositories.

    These data comprehensively reflect the structured data currently published and publicly available on Open Context. Other data (such as user and group information) used to run the Website are not included.

    IMPORTANT

    This database dump contains data from more than 190 different projects. Each project dataset has its own metadata and citation expectations. If you use these data, you must cite each data contributor appropriately, not just this Zenodo archived database dump.

  5. Open-source traffic and CO2 emission dataset for commercial aviation

    • zenodo.org
    • data.niaid.nih.gov
    csv
    Updated Nov 17, 2023
    Cite
    Antoine Salgas; Junzi Sun; Scott Delbecq; Thomas Planès; Gilles Lafforgue (2023). Open-source traffic and CO2 emission dataset for commercial aviation [Dataset]. http://doi.org/10.5281/zenodo.10125899
    Explore at:
    csv
    Available download formats
    Dataset updated
    Nov 17, 2023
    Dataset provided by
    ISAE-SUPAERO
    Authors
    Antoine Salgas; Junzi Sun; Scott Delbecq; Thomas Planès; Gilles Lafforgue
    License

    GNU General Public License v3.0: https://www.gnu.org/licenses/gpl-3.0-standalone.html

    Time period covered
    Oct 30, 2023
    Description

    [Deprecated version, used in the support article; please download the latest version]

    This record is a global open-source passenger air traffic dataset primarily dedicated to the research community.
    It gives the seating capacity available on each origin-destination route for the year 2019, together with the associated aircraft and airline when that information is available.

    Context on the original work is given in the related article (https://journals.open.tudelft.nl/joas/article/download/7201/5683) and on the associated GitHub page (https://github.com/AeroMAPS/AeroSCOPE/).
    A simple data exploration interface will be available at www.aeromaps.eu/aeroscope.
    The dataset was created by aggregating various available open-source databases with limited geographical coverage. It was then completed using a route database created by parsing Wikipedia and Wikidata, on which traffic volume was estimated using a machine learning algorithm (XGBoost) trained on traffic and socio-economic data.


    1- DISCLAIMER


    The dataset was gathered to allow highly aggregated analyses of air traffic at the continental or country level. At the route level, accuracy is limited, as mentioned in the associated article, and improper usage could lead to erroneous analyses.


    2- DESCRIPTION

    Each data entry represents an (Origin-Destination-Operator-Aircraft type) tuple.

    Please refer to the support article for more details (see above).

    The dataset contains the following columns:

    • "First column": index
    • airline_iata : IATA code of the operator in nominal cases. An ICAO -> IATA code conversion was performed for some sources, and the ICAO code was kept if no match was found.
    • acft_icao : ICAO code of the aircraft type
    • acft_class : Aircraft class identifier, own classification.
      • WB: Wide Body
      • NB: Narrow Body
      • RJ: Regional Jet
      • PJ: Private Jet
      • TP: Turbo Propeller
      • PP: Piston Propeller
      • HE: Helicopter
      • OTHER
    • seymour_proxy: Aircraft code for Seymour Surrogate (https://doi.org/10.1016/j.trd.2020.102528), own classification to derive proxy aircraft when nominal aircraft type unavailable in the aircraft performance model.
    • source: Original data source for the record, before compilation and enrichment.
      • ANAC: Brazilian Civil Aviation Authorities
      • AUS Stats: Australian Civil Aviation Authorities
      • BTS: US Bureau of Transportation Statistics T100
      • Estimation: Own model, estimation on Wikipedia-parsed route database
      • Eurocontrol: Aggregation and enrichment of R&D database
      • OpenSky
      • World Bank
    • seats: Number of seats available for the data entry, AFTER airport residual scaling
    • n_flights: Number of flights of the data entry, when available
    • iata_departure, iata_arrival : IATA code of the origin and destination airports. A few BTS in-house identifiers may remain, but they are marginal.
    • departure_lon, departure_lat, arrival_lon, arrival_lat : Origin and destination coordinates, could be NaN if the IATA identifier is erroneous
    • departure_country, arrival_country: Origin and destination country ISO2 code. WARNING: disable the default NaN interpretation of "NA" (Namibia) at import
    • departure_continent, arrival_continent: Origin and destination continent code. WARNING: disable the default NaN interpretation of "NA" (North America) at import
    • seats_no_est_scaling: Number of seats available for the data entry, BEFORE airport residual scaling
    • distance_km: Flight distance (km)
    • ask: Available Seat Kilometres
    • rpk: Revenue Passenger Kilometres (simple calculation from ASK using IATA average load factor)
    • fuel_burn_seymour: Fuel burn per flight (kg) when seymour proxy available
    • fuel_burn: Total fuel burn of the data entry (kg)
    • co2: Total CO2 emissions of the data entry (kg)
    • domestic: Domestic/international boolean (Domestic=1, International=0)
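Given the NA warnings above, here is a minimal sketch of loading the country and continent columns without losing "NA" values; the sample rows are hypothetical. Python's csv module keeps "NA" as a literal string, while pandas converts it to NaN by default.

```python
import csv
import io

# Hypothetical rows mirroring the dataset's country/continent columns.
sample = io.StringIO(
    "departure_country,arrival_country,departure_continent\n"
    "NA,FR,NA\n"  # "NA" means Namibia / North America here, not missing data
)
rows = list(csv.DictReader(sample))

# The csv module preserves "NA" as a string. With pandas, pass
# keep_default_na=False (or a restricted na_values list) to read_csv,
# otherwise "NA" is silently converted to NaN.
```

The same precaution applies to any tool whose CSV importer treats "NA" as a missing-value marker by default.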

    3- Citation

    Please cite the support paper instead of the dataset itself.

    Salgas, A., Sun, J., Delbecq, S., Planès, T., & Lafforgue, G. (2023). Compilation of an open-source traffic and CO2 emissions dataset for commercial aviation. Journal of Open Aviation Science. https://doi.org/10.59490/joas.2023.7201

  6. Data from: WikiDBs - A Large-Scale Corpus Of Relational Databases From...

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    text/x-python, zip
    Updated Dec 12, 2024
    Cite
    Liane Vogel; Jan-Micha Bodensohn; Carsten Binnig (2024). WikiDBs - A Large-Scale Corpus Of Relational Databases From Wikidata [Dataset]. http://doi.org/10.5281/zenodo.11559814
    Explore at:
    zip, text/x-python
    Available download formats
    Dataset updated
    Dec 12, 2024
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Liane Vogel; Jan-Micha Bodensohn; Carsten Binnig
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    WikiDBs is an open-source corpus of 100,000 relational databases. We aim to support research on tabular representation learning on multi-table data. The corpus is based on Wikidata and aims to follow certain characteristics of real-world databases.

    WikiDBs was published as a spotlight paper in the Datasets and Benchmarks track at NeurIPS 2024.

    WikiDBs contains the database schemas, as well as table contents. The database tables are provided as CSV files, and each database schema as JSON. The 100,000 databases are available in five splits, containing 20k databases each. In total, around 165 GB of disk space are needed for the full corpus. We also provide a script to convert the databases into SQLite.
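The corpus ships its own SQLite conversion script; the sketch below only illustrates the idea of loading one CSV table into SQLite with Python's standard csv and sqlite3 modules, using a hypothetical miniature table in place of a real WikiDBs file.

```python
import csv
import io
import sqlite3

# Hypothetical miniature standing in for one WikiDBs CSV table.
table_csv = io.StringIO("id,label\n1,Alpha\n2,Beta\n")
rows = list(csv.reader(table_csv))
header, data = rows[0], rows[1:]

conn = sqlite3.connect(":memory:")  # a real run would use a .sqlite file
cols = ", ".join('"%s" TEXT' % c for c in header)
conn.execute('CREATE TABLE example (%s)' % cols)
conn.executemany(
    'INSERT INTO example VALUES (%s)' % ", ".join("?" for _ in header),
    data,
)
count = conn.execute("SELECT COUNT(*) FROM example").fetchone()[0]
```

Column types are taken as TEXT here for simplicity; the JSON schema files in the corpus describe the actual column semantics per database.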

  7. Data Repository for "Open Source Software as Digital Platforms to...

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Oct 29, 2025
    Cite
    Petralia, Sergio (2025). Data Repository for "Open Source Software as Digital Platforms to Innovate" [Dataset]. http://doi.org/10.7910/DVN/UQNVHF
    Explore at:
    Dataset updated
    Oct 29, 2025
    Dataset provided by
    Harvard Dataverse
    Authors
    Petralia, Sergio
    Description

    This dataverse hosts the data repository of the article entitled "Open Source Software as Digital Platforms to Innovate". It contains databases and R code that replicate the main results of the article. The article contains a detailed description of how these databases were constructed and how they are organized.

  8. KeyNet: An Open Source Dataset of Key Bittings

    • data.mendeley.com
    • zenodo.org
    Updated Mar 20, 2023
    Cite
    Alexander Ke (2023). KeyNet: An Open Source Dataset of Key Bittings [Dataset]. http://doi.org/10.17632/spth99fm4c.1
    Explore at:
    Dataset updated
    Mar 20, 2023
    Authors
    Alexander Ke
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository introduces a dataset of obverse and reverse images of 319 unique Schlage SC1 keys, labeled with each key's bitting code. We make our data accessible in HDF5 format, through aligned arrays where the Nth index of each array represents the Nth key, with keys sorted ascending by bitting code:

    • /bittings: each key's 1-9 bitting code, recorded from the shoulder through the tip of the key; uint8 of shape (319, 5).
    • /obverse: obverse image of each key; uint8 of shape (319, 512, 512, 3).
    • /reverse: reverse image of each key; uint8 of shape (319, 512, 512, 3).

    Full dataset details available on GitHub https://github.com/alexxke/keynet

  9. Badger

    • kaggle.com
    zip
    Updated Mar 16, 2025
    Cite
    Terry Eppler (2025). Badger [Dataset]. https://www.kaggle.com/datasets/terryeppler/badger/discussion?sort=undefined
    Explore at:
    zip (325078128 bytes)
    Available download formats
    Dataset updated
    Mar 16, 2025
    Authors
    Terry Eppler
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Data sources for Badger, an open-source budget execution and data analysis tool for federal budget analysts at the Environmental Protection Agency. Badger is based on WPF and .NET, and is written in C#.

    ⚙️Features

    • Multiple data providers.
    • Datasets can be found on Kaggle
    • Charting and reporting.
    • Internal web browser, Baby, with queries optimized for searching .gov domains.
    • Pre-defined schema for more than 100 environmental data models.
    • Editors for SQLite, SQL Compact Edition, MS Access, SQL Server Express.
    • Excel-ish UI on top of a real database.
    • Mapping for congressional earmark reporting and monitoring of pollution sites.
    • Financial data bound to environmental programs and statutory authority.
    • Ad-hoc calculations.
    • Add agency/region/division-specific branding.
    • The WinForms version of Badger is Sherpa.

    📦 Database Providers

    Databases play a critical role in environmental data analysis: they provide a structured system to store, organize, and efficiently retrieve large amounts of data, allowing analysts to access and manipulate the information needed to extract meaningful insights through queries and analysis tools. In effect, the database acts as the central repository for the data used in analysis. Badger provides the following providers to store and analyze data locally.

    • SQLite is a C-language library that implements a small, fast, self-contained, high-reliability, full-featured SQL database engine.
    • SQL CE is a discontinued but still useful relational database produced by Microsoft for applications that run on mobile devices and desktops.
    • SQL Server Express Edition is a scaled down, free edition of SQL Server, which includes the core database engine.
    • MS Access is a database management system (DBMS) from Microsoft that combines the relational Access Database Engine (ACE) with a graphical user interface and software-development tools.

    💻 System requirements

    • You need the VC++ 2019 Runtime (both the 32-bit and 64-bit versions).
    • You will need .NET 8.
    • You need to install the version of the VC++ Runtime that Baby Browser needs. Since Badger uses CefSharp 106, the versions listed above are required.

    📚Documentation

    📝 Code

    • Controls - main UI layer with numerous controls and related functionality.
    • Styles - XAML-based styles for the Badger UI layer.
    • Enumerations - various enumerations used for budgetary accounting.
    • Extensions - useful extension methods for budget analysis by type.
    • Clients - other tools used and available.
    • Ninja - models used in EPA budget data analysis.
    • IO - input output classes used for networking and the file system.
    • Static - static types used in the analysis of environmental budget data.
    • Interfaces - abstractions used in the analysis of environmental budget data.
    • bin - Binaries are included in the bin folder due to the complex Baby setup required. Don't empty this folder.
    • Badger uses CefSharp 106 for Baby Browser and is built on NET 8
    • Badger supports x64 specific builds
    • bin/storage - HTML and JS required for downloads manager and custom error pages

    Dashboards

    Environmental...

  10. Database Infrastructure for Mass Spectrometry - Per- and Polyfluoroalkyl...

    • nist.gov
    • data.nist.gov
    • +1more
    Updated Jul 5, 2023
    Cite
    National Institute of Standards and Technology (2023). Database Infrastructure for Mass Spectrometry - Per- and Polyfluoroalkyl Substances [Dataset]. http://doi.org/10.18434/mds2-2905
    Explore at:
    Dataset updated
    Jul 5, 2023
    Dataset provided by
    National Institute of Standards and Technology: http://www.nist.gov/
    License

    NIST Open License: https://www.nist.gov/open/license

    Description

    Data here contain and describe an open-source structured query language (SQLite) portable database containing high-resolution mass spectrometry data (MS1 and MS2) for per- and polyfluoroalkyl substances (PFAS), along with associated metadata about their measurement techniques, quality assurance metrics, and the samples from which they were produced. These data are stored in a format adhering to the Database Infrastructure for Mass Spectrometry (DIMSpec) project, which produces and uses databases like this one and provides a complete toolkit for non-targeted analysis. See more information about the full DIMSpec code base - as well as these data for demonstration purposes - at GitHub (https://github.com/usnistgov/dimspec) or view the full User Guide for DIMSpec (https://pages.nist.gov/dimspec/docs). The files of most interest here are the database itself (dimspec_nist_pfas.sqlite), an entity relationship diagram (ERD.png), and a data dictionary (DIMSpec for PFAS_1.0.1.20230615_data_dictionary.json) that elucidate the database structure and assist in interpretation and use.
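As a starting point for exploring the database file, a small generic helper using Python's standard sqlite3 module lists the tables of any SQLite database; the file name in the comment follows the description above.

```python
import sqlite3

def list_tables(path):
    """Return the table names in a SQLite database file, sorted by name."""
    conn = sqlite3.connect(path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        conn.close()

# e.g. list_tables("dimspec_nist_pfas.sqlite")
```

Cross-checking the returned table names against the ERD and the JSON data dictionary shipped alongside the database is a reasonable first step before querying.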

  11. Data from: TerraDS: A Dataset for Terraform HCL Programs

    • zenodo.org
    • data-staging.niaid.nih.gov
    application/gzip, bin
    Updated Nov 27, 2024
    Cite
    Christoph Bühler; David Spielmann; Roland Meier; Guido Salvaneschi (2024). TerraDS: A Dataset for Terraform HCL Programs [Dataset]. http://doi.org/10.5281/zenodo.14217386
    Explore at:
    application/gzip, bin
    Available download formats
    Dataset updated
    Nov 27, 2024
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Christoph Bühler; David Spielmann; Roland Meier; Guido Salvaneschi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    TerraDS

    The TerraDS dataset provides a comprehensive collection of Terraform programs written in the HashiCorp Configuration Language (HCL). As Infrastructure as Code (IaC) gains popularity for managing cloud infrastructure, Terraform has become one of the leading tools due to its declarative nature and widespread adoption. However, a lack of publicly available, large-scale datasets has hindered systematic research on Terraform practices. TerraDS addresses this gap by compiling metadata and source code from 62,406 open-source repositories with valid licenses. This dataset aims to foster research on best practices, vulnerabilities, and improvements in IaC methodologies.

    Structure of the Database

    The TerraDS dataset is organized into two main components: a SQLite database containing metadata and an archive of source code (~335 MB). The metadata, captured in a structured format, includes information about repositories, modules, and resources:

    1. Repository Data:

    • Contains 62,406 repositories with fields such as repository name, creation date, star count, and permissive license details.
    • Provides cloneable URLs for access and analysis.
    • Tracks additional metrics like repository size and the latest commit details.

    2. Module Data:

    • Consists of 279,344 modules identified within the repositories.
    • Each module includes its relative path, referenced providers, and external module calls stored as JSON objects.

    3. Resource Data:

    • Encompasses 1,773,991 resources, split into managed (1,484,185) and data (289,806) resources.
    • Each resource entry details its type, provider, and whether it is managed or read-only.

    Structure of the Archive

    The provided archive contains the source code of the 62,406 repositories to allow further analysis based on the actual source instead of the metadata alone. Researchers can thus access the permissively licensed repositories and conduct studies on the executable HCL code.
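To illustrate the kind of metadata query the SQLite database enables, the sketch below builds a hypothetical in-memory miniature of the resource metadata and counts managed versus data resources. The table and column names here are invented for illustration; the real schema may differ.

```python
import sqlite3

# Hypothetical miniature of the dataset's resource metadata.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resource (type TEXT, provider TEXT, mode TEXT)")
conn.executemany(
    "INSERT INTO resource VALUES (?, ?, ?)",
    [
        ("aws_s3_bucket", "aws", "managed"),
        ("aws_instance", "aws", "managed"),
        ("aws_ami", "aws", "data"),
    ],
)

# Aggregate counts per resource mode, mirroring the managed/data split
# reported in the description (1,484,185 managed vs 289,806 data).
counts = dict(
    conn.execute("SELECT mode, COUNT(*) FROM resource GROUP BY mode").fetchall()
)
```

The same GROUP BY pattern extends to per-provider or per-repository breakdowns once the real schema is known.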

    Tools

    The "HCL Dataset Tools" file contains a snapshot of the https://github.com/prg-grp/hcl-dataset-tools repository - for long term archival reasons. The tools in this repository can be used to reproduce this dataset.

    One of the tools, "RepositorySearcher", can be used to fetch metadata for various other GitHub API queries, not only Terraform code. While RepositorySearcher supports other types of repository search, the other tools provided are focused on Terraform repositories.

  12. IPinfo - IP to Country and ASN Data

    • kaggle.com
    zip
    Updated Nov 27, 2025
    Cite
    IPinfo (2025). IPinfo - IP to Country and ASN Data [Dataset]. https://www.kaggle.com/datasets/ipinfo/ipinfo-country-asn/code
    Explore at:
    zip (41717241 bytes)
    Available download formats
    Dataset updated
    Nov 27, 2025
    Authors
    IPinfo
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    IPinfo IP to Country ASN database

    IPinfo's IP to Country ASN database is an open-access database that provides information on the country and ASN (Autonomous System Number) of a given IP address.

    • It offers full accuracy and is updated daily.
    • The database is licensed under CC-BY-SA 4.0, allowing for commercial usage.
    • It includes both IPv4 and IPv6 addresses.
    • There are two file formats available: CSV and MMDB.

    Notebook

    Please explore the provided notebook to learn about the dataset:

    🔗 IPinfo IP to Country ASN Demo Notebook for Kaggle

    Documentation

    Detailed documentation for the IP to Country ASN database can be found on IPinfo's documentation page. Database samples are also available on IPinfo's GitHub repo.

    🔗 Documentation: https://ipinfo.io/developers/ip-to-country-asn-database

    Field Name      Example          Description
    start_ip        194.87.139.0     The starting IP address of an IP address range
    end_ip          194.87.139.255   The ending IP address of an IP address range
    country         NL               The ISO 3166 country code of the location
    country_name    Netherlands      The name of the country
    continent       EU               The continent code of the country
    continent_name  Europe           The name of the continent
    asn             AS1239           The Autonomous System Number
    as_name         Sprint           The name of the AS (Autonomous System) organization
    as_domain       sprint.net       The official domain or website of the AS organization

    Context and value

    The IPinfo IP to Country ASN database is a subset of IPinfo's IP to Geolocation database and the ASN database.

    The database provides daily updates, complete IPv4 and IPv6 coverage, and full accuracy, just like its parent databases. The database is crucial for:

    • Cybersecurity and threat intelligence
    • Open Source Intelligence (OSINT)
    • Firewall policy configuration
    • Sales intelligence
    • Marketing analytics and adtech
    • Personalized user experience

    Whether you are running a web service or a server connected to the internet, this enterprise-ready database should be part of your tech stack.

    Usage

    In this dataset, we include 3 files:

    • country_asn.csv → For reverse IP look-ups and running IP-based analytics
    • country_asn.mmdb → For IP address information look-ups
    • ips.txt → Sample IP addresses

    Using the CSV dataset

    As the CSV dataset is relatively small (~120 MB), any dataframe library or database should be adequate. However, we recommend not using the CSV file for IP address lookups. For everything else, feel free to explore the CSV file format.
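As a sketch of a reverse IP lookup over the CSV's start_ip/end_ip ranges, the snippet below combines the field examples listed earlier into one hypothetical row and does a linear scan with Python's standard ipaddress module. At full scale you would build a sorted interval index (or, better, use the MMDB file) instead.

```python
import ipaddress

# One hypothetical row assembled from the field examples above;
# the full CSV holds many such ranges.
RANGES = [("194.87.139.0", "194.87.139.255", "NL", "AS1239")]

def lookup(ip):
    """Return (country, asn) for the range containing ip, or None."""
    n = int(ipaddress.ip_address(ip))
    for start, end, country, asn in RANGES:
        if int(ipaddress.ip_address(start)) <= n <= int(ipaddress.ip_address(end)):
            return country, asn
    return None
```

Converting addresses to integers makes the containment test a simple comparison, which is also the representation a sorted index would use.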

    Using the MMDB dataset

    The MMDB dataset requires a third-party MMDB reader library, which enables IP address lookups at the most efficient speed possible. As this is a third-party library, you should install it via pip in your notebook, which requires an internet connection to be enabled in your notebook settings.

    Please see our attached demo notebook for usage examples.

    IP to Country ASN enables many diverse solutions, and we encourage you to share your ideas with the Kaggle community!

    Sources

    The geolocation data is produced by IPinfo's ProbeNet, a globe-spanning probe network infrastructure with 400+ servers. The ASN data is collected from public sources such as WHOIS records and geofeeds, then parsed and structured to make it more data-friendly.

    See the Data Provenance section below to learn more.

    Please note that this Kaggle Dataset is not updated daily. We recommend users download our free IP to Country ASN database from IPinfo's website directly for daily updates.

    Terminology

    AS Organization - An AS (Autonomous System) organization is an organization that owns a block or range of IP addresses, allocated to it by one of the Regional Internet Registries (RIRs). Even though an AS organization owns an IP address range, it sometimes does not operate the addresses directly and may rent them out to other organizations. You can check out our IP to Company data or ASN database to learn more about them.

    ASN - ASN or Autonomous System Number is the unique identifying number assigned to an AS organization.

    IP to ASN - Get ASN and AS organizat...

  13. Data from: glypy: An Open Source Glycoinformatics Library

    • acs.figshare.com
    zip
    Updated May 31, 2023
    Cite
    Joshua Klein; Joseph Zaia (2023). glypy: An Open Source Glycoinformatics Library [Dataset]. http://doi.org/10.1021/acs.jproteome.9b00367.s001
    Explore at:
    zip (available download formats)
    Dataset updated
    May 31, 2023
    Dataset provided by
    ACS Publications
    Authors
    Joshua Klein; Joseph Zaia
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Glycoinformatics is a critical resource for the study of glycobiology, and glycobiology is a necessary component for understanding the complex interface between intra- and extracellular spaces. Despite this, there is limited software available to scientists studying these topics, requiring each to create fundamental data structures and representations anew for each of their applications. This leads to poor uptake of standardization and loss of focus on the real problems. We present glypy, a library written in Python for reading, writing, manipulating, and transforming glycans at several levels of precision. In addition to understanding several common formats for textual representation of glycans, the library also provides application programming interfaces (APIs) for major community databases, including GlyTouCan and UnicarbKB. The library is freely available under the Apache 2 common license with source code available at https://github.com/mobiusklein/ and documentation at https://glypy.readthedocs.io/.

  14. Coronavirus COVID-19 Global Cases by the Center for Systems Science and...

    • github.com
    • systems.jhu.edu
    • +1more
    Cite
    Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE), Coronavirus COVID-19 Global Cases by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU) [Dataset]. https://github.com/CSSEGISandData/COVID-19
    Explore at:
    Dataset provided by
    Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE)
    Area covered
    Global
    Description

    2019 Novel Coronavirus COVID-19 (2019-nCoV) Visual Dashboard and Map:
    https://www.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6

    • Confirmed Cases by Country/Region/Sovereignty
    • Confirmed Cases by Province/State/Dependency
    • Deaths
    • Recovered

    Downloadable data:
    https://github.com/CSSEGISandData/COVID-19

    Additional Information about the Visual Dashboard:
    https://systems.jhu.edu/research/public-health/ncov

  15. MetaSBT Viruses database

    • data.niaid.nih.gov
    • zenodo.org
    Updated Mar 25, 2025
    Cite
    Cumbo, Fabio; Blankenberg, Daniel (2025). MetaSBT Viruses database [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7786393
    Explore at:
    Dataset updated
    Mar 25, 2025
    Dataset provided by
    Center for Computational Life Sciences, Lerner Research Institute, Cleveland Clinic Foundation
    Authors
    Cumbo, Fabio; Blankenberg, Daniel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    MetaSBT database with viral metagenome-assembled genomes (MAGs) from the MGV database.

    It comprises 26,285 reference genomes and 190,756 MAGs organised into 40,729 species, 13,916 genera, 8,862 families, 6,551 orders, 3,014 classes, and 17 phyla.

    MetaSBT public databases are indexed in the MetaSBT-DBs repository on GitHub at https://github.com/cumbof/MetaSBT-DBs and they are produced with the open-source MetaSBT framework available at https://github.com/cumbof/MetaSBT.

    Databases can be installed locally with the unpack command of MetaSBT as documented in the official wiki at https://github.com/cumbof/MetaSBT/wiki.

    Other commands to interact with the database are available through the MetaSBT framework and are documented in the same wiki.

  16. Bhagavad Gita API Database

    • kaggle.com
    zip
    Updated Jul 9, 2024
    Cite
    Pt. Prashant Tripathi (2024). Bhagavad Gita API Database [Dataset]. https://www.kaggle.com/datasets/ptprashanttripathi/bhagavad-gita-api-database
    Explore at:
    zip (9412146 bytes, available download formats)
    Dataset updated
    Jul 9, 2024
    Authors
    Pt. Prashant Tripathi
    License

    GNU LGPL 3.0: http://www.gnu.org/licenses/lgpl-3.0.html

    Description

    Bhagavad Gita Translations and Commentary Dataset


    Dataset Contents:

    This dataset compiles translations and commentaries of the Bhagavad Gita, an ancient Indian scripture, provided by various authors. The Bhagavad Gita is a 700-verse Hindu scripture that is part of the Indian epic Mahabharata. It is revered for its philosophical and spiritual teachings.

    The dataset includes translations and commentaries in different languages, such as Sanskrit, Hindi, English, and more. It features the insights and interpretations of renowned authors and scholars who have contributed to the understanding of the Bhagavad Gita's teachings. The dataset encompasses multiple dimensions of the scripture, including translations, transliterations, commentaries, and explanations.

    Featured Authors:

    • Swami Tejomayananda
    • Swami Sivananda
    • Shri Purohit Swami
    • Swami Chinmayananda
    • Dr. S. Sankaranarayan
    • Swami Adidevananda
    • Swami Gambirananda
    • Shri Madhavacharya
    • Shri Anandgiri
    • Swami Ramsukhdas
    • Shri Ramanuja
    • Shri Abhinav Gupta
    • Shri Shankaracharya
    • Shri Jayatritha
    • Shri Vallabhacharya
    • Shri M. Saraswati
    • Shri Shridhara Swami
    • Shri Dhanpati
    • Vedantadeshika
    • Shri Purushottamji
    • Shri Neelkanth

    Bhagavad Gita API:

    In addition to the dataset, an API named the Bhagavad Gita API has been developed to provide easy access to the Bhagavad Gita's verses, translations, and commentaries. This API allows developers and enthusiasts to access the teachings of the Bhagavad Gita programmatically. The API can be accessed at https://bhagavadgitaapi.in/.

    API Source Code:

    The source code for the Bhagavad Gita API is available on GitHub at https://github.com/vedicscriptures/bhagavad-gita-api. It provides an open-source resource for those interested in contributing or understanding how the API works.

  17. TractoInferno: A large-scale, open-source, multi-site database for machine...

    • openneuro.org
    Updated Aug 16, 2022
    + more versions
    Cite
    Philippe Poulin; Guillaume Theaud; Pierre-Marc Jodoin; Maxime Descoteaux (2022). TractoInferno: A large-scale, open-source, multi-site database for machine learning dMRI tractography [Dataset]. http://doi.org/10.18112/openneuro.ds003900.v1.1.1
    Explore at:
    Dataset updated
    Aug 16, 2022
    Dataset provided by
    OpenNeuro: https://openneuro.org/
    Authors
    Philippe Poulin; Guillaume Theaud; Pierre-Marc Jodoin; Maxime Descoteaux
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    TractoInferno Machine Learning Tractography Dataset

    The /derivatives folder contains the pre-split training/validation/testing datasets, each containing unique subjects with the following:

    • T1W image
    • DTI metrics maps (FA/AD/MD/RD)
    • DWI image with bval/bvec
    • fodf map + fodf peaks
    • White matter/grey matter/csf masks
    • DWI SH map (SH of order 6 fitted to the DWI signal, using the descoteaux07 basis: https://dipy.org/documentation/1.3.0./theory/sh_basis/)
    • Tractograms of the following delineated bundles
      • AF_L
      • AF_R
      • CC_Fr_1
      • CC_Fr_2
      • CC_Oc
      • CC_Pa
      • CC_Pr_Po
      • CG_L
      • CG_R
      • FAT_L
      • FAT_R
      • FPT_L
      • FPT_R
      • IFOF_L
      • IFOF_R
      • ILF_L
      • ILF_R
      • MCP
      • MdLF_L
      • MdLF_R
      • OR_ML_L
      • OR_ML_R
      • POPT_L
      • POPT_R
      • PYT_L
      • PYT_R
      • SLF_L
      • SLF_R
      • UF_L
      • UF_R

    All tractograms contain compressed streamlines to reduce disk space, which means that the step size is variable. If your algorithm requires a fixed step size, you have to manually resample the streamlines, which can be done using SCILPY (https://github.com/scilus/scilpy) and the scil_resample_streamlines.py script: https://github.com/scilus/scilpy/blob/master/scripts/scil_resample_streamlines.py
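
    Conceptually, fixed-step resampling walks along each streamline and emits points at equal arc-length intervals. The snippet below is a minimal NumPy sketch of that idea, not scilpy's implementation; use scil_resample_streamlines.py for real tractograms:

    ```python
    import numpy as np

    def resample_fixed_step(points: np.ndarray, step: float) -> np.ndarray:
        """Resample a polyline (N x 3) to a fixed step size via linear interpolation."""
        seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
        cum = np.concatenate([[0.0], np.cumsum(seg)])  # arc length at each vertex
        new_dists = np.arange(0.0, cum[-1], step)      # equally spaced arc lengths
        return np.vstack(
            [np.interp(new_dists, cum, points[:, d]) for d in range(points.shape[1])]
        ).T

    # A toy streamline with variable step size (as in compressed tractograms).
    line = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
    resampled = resample_fixed_step(line, step=0.5)
    print(resampled.shape)  # (6, 3): points at arc lengths 0.0, 0.5, ..., 2.5
    ```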

    To evaluate a candidate tractogram, refer to: https://github.com/scil-vital/TractoInferno/

  18. Supporting data and tools for "An open source cyberinfrastructure for...

    • beta.hydroshare.org
    • hydroshare.org
    • +1more
    zip
    Updated Apr 17, 2023
    Cite
    Camilo J. Bastidas Pacheco; Jeffery S. Horsburgh; Juan Caraballo; Nour Attallah (2023). Supporting data and tools for "An open source cyberinfrastructure for collecting, processing, storing and accessing high temporal resolution residential water use data" [Dataset]. http://doi.org/10.4211/hs.aaa7246437144f2390411ef9f2f4ebd0
    Explore at:
    zip (20.8 MB, available download formats)
    Dataset updated
    Apr 17, 2023
    Dataset provided by
    HydroShare
    Authors
    Camilo J. Bastidas Pacheco; Jeffery S. Horsburgh; Juan Caraballo; Nour Attallah
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2021 - Jan 31, 2021
    Area covered
    Description

    The files provided here are the supporting data and code files for the analyses presented in "An open source cyberinfrastructure for collecting, processing, storing and accessing high temporal resolution residential water use data," an article in Environmental Modelling and Software (https://doi.org/10.1016/j.envsoft.2021.105137).

    The data included in this resource were processed using the Cyberinfrastructure for Intelligent Water Supply (CIWS) (https://github.com/UCHIC/CIWS-Server) and collected using the CIWS-Node (https://github.com/UCHIC/CIWS-WM-Node) data logging device. CIWS is an open-source, modular, generalized architecture designed to automate the process from data collection to analysis and presentation of high temporal resolution residential water use data. The CIWS-Node is a low-cost device capable of collecting this type of data on magnetically driven water meters.

    The code included allows replication of the analyses presented in the journal paper, and the raw data included allow for extension of the analyses conducted. The journal paper presents the architecture design and a prototype implementation for CIWS that was built using existing open-source technologies, including smart meters, databases, and services. Two case studies were selected to test functionalities of CIWS, covering push and pull data models within single-family and multi-unit residential contexts, respectively. CIWS was tested for scalability and performance within our design constraints and proved effective in both case studies. All CIWS elements and the case study data described are freely available for re-use.

  19. CVEfixes Dataset: Automatically Collected Vulnerabilities and Their Fixes...

    • zenodo.org
    • data-staging.niaid.nih.gov
    • +1more
    zip
    Updated Sep 10, 2022
    + more versions
    Cite
    Guru Bhandari; Guru Bhandari; Amara Naseer; Amara Naseer; Leon Moonen; Leon Moonen (2022). CVEfixes Dataset: Automatically Collected Vulnerabilities and Their Fixes from Open-Source Software [Dataset]. http://doi.org/10.5281/zenodo.4476564
    Explore at:
    zip (available download formats)
    Dataset updated
    Sep 10, 2022
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Guru Bhandari; Guru Bhandari; Amara Naseer; Amara Naseer; Leon Moonen; Leon Moonen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    CVEfixes is a comprehensive vulnerability dataset that is automatically collected and curated from Common Vulnerabilities and Exposures (CVE) records in the public U.S. National Vulnerability Database (NVD). The goal is to support data-driven security research based on source code and source code metrics related to fixes for CVEs in the NVD by providing detailed information at different interlinked levels of abstraction, such as the commit-, file-, and method level, as well as the repository- and CVE level.

    At the initial release, the dataset covers all published CVEs up to 9 June 2021. All open-source projects that were reported in CVE records in the NVD in this time frame and had publicly available git repositories were fetched and considered for the construction of this vulnerability dataset. The dataset is organized as a relational database and covers 5495 vulnerability fixing commits in 1754 open source projects for a total of 5365 CVEs in 180 different Common Weakness Enumeration (CWE) types. The dataset includes the source code before and after the fixes for 18249 files and 50322 functions.

    This repository includes the SQL dump of the dataset, as well as the JSON for the CVEs and XML of the CWEs at the time of collection. The complete process has been documented in the paper "CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from Open-Source Software", which is published in the Proceedings of the 17th International Conference on Predictive Models and Data Analytics in Software Engineering (PROMISE '21). You will find a copy of the paper in the Doc folder.
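
    Because the dataset ships as a SQL dump of a relational database, it can be explored with ordinary SQL once loaded. The sketch below uses an in-memory SQLite stand-in; the table and column names here are illustrative assumptions, not the exact CVEfixes schema, so consult the dump itself for the real layout:

    ```python
    import sqlite3

    # In-memory stand-in mirroring an assumed slice of the schema.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE cve (cve_id TEXT PRIMARY KEY, cwe_id TEXT);
    CREATE TABLE fixes (cve_id TEXT, hash TEXT, repo_url TEXT);
    INSERT INTO cve VALUES ('CVE-2021-0001', 'CWE-79'), ('CVE-2021-0002', 'CWE-79');
    INSERT INTO fixes VALUES
      ('CVE-2021-0001', 'abc123', 'https://example.org/repo-a'),
      ('CVE-2021-0002', 'def456', 'https://example.org/repo-b');
    """)

    # Count fixing commits per CWE type by joining CVEs to their fixes.
    rows = conn.execute("""
    SELECT c.cwe_id, COUNT(*) AS n_fixes
    FROM cve c JOIN fixes f ON f.cve_id = c.cve_id
    GROUP BY c.cwe_id
    """).fetchall()
    print(rows)  # [('CWE-79', 2)]
    ```

    The same join pattern extends across the dataset's interlinked levels (repository, commit, file, method) once the real dump is loaded.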

    Citation and Zenodo links

    Please cite this work by referring to the published paper:

    • Guru Bhandari, Amara Naseer, and Leon Moonen. 2021. CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from Open-Source Software. In Proceedings of the 17th International Conference on Predictive Models and Data Analytics in Software Engineering (PROMISE '21). ACM, 10 pages. https://doi.org/10.1145/3475960.3475985
    @inproceedings{bhandari2021:cvefixes,
      title = {{CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from Open-Source Software}},
      booktitle = {{Proceedings of the 17th International Conference on Predictive Models and Data Analytics in Software Engineering (PROMISE '21)}},
      author = {Bhandari, Guru and Naseer, Amara and Moonen, Leon},
      year = {2021},
      pages = {10},
      publisher = {{ACM}},
      doi = {10.1145/3475960.3475985},
      copyright = {Open Access},
      isbn = {978-1-4503-8680-7},
      language = {en}
    }

    The dataset has been released on Zenodo with DOI:10.5281/zenodo.4476563. The GitHub repository containing the code to automatically collect the dataset can be found at https://github.com/secureIT-project/CVEfixes, released with DOI:10.5281/zenodo.5111494.

  20. Global Power Plant Database - Datasets - Data | World Resources Institute

    • old-datasets.wri.org
    Updated Jun 3, 2021
    + more versions
    Cite
    wri.org (2021). Global Power Plant Database - Datasets - Data | World Resources Institute [Dataset]. https://old-datasets.wri.org/dataset/globalpowerplantdatabase
    Explore at:
    Dataset updated
    Jun 3, 2021
    Dataset provided by
    World Resources Institute: https://www.wri.org/
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Global Power Plant Database is a comprehensive, open source database of power plants around the world. It centralizes power plant data to make it easier to navigate, compare and draw insights for one’s own analysis. The database covers approximately 35,000 power plants from 167 countries and includes thermal plants (e.g. coal, gas, oil, nuclear, biomass, waste, geothermal) and renewables (e.g. hydro, wind, solar). Each power plant is geolocated and entries contain information on plant capacity, generation, ownership, and fuel type. It will be continuously updated as data becomes available.

    The methodology for the dataset creation is given in the World Resources Institute publication "A Global Database of Power Plants". Data updates may occur without associated updates to this manuscript.

    The database can be visualized on Resource Watch together with hundreds of other datasets. The database is available for immediate download and use through the WRI Open Data Portal. Associated code for the creation of the dataset can be found on GitHub. The bleeding-edge version of the database (which may contain substantial differences from the release you are viewing) is available on GitHub as well. To be informed of important database releases in the future, please sign up for our newsletter.


Geo Open - IP address geolocation per country in MMDB format

Dataset updated
Nov 5, 2025
Dataset authored and provided by
Computer Incident Response Center Luxembourg
License

Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
License information was derived automatically

Description

Geo Open is an IP address geolocation per country database in MMDB format. The database can be used as a replacement in software using the MMDB format.

Information about the MMDB format: https://maxmind.github.io/MaxMind-DB/
Open source server using Geo Open: https://github.com/adulau/mmdb-server
Open source library to read MMDB files: https://github.com/maxmind/MaxMind-DB-Reader-python
Historical dataset: https://cra.circl.lu/opendata/geo-open/

The database is automatically generated from public BGP AS announces matching the country code. The precision is at country level.
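
The country-level data model behind such a database can be illustrated with a toy in-memory lookup. This is a sketch using only the Python standard library; the real MMDB file is queried with an MMDB reader such as the MaxMind-DB-Reader-python library linked above, and the prefix-to-country entries below are made up:

```python
import ipaddress

# Hypothetical prefix-to-country entries, as would be derived from BGP announcements.
prefix_to_country = {
    ipaddress.ip_network("194.87.139.0/24"): "NL",
    ipaddress.ip_network("2001:db8::/32"): "LU",  # documentation prefix, made-up mapping
}

def lookup_country(ip: str):
    """Longest-prefix match of an IP against the toy table (country-level only)."""
    addr = ipaddress.ip_address(ip)
    matches = [net for net in prefix_to_country if addr in net]
    if not matches:
        return None
    return prefix_to_country[max(matches, key=lambda net: net.prefixlen)]

print(lookup_country("194.87.139.42"))  # NL
print(lookup_country("8.8.8.8"))        # None (not in the toy table)
```

An MMDB file implements the same longest-prefix match as a binary search tree over the address bits, which is why lookups against it are so fast.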
