69 datasets found
  1. nLDE SPARQL engine: computing diefficiency metrics based on answer traces...

    • springernature.figshare.com
    txt
    Updated May 30, 2023
    Cite
    Maribel Acosta; Maria-Esther Vidal; York Sure-Vetter (2023). nLDE SPARQL engine: computing diefficiency metrics based on answer traces and query processing performance benchmarking [Dataset]. http://doi.org/10.6084/m9.figshare.5255686
    Explore at:
    Available download formats: txt
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Maribel Acosta; Maria-Esther Vidal; York Sure-Vetter
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the results of metric tests performed with the SPARQL query engine nLDE (network of Linked Data Eddies) under different configurations. The queries themselves are available via the nLDE website, and the tests are explained in depth in the associated publication.

    To compute the diefficiency metrics dief@t and dief@k, we need the answer trace produced by the SPARQL query engine when executing queries. Answer traces record the exact point in time at which an engine produces each answer while executing a query. We executed SPARQL queries using three configurations of the nLDE engine: Selective, NotAdaptive, and Random. The resulting answer trace for each query execution is stored in the CSV file nLDEBenchmark1AnswerTrace.csv, structured as follows:
    • query: id of the executed query, e.g. 'Q9.sparql'
    • approach: name of the approach (or engine) used to execute the query
    • tuple: the value i indicates that this row corresponds to the ith answer produced by approach when executing query
    • time: elapsed time (in seconds) from the moment approach started executing query until answer i is produced

    In addition, to compare the performance of the nLDE engine under dief@t and dief@k with conventional metrics from the query processing literature (execution time, time for the first tuple, and number of answers produced), we also measured the performance of the nLDE engine using those conventional metrics. The results are available in the CSV file nLDEBenchmark1Metrics, structured as follows:
    • query: id of the executed query, e.g. 'Q9.sparql'
    • approach: name of the approach (or engine) used to execute the query
    • tfft: time (in seconds) required by approach to produce the first tuple when executing query
    • totaltime: elapsed time (in seconds) from the moment approach started executing query until the last answer of query is produced
    • comp: number of answers produced by approach when executing query
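
    As an illustration of how such an answer trace can be consumed, here is a minimal Python sketch (assuming pandas is available) that computes an area-under-the-curve value up to a time t from nLDEBenchmark1AnswerTrace.csv. It treats the number of answers produced as a step function that jumps at each recorded answer time; the published dief@t definition may interpolate the trace differently, so this is illustrative rather than the authors' reference implementation.

      import pandas as pd

      def dief_at_t(traces: pd.DataFrame, query: str, approach: str, t: float) -> float:
          """Area under the 'answers produced so far vs. time' curve on [0, t].

          Treats the answer count as a step function that increases by one at
          each recorded answer time (a simplification of dief@t).
          """
          trace = traces[(traces["query"] == query) & (traces["approach"] == approach)]
          times = trace["time"].sort_values()
          # Each answer produced at time_i <= t contributes (t - time_i) to the area.
          return float((t - times[times <= t]).sum())

      answer_trace = pd.read_csv("nLDEBenchmark1AnswerTrace.csv")
      print(dief_at_t(answer_trace, "Q9.sparql", "Selective", t=10.0))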

  2. ckanext-htsql

    • catalog.civicdataecosystem.org
    Updated Jun 4, 2025
    Cite
    (2025). ckanext-htsql [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-htsql
    Explore at:
    Dataset updated
    Jun 4, 2025
    Description

    The HTSQL Interface extension for CKAN enhances data access by enabling users to query the CKAN datastore using the HTSQL query language. This provides an alternative to the standard CKAN API, offering more flexibility and expressiveness in data retrieval. By adding a new API endpoint, datastoresearchhtsql, the extension allows complex data manipulations and selections directly within the CKAN environment.

    Key Features:
    • HTSQL Query Endpoint: introduces a dedicated API endpoint (datastoresearchhtsql) to execute HTSQL queries against the datastore.
    • Enhanced Data Retrieval: enables more sophisticated data filtering, aggregation, and transformation than the standard CKAN datastore search API.
    • Datastore Integration: leverages CKAN's datastore functionality, allowing HTSQL queries on data resources stored within CKAN.
    • Simple Installation: installs as a standard CKAN extension through pip and is activated via the CKAN configuration file.

    Technical Integration: the extension integrates with the CKAN datastore by adding the datastoresearchhtsql API endpoint. To use it, install the extension via pip, then add htsql to the ckan.plugins line in the CKAN .ini configuration file; this activates the extension and makes the HTSQL query functionality available (see the sketch below).

    Benefits & Impact: the HTSQL Interface extension gives CKAN users a more powerful and versatile way to query data stored in the datastore. The enhanced query capability can lead to more efficient data analysis, reporting, and application development by reducing the complexity of data requests. By providing an alternative to the standard API, the extension offers greater control and flexibility in extracting insights from data resources.
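
    A minimal sketch of calling such an endpoint, assuming it is exposed like a standard CKAN action under /api/3/action/ and that it accepts the HTSQL expression in a field named htsql; the instance URL, resource id, and payload field names are all assumptions, so consult the extension's documentation for the actual interface.

      import requests

      CKAN_URL = "https://ckan.example.org"   # assumed CKAN instance
      ACTION = "datastoresearchhtsql"         # endpoint name as described above

      # Hypothetical payload: field names are assumptions, not documented API.
      payload = {
          "resource_id": "my-resource-id",            # assumed datastore resource id
          "htsql": "/data{field_a, sum(field_b)}",    # example HTSQL expression
      }

      resp = requests.post(f"{CKAN_URL}/api/3/action/{ACTION}", json=payload, timeout=30)
      resp.raise_for_status()
      print(resp.json().get("result"))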

  3. Physical Properties of Rivers: Querying Metadata and Discharge Data

    • hydroshare.org
    zip
    Updated Jan 29, 2021
    Cite
    Gabriela Garcia; Kateri Salk (2021). Physical Properties of Rivers: Querying Metadata and Discharge Data [Dataset]. https://www.hydroshare.org/resource/20dc4af8451e44b3950b182a8f506296
    Explore at:
    Available download formats: zip (1.7 MB)
    Dataset updated
    Jan 29, 2021
    Dataset provided by
    HydroShare
    Authors
    Gabriela Garcia; Kateri Salk
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Physical Properties of Rivers: Querying Metadata and Discharge Data

    This lesson was adapted from educational material written by Dr. Kateri Salk for her Fall 2019 Hydrologic Data Analysis course at Duke University. This is the second part of a two-part exercise focusing on the physical properties of rivers.

    Introduction

    Rivers are bodies of freshwater flowing from higher elevations to lower elevations due to the force of gravity. One of the most important physical characteristics of a stream or river is discharge, the volume of water moving through the river or stream over a given amount of time. Discharge can be measured directly by measuring the flow velocity at several points in a stream and multiplying it by the stream's cross-sectional area. However, this method is effort-intensive. This exercise will demonstrate how to approximate discharge by developing a rating curve for a stream at a given sampling point. You will also learn to query metadata for, and compare discharge patterns across, climatically different regions of the United States.

    Learning Objectives

    After successfully completing this exercise, you will be able to:

    1. Execute queries to pull a variety of National Water Information System (NWIS) and Water Quality Portal (WQP) data into R (see the sketch after this list).
    2. Analyze seasonal and interannual characteristics of stream discharge and compare discharge patterns in different regions of the United States.
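
    The lesson itself pulls these data into R (for example with the USGS dataRetrieval package). As a language-agnostic illustration of the same kind of query, the sketch below requests daily discharge values from the public NWIS daily-values web service in Python; the site number and date range are arbitrary examples rather than values from the lesson.

      import requests

      params = {
          "format": "json",
          "sites": "02085070",       # example USGS gauge ID (not from the lesson)
          "parameterCd": "00060",    # 00060 = discharge, cubic feet per second
          "startDT": "2019-01-01",
          "endDT": "2019-12-31",
      }
      resp = requests.get("https://waterservices.usgs.gov/nwis/dv/", params=params, timeout=60)
      resp.raise_for_status()

      # The response follows the WaterML-style JSON layout used by NWIS.
      series = resp.json()["value"]["timeSeries"][0]["values"][0]["value"]
      daily_discharge = {row["dateTime"]: float(row["value"]) for row in series}
      print(len(daily_discharge), "daily discharge values retrieved")
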
  4. ckanext-sql

    • catalog.civicdataecosystem.org
    Updated Jun 4, 2025
    Cite
    (2025). ckanext-sql [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-sql
    Explore at:
    Dataset updated
    Jun 4, 2025
    Description

    Due to the absence of a README file in the provided GitHub repository for ckanext-sql, a comprehensive understanding of its features, integration, and benefits is unfortunately not available. Typically, an extension named 'sql' would bridge CKAN with SQL databases, potentially enabling users to query and interact with datasets stored in SQL-compatible databases directly from within CKAN. Lacking specific documentation, however, definitive claims about its capabilities cannot be made.

    Potential Key Features (based on the name and typical use cases):
    • SQL Query Interface: the extension might offer an interface within CKAN to run SQL queries against linked datasets.
    • Data Visualization from SQL: it could potentially generate visualizations directly from data retrieved via SQL queries.
    • SQL Data Import: it might provide functionality to import data from SQL databases into CKAN datasets.
    • Federated Queries: it might implement federated queries across datasets stored as CKAN resources and external databases.
    • SQL Data Export: it might offer the ability to export CKAN data to a SQL database.
    • SQL-based Resource Views: speculatively, it might add resource views showing data retrieved via SQL.

    Potential Use Cases (based on the name):
    1. Direct Data Analysis: data analysts might use it to query and analyze data stored in SQL databases via CKAN, skipping manual data imports.
    2. Database Integration: organizations that already maintain large databases could use the extension to provide easier access to that data through a CKAN portal.

    Technical Integration (hypothetical): given the name, the 'sql' extension likely integrates with CKAN by adding new API endpoints or UI elements that let users specify SQL connections and queries. It would probably require configuration settings to define database connection parameters, and might also integrate with CKAN's resource view system to enable custom visualizations.

    Potential Benefits & Impact (speculative): if the extension functions as its name suggests, it would offer direct access to SQL data within the CKAN environment, reduce the need for data duplication (by querying directly rather than importing), and potentially enhance data analysis and visualization capabilities. It could become an indispensable part of data-analytic workflows involving CKAN. However, due to the lack of a README.md, this analysis remains at a theoretical level.

  5. 250 Feasible Queries executed against 400M Triple DBpedia set with 16...

    • figshare.com
    application/gzip
    Updated Jul 9, 2016
    Cite
    Felix Conrads (2016). 250 Feasible Queries executed against 400M Triple DBpedia set with 16 querying and 16 update users [Dataset]. http://doi.org/10.6084/m9.figshare.3475463.v1
    Explore at:
    Available download formats: application/gzip
    Dataset updated
    Jul 9, 2016
    Dataset provided by
    figshare
    Authors
    Felix Conrads
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Benchmark execution with:
    • 250 feasible queries
    • 400M triple DBpedia dataset
    • 16 querying users
    • 16 update users
    • 250 changesets

  6. Dataset for common wheat (Triticum aestivum L.) grain and flour...

    • entrepot.recherche.data.gouv.fr
    bin
    Updated Feb 28, 2025
    Cite
    Patrice Buche; Patrice Buche (2025). Dataset for common wheat (Triticum aestivum L.) grain and flour characterization using classical and advanced analyses: raw and calculated analytical data SPARQL queries [Dataset]. http://doi.org/10.57745/EBBGE8
    Explore at:
    Available download formats: bin(4140), bin(3180), bin(13574), bin(9814), bin(13787), bin(3197), bin(9438), bin(2909), bin(9509), bin(3352), bin(7992), bin(4335), bin(2896), bin(5204), bin(6577), bin(4926), bin(11551), bin(7583), bin(3698), bin(10908), bin(10776), bin(6126), bin(8147), bin(10851), bin(3116), bin(3651), bin(9510), bin(3753)
    Dataset updated
    Feb 28, 2025
    Dataset provided by
    Recherche Data Gouv
    Authors
    Patrice Buche; Patrice Buche
    License

    etalab-2.0: https://spdx.org/licenses/etalab-2.0.html

    Dataset funded by
    Agence nationale de la recherche
    Description

    This dataset is composed of the 28 SPARQL queries executed to generate the measurement tables included in the companion dataset that contains the data tables resulting from the query executions. Query files and result files share the same name and differ only by their extension. For example, CWG_reception_fallingNumber_raw.sparql is the file containing the SPARQL query executed to obtain the table included in the file CWG_reception_fallingNumber_raw.tsv.

  7. EXEC Tribal Consultation Memorandums

    • splitgraph.com
    Updated Jul 5, 2022
    Cite
    internal-open-piercecountywa-gov (2022). EXEC Tribal Consultation Memorandums [Dataset]. https://www.splitgraph.com/internal-open-piercecountywa-gov/exec-tribal-consultation-memorandums-khiz-822q
    Explore at:
    Available download formats: json, application/vnd.splitgraph.image, application/openapi+json
    Dataset updated
    Jul 5, 2022
    Authors
    internal-open-piercecountywa-gov
    Description

    This dataset provides an annual count of Pierce County's tribal consultation memorandums.

    Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:
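
    A stand-in sketch is shown below: the HTTP endpoint URL and the table name are assumptions about Splitgraph's SQL-over-HTTP interface and this repository's layout, so treat them as placeholders and check the Splitgraph documentation for the authoritative query form.

      import requests

      DDN_URL = "https://data.splitgraph.com/sql/query/ddn"   # assumed endpoint
      sql = '''
          SELECT *
          FROM "internal-open-piercecountywa-gov/exec-tribal-consultation-memorandums-khiz-822q"."exec_tribal_consultation_memorandums"  -- assumed table name
          LIMIT 10
      '''

      resp = requests.post(DDN_URL, json={"sql": sql}, timeout=60)
      resp.raise_for_status()
      print(resp.json())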

    See the Splitgraph documentation for more information.

  8. Artifact for InsightQL: Advancing Human-Assisted Fuzzing with a Unified Code...

    • zenodo.org
    zip
    Updated May 31, 2025
    Cite
    Anonymous Anonymous; Anonymous Anonymous (2025). Artifact for InsightQL: Advancing Human-Assisted Fuzzing with a Unified Code Database and Parameterized Query Interface [Dataset]. http://doi.org/10.5281/zenodo.15561719
    Explore at:
    Available download formats: zip
    Dataset updated
    May 31, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Anonymous Anonymous; Anonymous Anonymous
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description
    • Artifact for InsightQL: Advancing Human-Assisted Fuzzing with a Unified Code Database and Parameterized Query Interface
    • Contents
      • Extend-vscode-codeql.zip contains a VS Code plugin implementation that is currently not runnable because some hyperlinks have been anonymised.
      • Code_database_backend.zip contains the implementation of helper functions for the VS Code plugin.
      • Artifact_Query.zip contains the queries used in the evaluation section of the paper. Some of these queries are "debug" versions that can be run without our VS Code plugin; others require interaction with the plugin to run.
    • How everything is connected:
      • Build extend-vscode-codeql as a VS Code plugin in the VS Code extension development host (explained in the README in extend-vscode-codeql).
      • To interact with the actions described in the paper, clone the code database backend to the location where the VS Code plugin expects it and build it. Stand-alone usage of the backend is described in the README of code_database_backend.

  9. DBPSBv2 execution on 16 querying user

    • figshare.com
    pdf
    Updated Jul 10, 2016
    Cite
    Felix Conrads (2016). DBPSBv2 execution on 16 querying user [Dataset]. http://doi.org/10.6084/m9.figshare.3474950.v4
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jul 10, 2016
    Dataset provided by
    figshare
    Authors
    Felix Conrads
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Collection of raw results, figures, and the gnumeric file behind the figures for the execution of the DBPSBv2 queries with IGUANA on a 400M triple DBpedia dataset.
    • results_again.zip: raw results
    • results_dbpsb.gnumeric: gnumeric file used to create the figures
    • dbpsb_16_qmph.pdf: figure for 16 querying users, comparing 0 and 1 update users (metric: Query Mixes per Hour)
    • dbpsb_16_no-of-queries.pdf: figure for 16 querying users, comparing 0 and 1 update users (metric: number of queries per hour)
    • dbpsb_16-1_qps.pdf: figure for 16 querying users and 1 update user (metric: Queries per Second)
    • dbpsb_16-0_qps.pdf: figure for 16 querying users and no update user (metric: Queries per Second)

  10. ckanext-graphql

    • catalog.civicdataecosystem.org
    Updated Jun 4, 2025
    Cite
    (2025). ckanext-graphql [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-graphql
    Explore at:
    Dataset updated
    Jun 4, 2025
    Description

    The graphql extension for CKAN introduces a GraphQL API endpoint, providing an alternative method to query CKAN data in addition to the existing Action API. Designed for CKAN instances running version 2.7 or later, this extension allows users to retrieve information about datasets, groups, and organizations using GraphQL queries. While still under development, it aims to offer a flexible way to access and manipulate CKAN data.

    Key Features:
    • GraphQL Endpoint: provides a /graphql endpoint on the CKAN instance to execute GraphQL queries.
    • GraphiQL Integration: includes GraphiQL, an in-browser IDE, for composing and testing GraphQL queries directly within the CKAN interface.
    • Package Querying: allows querying of packages, including related groups and organizations.
    • Search Functionality: supports searching for packages and groups based on specific terms.
    • Extensible Schema: intends to offer an interface for other CKAN extensions to extend or customize the GraphQL schema (e.g. to add custom models).
    • Support for Mutations (future): plans to include the ability to modify data using GraphQL mutations.

    Technical Integration: the extension integrates with CKAN by adding a new plugin that exposes the GraphQL endpoint. Enabling the graphql plugin in the CKAN configuration file (.ini) makes the endpoint available. Schema customization options are intended so that other extensions can add their own models via configuration (see the sketch below).

    Benefits & Impact: the graphql extension can simplify data retrieval from CKAN by providing a standardized, queryable GraphQL interface. This allows users to request only the specific data they need, reducing data transfer and improving performance. The extension aims to become a viable alternative to the CKAN Action API for data querying.
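
    A minimal sketch of querying the endpoint: the /graphql path comes from the description above, but the field names in the query (packages, name, title) are hypothetical; the bundled GraphiQL IDE is the place to check the extension's real schema.

      import requests

      CKAN_URL = "https://ckan.example.org"   # assumed CKAN instance

      # Hypothetical query: the field names are assumptions, not the documented schema.
      query = """
      {
        packages {
          name
          title
        }
      }
      """

      resp = requests.post(f"{CKAN_URL}/graphql", json={"query": query}, timeout=30)
      resp.raise_for_status()
      print(resp.json())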

  11. zipcode

    • splitgraph.com
    Updated Sep 4, 2024
    Cite
    roseville-ca-us (2024). zipcode [Dataset]. https://www.splitgraph.com/roseville-ca-us/zipcode-x8uc-ktyh/
    Explore at:
    Available download formats: application/vnd.splitgraph.image, json, application/openapi+json
    Dataset updated
    Sep 4, 2024
    Authors
    roseville-ca-us
    Description

    Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:

    See the Splitgraph documentation for more information.

  12. IT- Survey Perf

    • splitgraph.com
    Updated Jun 5, 2018
    Cite
    performance-cityofcamas-us (2018). IT- Survey Perf [Dataset]. https://www.splitgraph.com/performance-cityofcamas-us/it-survey-perf-56sp-fesq/
    Explore at:
    Available download formats: json, application/vnd.splitgraph.image, application/openapi+json
    Dataset updated
    Jun 5, 2018
    Authors
    performance-cityofcamas-us
    Description

    Survey results in key IT areas

    Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:

    See the Splitgraph documentation for more information.

  13. FY13 % of Completed Procurements on Target

    • splitgraph.com
    Updated Aug 9, 2024
    Cite
    performance-archive-cookcountyil-gov (2024). FY13 % of Completed Procurements on Target [Dataset]. https://www.splitgraph.com/performance-archive-cookcountyil-gov/fy13-of-completed-procurements-on-target-qded-jiv4/
    Explore at:
    Available download formats: application/openapi+json, application/vnd.splitgraph.image, json
    Dataset updated
    Aug 9, 2024
    Authors
    performance-archive-cookcountyil-gov
    Description

    Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:

    See the Splitgraph documentation for more information.

  14. Validation queries for the paper OKG-Soft: An Open Knowledge Graph with...

    • figshare.com
    txt
    Updated Aug 5, 2019
    Cite
    Daniel Garijo (2019). Validation queries for the paper OKG-Soft: An Open Knowledge Graph with Machine Readable Scientific Software Metadata [Dataset]. http://doi.org/10.6084/m9.figshare.9249311.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Aug 5, 2019
    Dataset provided by
    figshare
    Authors
    Daniel Garijo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Validation queries for the paper OKG-Soft: An Open Knowledge Graph with Machine Readable Scientific Software Metadata. The queries are shown in SPARQL along with the results obtained when they were executed. Note that the vocabularies and knowledge graph may have changed slightly since these queries were executed, so small modifications may be needed to run them.

  15. Board of Review - Tax Year Statistics

    • splitgraph.com
    Updated Aug 27, 2024
    Cite
    performance-archive-cookcountyil-gov (2024). Board of Review - Tax Year Statistics [Dataset]. https://www.splitgraph.com/performance-archive-cookcountyil-gov/board-of-review-tax-year-statistics-i3ac-yhf6/
    Explore at:
    Available download formats: application/vnd.splitgraph.image, application/openapi+json, json
    Dataset updated
    Aug 27, 2024
    Authors
    performance-archive-cookcountyil-gov
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:

    See the Splitgraph documentation for more information.

  16. 1100B_Board of Supervisors

    • splitgraph.com
    Updated Mar 14, 2014
    Cite
    performance-smcgov (2014). 1100B_Board of Supervisors [Dataset]. https://www.splitgraph.com/performance-smcgov/1100bboard-of-supervisors-au38-fwq8/
    Explore at:
    Available download formats: json, application/vnd.splitgraph.image, application/openapi+json
    Dataset updated
    Mar 14, 2014
    Authors
    performance-smcgov
    Description

    Performance measures dataset

    Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:

    See the Splitgraph documentation for more information.

  17. Budget Revenues for Open Budget App

    • splitgraph.com
    Updated Jun 24, 2024
    Cite
    roseville-ca-us (2024). Budget Revenues for Open Budget App [Dataset]. https://www.splitgraph.com/roseville-ca-us/budget-revenues-for-open-budget-app-f9pd-npvm/
    Explore at:
    Available download formats: application/openapi+json, json, application/vnd.splitgraph.image
    Dataset updated
    Jun 24, 2024
    Authors
    roseville-ca-us
    Description

    This dataset feeds the view that powers the Open Budget app.

    Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:

    See the Splitgraph documentation for more information.

  18. 1200B_County Manager/Clerk of the Board

    • splitgraph.com
    Updated Oct 6, 2015
    Cite
    performance-smcgov (2015). 1200B_County Manager/Clerk of the Board [Dataset]. https://www.splitgraph.com/performance-smcgov/1200bcounty-managerclerk-of-the-board-33q2-gcck/
    Explore at:
    Available download formats: application/vnd.splitgraph.image, json, application/openapi+json
    Dataset updated
    Oct 6, 2015
    Authors
    performance-smcgov
    Description

    Performance measures dataset

    Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:

    See the Splitgraph documentation for more information.

  19. DAILY SeeClickFix Questions

    • splitgraph.com
    Updated Aug 16, 2024
    Cite
    stat-stpete (2024). DAILY SeeClickFix Questions [Dataset]. https://www.splitgraph.com/stat-stpete/daily-seeclickfix-questions-hahi-d9te/
    Explore at:
    Available download formats: application/openapi+json, json, application/vnd.splitgraph.image
    Dataset updated
    Aug 16, 2024
    Authors
    stat-stpete
    Description

    This data set contains the questions associated with each SeeClickFix request. Each line is a question within a request; a single request may therefore span multiple lines, one per question asked within the request.

    Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:

    See the Splitgraph documentation for more information.

  20. Asset Inventory

    • splitgraph.com
    Updated Jul 23, 2020
    Cite
    uspto-data-commerce-gov (2020). Asset Inventory [Dataset]. https://www.splitgraph.com/uspto-data-commerce-gov/asset-inventory-amfr-qw3g/
    Explore at:
    Available download formats: application/openapi+json, json, application/vnd.splitgraph.image
    Dataset updated
    Jul 23, 2020
    Authors
    uspto-data-commerce-gov
    Description

    This dataset is a complete inventory of all assets on this site and any assets sourced from other sites, if applicable.

    Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:

    See the Splitgraph documentation for more information.
