71 datasets found
  1. rag_instruct_benchmark_tester

    • huggingface.co
    • opendatalab.com
    Cite
    llmware, rag_instruct_benchmark_tester [Dataset]. https://huggingface.co/datasets/llmware/rag_instruct_benchmark_tester
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset authored and provided by
    llmware
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    Dataset Card for RAG-Instruct-Benchmark-Tester

      Dataset Summary
    

    This is an updated benchmarking test dataset for "retrieval augmented generation" (RAG) use cases in the enterprise, especially for financial services and legal. This test dataset includes 200 questions with context passages pulled from common 'retrieval scenarios', e.g., financial news, earnings releases, contracts, invoices, technical articles, general news and short texts.
    The questions are segmented… See the full description on the dataset page: https://huggingface.co/datasets/llmware/rag_instruct_benchmark_tester.
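
    The dataset is hosted on the Hugging Face Hub, so a minimal loading sketch with the datasets library could look like the following; the "train" split name is an assumption of this sketch, and the record fields should be inspected rather than assumed.

        # Minimal sketch: load the benchmark from the Hugging Face Hub.
        # Assumption: a single "train" split holds the 200 questions with their
        # context passages; inspect a record to see the actual field names.
        from datasets import load_dataset

        ds = load_dataset("llmware/rag_instruct_benchmark_tester", split="train")
        print(len(ds))  # expected to report the 200 benchmark questions
        print(ds[0])    # one record: question, context passage, and related fields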

  2. SciRAG-QA: Multi-domain Closed-Question Benchmark Dataset for Scientific QA

    • explore.openaire.eu
    • zenodo.org
    Updated Dec 11, 2024
    Cite
    Mahira Ibnath Joytu; Md Raisul Kibria; Sébastien Lafond (2024). SciRAG-QA: Multi-domain Closed-Question Benchmark Dataset for Scientific QA [Dataset]. http://doi.org/10.5281/zenodo.14390011
    Explore at:
    Dataset updated
    Dec 11, 2024
    Authors
    Mahira Ibnath Joytu; Md Raisul Kibria; Sébastien Lafond
    Description

    In recent times, one of the most impactful applications of the growing capabilities of Large Language Models (LLMs) has been their use in Retrieval-Augmented Generation (RAG) systems. RAG applications are inherently more robust against LLM hallucinations and provide source traceability, which holds critical importance in the scientific reading and writing process. However, validating such systems is essential due to the stringent systematic requirements of the scientific domain. Existing benchmark datasets are limited in the scope of research areas they cover, often focusing on the natural sciences, which restricts their applicability and validation across other scientific fields. To address this gap, we present a closed-question answering (QA) dataset for benchmarking scientific RAG applications. This dataset spans 34 research topics across 10 distinct areas of study. It includes 108 manually curated question-answer pairs, each annotated with answer type, difficulty level, and a gold reference along with a link to the source paper. Further details on each of these attributes can be found in the accompanying README.md file.
    Please cite the following publication when using the dataset: TBD
    The publication is available at: TBD
    A preprint version of the publication is available at: TBD
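
    Purely as an illustration of the per-record annotations described above, a record could be represented as follows; the field names are assumptions of this sketch, and the authoritative schema is given in the dataset's README.md.

        # Illustrative structure of one SciRAG-QA closed-question record.
        # Field names are assumed; see the dataset's README.md for the real schema.
        from dataclasses import dataclass

        @dataclass
        class SciRagQARecord:
            question: str        # closed question on one of the 34 research topics
            answer: str          # curated answer
            answer_type: str     # annotation: type of the expected answer
            difficulty: str      # annotation: difficulty level
            gold_reference: str  # gold reference supporting the answer
            source_paper: str    # link to the source paper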

  3. German-RAG-LLM-EASY-BENCHMARK

    • huggingface.co
    Updated Feb 6, 2025
    Cite
    Avemio AG (2025). German-RAG-LLM-EASY-BENCHMARK [Dataset]. https://huggingface.co/datasets/avemio/German-RAG-LLM-EASY-BENCHMARK
    Explore at:
    Croissant
    Dataset updated
    Feb 6, 2025
    Dataset authored and provided by
    Avemio AG
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    German-RAG-LLM-EASY-BENCHMARK

      German-RAG - German Retrieval Augmented Generation

      Dataset Summary
    

    This German-RAG-LLM-BENCHMARK represents a specialized collection for evaluating language models, with a focus on source citation and stating time differences in RAG-specific tasks. To evaluate models compatible with OpenAI endpoints, you can refer to our GitHub repo: https://github.com/avemio-digital/German-RAG-LLM-EASY-BENCHMARK/ Most of the subsets are synthetically… See the full description on the dataset page: https://huggingface.co/datasets/avemio/German-RAG-LLM-EASY-BENCHMARK.

  4. nexa-rag-benchmark

    • huggingface.co
    Updated Dec 3, 2014
    Cite
    zhanx (2014). nexa-rag-benchmark [Dataset]. https://huggingface.co/datasets/zhanxxx/nexa-rag-benchmark
    Explore at:
    Croissant
    Dataset updated
    Dec 3, 2014
    Authors
    zhanx
    Description

    nexa-rag-benchmark

    The Nexa RAG Benchmark dataset is designed for evaluating Retrieval-Augmented Generation (RAG) models across multiple question-answering benchmarks. It includes a variety of datasets covering different domains. For evaluation, you can use the Nexa RAG Benchmark repository on GitHub.

      Dataset Structure
    

    This benchmark integrates multiple datasets suitable for evaluating RAG performance. You can choose datasets based on context size, number of examples, or… See the full description on the dataset page: https://huggingface.co/datasets/zhanxxx/nexa-rag-benchmark.

  5. RAG-RewardBench

    • huggingface.co
    Updated Dec 18, 2024
    Cite
    Zhuoran Jin (2024). RAG-RewardBench [Dataset]. https://huggingface.co/datasets/jinzhuoran/RAG-RewardBench
    Explore at:
    Croissant
    Dataset updated
    Dec 18, 2024
    Authors
    Zhuoran Jin
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    This repository contains the data presented in RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment. Code: https://github.com/jinzhuoran/RAG-RewardBench/

  6. rag-benchmark-dataset

    • huggingface.co
    Cite
    Onepane.ai, rag-benchmark-dataset [Dataset]. https://huggingface.co/datasets/onepaneai/rag-benchmark-dataset
    Explore at:
    Croissant
    Dataset provided by
    Onepane.ai
    Description

    onepaneai/rag-benchmark-dataset dataset hosted on Hugging Face and contributed by the HF Datasets community

  7. silma-rag-qa-benchmark-v1.0

    • huggingface.co
    Updated May 13, 2025
    Cite
    SILMA AI - Arabic Language Models (2025). silma-rag-qa-benchmark-v1.0 [Dataset]. https://huggingface.co/datasets/silma-ai/silma-rag-qa-benchmark-v1.0
    Explore at:
    Croissant
    Dataset updated
    May 13, 2025
    Dataset authored and provided by
    SILMA AI - Arabic Language Models
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    SILMA RAGQA Benchmark Dataset V1.0

    SILMA RAGQA is a dataset and benchmark created by silma.ai to assess the effectiveness of Arabic Language Models in Extractive Question Answering tasks, with a specific emphasis on RAG applications. The benchmark includes 17 bilingual datasets in Arabic and English, spanning various domains.

      What capabilities does the benchmark test?
    

    • General Arabic and English QA capabilities
    • Ability to handle short and long contexts
    • Ability to… See the full description on the dataset page: https://huggingface.co/datasets/silma-ai/silma-rag-qa-benchmark-v1.0.

  8. Replication Data for: Advanced System Integration: Analyzing OpenAPI Chunking for Retrieval-Augmented Generation

    • darus.uni-stuttgart.de
    Updated Dec 9, 2024
    Cite
    Robin D. Pesl; Jerin George Mathew; Massimo Mecella; Marco Aiello (2024). Replication Data for: Advanced System Integration: Analyzing OpenAPI Chunking for Retrieval-Augmented Generation [Dataset]. http://doi.org/10.18419/DARUS-4605
    Explore at:
    Croissant
    Dataset updated
    Dec 9, 2024
    Dataset provided by
    DaRUS
    Authors
    Robin D. Pesl; Jerin George Mathew; Massimo Mecella; Marco Aiello
    License

    https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18419/DARUS-4605

    Dataset funded by
    BMWK
    MWK
    Description

    Integrating multiple (sub-)systems is essential to create advanced Information Systems (ISs). Difficulties mainly arise when integrating dynamic environments across the IS lifecycle, e.g., services not yet existent at design time. A traditional approach is a registry that provides the API documentation of the systems' endpoints. Large Language Models (LLMs) have been shown to be capable of automatically creating system integrations (e.g., as service composition) based on this documentation but require concise input due to input token limitations, especially regarding comprehensive API descriptions. Currently, it is unknown how best to preprocess these API descriptions. Within this work, we (i) analyze the usage of Retrieval Augmented Generation (RAG) for endpoint discovery and the chunking, i.e., preprocessing, of state-of-practice OpenAPIs to reduce the input token length while preserving the most relevant information. To further reduce the input token length for the composition prompt and improve endpoint retrieval, we propose (ii) a Discovery Agent that only receives a summary of the most relevant endpoints and retrieves specification details on demand. We evaluate RAG for endpoint discovery using the RestBench benchmark, first for the different chunking possibilities and parameters, measuring the endpoint retrieval recall, precision, and F1 score. Then, we assess the Discovery Agent using the same test set. With our prototype, we demonstrate how to successfully employ RAG for endpoint discovery to reduce token count. While revealing high values for recall, precision, and F1, further research is necessary to retrieve all requisite endpoints. Our experiments show that for preprocessing, LLM-based and format-specific approaches outperform naïve chunking methods. Relying on an agent further enhances these results as the agent splits the tasks into multiple fine-granular subtasks, improving the overall RAG performance in the token count, precision, and F1 score.
    Content:
    • code.zip: Python source code to perform the experiments.
      • evaluate.py: Script to execute the experiments (uncomment lines to select the embedding model).
      • socrag/*: Source code for the RAG.
      • benchmark/*: RestBench specification.
    • results.zip: Results of the RAG experiments (in the folder /results/data/ inside the zip file).
      • Experiment results for the RAG: results_{embedding_model}_{top-k}.json.
      • Experiment results for the Discovery Agent: results_{embedding_model}_{agent}_{refinement}_{llm}.json.
      • FAISS store (intermediate data required for exact reproduction of results; one folder for each embedding model): bge_small, nvidia and oai.
      • Intermediate data of the LLM-based refinement methods required for the exact reproduction of results: *_parser.json.
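
    The endpoint-retrieval metrics named above (recall, precision, F1) follow their standard set-based definitions; a small illustrative sketch, not code taken from code.zip:

        # Standard retrieval metrics over sets of endpoints (illustrative only;
        # not extracted from the replication package).
        def endpoint_retrieval_scores(retrieved: set, relevant: set) -> dict:
            hits = len(retrieved & relevant)
            precision = hits / len(retrieved) if retrieved else 0.0
            recall = hits / len(relevant) if relevant else 0.0
            f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
            return {"precision": precision, "recall": recall, "f1": f1}

        # Hypothetical example: two of three retrieved endpoints are relevant.
        print(endpoint_retrieval_scores(
            retrieved={"GET /pets", "POST /pets", "GET /owners"},
            relevant={"GET /pets", "GET /owners", "DELETE /pets/{id}"},
        ))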

  9. WixQA

    • huggingface.co
    Updated Dec 2, 2024
    Cite
    Wix (2024). WixQA [Dataset]. https://huggingface.co/datasets/Wix/WixQA
    Explore at:
    Dataset updated
    Dec 2, 2024
    Dataset provided by
    Wix.com (http://wix.com/)
    Authors
    Wix
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    WixQA: Enterprise RAG Question-Answering Benchmark

    📄 Full Paper Available: For comprehensive details on dataset design, methodology, evaluation results, and analysis, please see our complete research paper: WixQA: A Multi-Dataset Benchmark for Enterprise Retrieval-Augmented Generation, Cohen et al. (2025), arXiv:2505.08643

      Dataset Summary
    

    WixQA is a three-config collection for evaluating and training Retrieval-Augmented Generation (RAG) systems in enterprise… See the full description on the dataset page: https://huggingface.co/datasets/Wix/WixQA.

  10. 350M Model

    • figshare.com
    json
    Updated May 23, 2025
    Cite
    Pavel Chizhov (2025). 350M Model [Dataset]. http://doi.org/10.6084/m9.figshare.29135096.v1
    Explore at:
    Available download formats: json
    Dataset updated
    May 23, 2025
    Dataset provided by
    figshare
    Authors
    Pavel Chizhov
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    350M Model

    RAG-350M is a 350-million-parameter Small Reasoning Model, trained for retrieval-augmented generation (RAG), search and source summarization. Along with RAG-1B it belongs to our family of specialized reasoning models. RAG-350M outperforms most SLMs (4 billion parameters and below) on standardized benchmarks for retrieval-augmented generation (HotPotQA, 2wiki) and is a highly cost-effective alternative to popular larger models, including Qwen-2.5-7B, Llama-3.1-8B and Gemma-3-4B. It is the only SLM to date to maintain consistent RAG performance across leading European languages and to ensure systematic reference grounding for statements. Due to its size, ease of deployment on constrained infrastructure (including mobile phones) and built-in support for factual and accurate information, RAG-350M unlocks a range of new use cases for generative AI.

      Features

    RAG-350M is a specialized language model using a series of special tokens to process a structured input (query and sources) and generate a structured output (reasoning sequence and answer with sources). For easier implementation, we encourage using the associated API library.

      Citation support

    RAG-350M natively generates grounded answers on the basis of excerpts and citations extracted from the provided sources, using a custom syntax inspired by Wikipedia. It is one of a handful of open-weights models to date to have been developed with this feature and the first one designed for actual deployment. In contrast with Anthropic's approach (Citation mode), citations are generated entirely by the model and are not the product of external chunking. As a result we can provide another desirable feature to simplify source checking: citation shortening for longer excerpts (using "(…)").

      RAG reasoning

    RAG-350M generates a specific reasoning sequence incorporating several proto-agentic abilities for RAG applications. The model is able to make a series of decisions directly:
    • Assessing whether the query is understandable.
    • Assessing whether the query is trivial enough to not require a lengthy pre-analysis (adjustable reasoning).
    • Assessing whether the sources contain enough input to generate a grounded answer.
    The structured reasoning trace includes the following steps:
    • Language detection of the query. The model will always strive to answer in the language of the original query.
    • Query analysis and associated query report. The analysis can lead to a standard answer, a shortened reasoning trace/answer for trivial questions, a reformulated query, or a refusal (which could, in the context of the application, be transformed into a request for user input).
    • Source analysis and associated source report. This step evaluates the coverage and depth of the provided sources with regard to the query.
    • Draft of the final answer.

      Multilinguality

    RAG-350M is able to read and write in the main European languages: French, German, Italian, Spanish and, to a lesser extent, Polish, Latin and Portuguese. To date, it is the only small language model with negligible loss of performance in leading European languages for RAG-related tasks. On a translated set of HotPotQA we observed a significant drop of performance in most SLMs, from 10% to 30-35% for sub-1B models. We expect the results of any standard English evaluation of our RAG models to be largely transferable to the main European languages, limiting the costs of evaluation and deployment in multilingual settings.

      Training

    RAG-350M is trained on a large synthetic dataset emulating retrieval of a wide variety of multilingual open sources from Common Corpus. They provide native support for citation and grounding with literal quotes. Following the latest trends of agentification, the model reintegrates multiple features associated with RAG workflows such as query routing, query reformulation and source reranking.

      Evaluation

    RAG-350M was evaluated on three standard RAG benchmarks: 2wiki, HotpotQA and MuSique. All the benchmarks only assess the "trivial" mode on questions requiring some form of multi-hop reasoning over sources (answers disseminated across different sources) as well as discrimination of distractor sources. RAG-350M is not simply a cost-effective version of larger models. We found it has been able to answer correctly several hundred questions from HotPotQA that neither Llama-3-8b nor Qwen-2.5-7b could solve. Consequently we encourage its use as part of multi-model RAG systems.
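
    Purely as an illustration of the reasoning-trace stages listed above (not the model's actual special-token output format, which is documented with the model and its API library), the trace could be represented as:

        # Illustrative container mirroring the described reasoning-trace steps.
        # This is not RAG-350M's special-token syntax.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class RagReasoningTrace:
            query_language: str          # language detection of the query
            query_report: str            # query analysis (answer / reformulation / refusal)
            source_report: str           # coverage and depth of the provided sources
            draft_answer: Optional[str]  # draft of the final answer, if sources suffice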

  11. REAL-MM-RAG_FinSlides

    • huggingface.co
    Updated Mar 13, 2025
    Cite
    IBM Research (2025). REAL-MM-RAG_FinSlides [Dataset]. https://huggingface.co/datasets/ibm-research/REAL-MM-RAG_FinSlides
    Explore at:
    Croissant
    Dataset updated
    Mar 13, 2025
    Dataset provided by
    IBM (http://ibm.com/)
    IBM Research
    Authors
    IBM Research
    License

    CDLA Permissive 2.0 (https://choosealicense.com/licenses/cdla-permissive-2.0/)

    Description

    REAL-MM-RAG-Bench: A Real-World Multi-Modal Retrieval Benchmark

    We introduced REAL-MM-RAG-Bench, a real-world multi-modal retrieval benchmark designed to evaluate retrieval models in reliable, challenging, and realistic settings. The benchmark was constructed using an automated pipeline, where queries were generated by a vision-language model (VLM), filtered by a large language model (LLM), and rephrased by an LLM to ensure high-quality retrieval evaluation. To simulate real-world… See the full description on the dataset page: https://huggingface.co/datasets/ibm-research/REAL-MM-RAG_FinSlides.

  12. BEIR Dataset

    • paperswithcode.com
    • library.toponeai.link
    Cite
    Nandan Thakur; Nils Reimers; Andreas Rücklé; Abhishek Srivastava; Iryna Gurevych, BEIR Dataset [Dataset]. https://paperswithcode.com/dataset/beir
    Explore at:
    Authors
    Nandan Thakur; Nils Reimers; Andreas Rücklé; Abhishek Srivastava; Iryna Gurevych
    Description

    BEIR (Benchmarking IR) is a heterogeneous benchmark containing different information retrieval (IR) tasks. Through BEIR, it is possible to systematically study the zero-shot generalization capabilities of multiple neural retrieval approaches.

    The benchmark contains a total of 9 information retrieval tasks (Fact Checking, Citation Prediction, Duplicate Question Retrieval, Argument Retrieval, News Retrieval, Question Answering, Tweet Retrieval, Biomedical IR, Entity Retrieval) from 19 different datasets:

    MS MARCO, TREC-COVID, NFCorpus, BioASQ, Natural Questions, HotpotQA, FiQA-2018, Signal-1M, TREC-News, ArguAna, Touche 2020, CQADupStack, Quora Question Pairs, DBPedia, SciDocs, FEVER, Climate-FEVER, SciFact, Robust04.
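
    A zero-shot evaluation run typically starts by loading one of these datasets with the beir Python package; a minimal sketch following the pattern in the BEIR repository's examples (the package usage and download URL are assumptions of this sketch, not part of this listing):

        # Download a BEIR dataset and load its corpus, queries, and relevance
        # judgements for zero-shot retrieval evaluation.
        from beir import util
        from beir.datasets.data_loader import GenericDataLoader

        dataset = "scifact"  # any of the 19 BEIR datasets listed above
        url = f"https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{dataset}.zip"
        data_path = util.download_and_unzip(url, "datasets")

        corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
        print(len(corpus), len(queries))  # number of documents and test queries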

  13. Supplementary file 1_Swedish Medical LLM Benchmark: development and evaluation of a framework for assessing large language models in the Swedish medical domain

    • frontiersin.figshare.com
    pdf
    Updated Jul 11, 2025
    Cite
    Birger Moëll; Fabian Farestam; Jonas Beskow (2025). Supplementary file 1_Swedish Medical LLM Benchmark: development and evaluation of a framework for assessing large language models in the Swedish medical domain.pdf [Dataset]. http://doi.org/10.3389/frai.2025.1557920.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jul 11, 2025
    Dataset provided by
    Frontiers
    Authors
    Birger Moëll; Fabian Farestam; Jonas Beskow
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: We present the Swedish Medical LLM Benchmark (SMLB), an evaluation framework for assessing large language models (LLMs) in the Swedish medical domain.
    Method: The SMLB addresses the lack of language-specific, clinically relevant benchmarks by incorporating four datasets: translated PubMedQA questions, Swedish Medical Exams, Emergency Medicine scenarios, and General Medicine cases.
    Result: Our evaluation of 18 state-of-the-art LLMs reveals GPT-4-turbo, Claude-3.5 (October 2023), and the o3 model as top performers, demonstrating a strong alignment between medical reasoning and general language understanding capabilities. Hybrid systems incorporating retrieval-augmented generation (RAG) improved accuracy for clinical knowledge questions, highlighting promising directions for safe implementation.
    Discussion: The SMLB provides not only an evaluation tool but also reveals fundamental insights about LLM capabilities and limitations in Swedish healthcare applications, including significant performance variations between models. By open-sourcing the benchmark, we enable transparent assessment of medical LLMs while promoting responsible development through community-driven refinement. This study emphasizes the critical need for rigorous evaluation frameworks as LLMs become increasingly integrated into clinical workflows, particularly in non-English medical contexts where linguistic and cultural specificity are paramount.

  14. frames-benchmark

    • huggingface.co
    Cite
    Google, frames-benchmark [Dataset]. https://huggingface.co/datasets/google/frames-benchmark
    Explore at:
    Croissant
    Dataset authored and provided by
    Google (http://google.com/)
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    FRAMES: Factuality, Retrieval, And reasoning MEasurement Set

    FRAMES is a comprehensive evaluation dataset designed to test the capabilities of Retrieval-Augmented Generation (RAG) systems across factuality, retrieval accuracy, and reasoning. Our paper with details and experiments is available on arXiv: https://arxiv.org/abs/2409.12941.

      Dataset Overview
    

    • 824 challenging multi-hop questions requiring information from 2-15 Wikipedia articles
    • Questions span diverse topics… See the full description on the dataset page: https://huggingface.co/datasets/google/frames-benchmark.

  15. This file provides the evaluation metrics used to assess the performance of RAG pipelines in the various papers

    • figshare.com
    xlsx
    Updated Jun 11, 2025
    Cite
    Lameck Mbangula Amugongo; Pietro Mascheroni; Steven Brooks; Stefan Doering; Jan Seidel (2025). This file provides the evaluation metrics used to assess the performance of RAG pipelines in the various papers. [Dataset]. http://doi.org/10.1371/journal.pdig.0000877.s003
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 11, 2025
    Dataset provided by
    PLOS Digital Health
    Authors
    Lameck Mbangula Amugongo; Pietro Mascheroni; Steven Brooks; Stefan Doering; Jan Seidel
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This file provides the evaluation metrics used to assess the performance of RAG pipelines in the various papers.

  16. The BABILong Benchmark

    • github.com
    Updated Apr 15, 2025
    Cite
    (2025). The BABILong Benchmark [Dataset]. https://github.com/booydar/babilong
    Explore at:
    Dataset updated
    Apr 15, 2025
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    This repository contains code and instructions for the BABILong benchmark. The BABILong benchmark is designed to test language models' ability to reason across facts distributed in extremely long documents. BABILong includes a diverse set of 20 reasoning tasks, including fact chaining, simple induction, deduction, counting, and handling lists/sets. BABILong uses tasks with facts and questions from bAbI. PG-19 books are used as the source of long natural contexts.
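
    A toy sketch of the construction idea (purely illustrative, not the benchmark's own generation code): bAbI-style facts are scattered at random positions inside a long stretch of natural text, here a placeholder standing in for PG-19 book passages, and the model must locate and chain them to answer the question.

        import random

        # Toy illustration of a BABILong-style example: hide task-relevant facts
        # inside long distractor text, then ask a question that requires finding
        # and chaining those facts.
        def build_long_context(facts, filler_sentence, n_filler):
            sentences = [filler_sentence] * n_filler
            positions = sorted(random.sample(range(n_filler), k=len(facts)))
            for offset, (pos, fact) in enumerate(zip(positions, facts)):
                sentences.insert(pos + offset, fact)
            return " ".join(sentences)

        facts = ["Mary moved to the bathroom.", "John went to the hallway."]
        filler = "It was a quiet evening in the old town."  # stand-in for book text
        context = build_long_context(facts, filler, n_filler=200)
        question = "Where is Mary?"  # expected answer: bathroom
        print(len(context.split()), question)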

  17. Spreadsheet Manipulation using Large Language Models

    • figshare.com
    zip
    Updated Jul 19, 2025
    Cite
    Amila Indika (2025). Spreadsheet Manipulation using Large Language Models [Dataset]. http://doi.org/10.6084/m9.figshare.29602751.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 19, 2025
    Dataset provided by
    figshare (http://figshare.com/)
    Authors
    Amila Indika
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Spreadsheet manipulation code to text summary dataset description
    The benchmark dataset comprises 111 instances of spreadsheet manipulation tasks, each accompanied by xwAPI code and corresponding subtasks in natural language. The YAML file (.yaml) within each directory contains xwAPI code ("refined response") and its corresponding natural language summary of subtasks ("intermediate response").
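
    A small sketch for reading one instance's YAML file, assuming PyYAML and the two keys quoted above; the file path is hypothetical, while the key names are taken from the description.

        # Read one benchmark instance's YAML file and pull out the two fields
        # named in the dataset description. The path is hypothetical.
        import yaml  # PyYAML

        with open("instance_001/task.yaml", encoding="utf-8") as f:
            record = yaml.safe_load(f)

        xwapi_code = record["refined response"]            # xwAPI spreadsheet code
        subtask_summary = record["intermediate response"]  # natural-language subtasks
        print(subtask_summary)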

  18. open_ragbench

    • huggingface.co
    Cite
    Vectara, open_ragbench [Dataset]. https://huggingface.co/datasets/vectara/open_ragbench
    Explore at:
    Dataset authored and provided by
    Vectara
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Open RAG Benchmark

    The Open RAG Benchmark is a unique, high-quality Retrieval-Augmented Generation (RAG) dataset constructed directly from arXiv PDF documents, specifically designed for evaluating RAG systems with a focus on multimodal PDF understanding. Unlike other datasets, Open RAG Benchmark emphasizes pure PDF content, meticulously extracting and generating queries on diverse modalities including text, tables, and images, even when they are intricately interwoven within a… See the full description on the dataset page: https://huggingface.co/datasets/vectara/open_ragbench.

  19. Police Disclosure Unit performance

    • data.europa.eu
    • data.wu.ac.at
    html, ods
    Updated Nov 8, 2021
    Cite
    The Disclosure and Barring Service (2021). Police Disclosure Unit performance [Dataset]. https://data.europa.eu/data/datasets/police-disclosure-unit-performance?locale=lv
    Explore at:
    Available download formats: html, ods
    Dataset updated
    Nov 8, 2021
    Dataset authored and provided by
    The Disclosure and Barring Service
    License

    Open Government Licence (http://reference.data.gov.uk/id/open-government-licence)

    Description

    Police performance in relation to DBS check applications each month for financial year 2015-16. It shows each police unit's Red, Amber and Green (RAG) status and the associated calculation used to monitor performance against monthly service level agreement targets.

  20. Clean Room Rag Report

    • marketreportanalytics.com
    doc, pdf, ppt
    Updated Apr 9, 2025
    Cite
    Market Report Analytics (2025). Clean Room Rag Report [Dataset]. https://www.marketreportanalytics.com/reports/clean-room-rag-71782
    Explore at:
    Available download formats: ppt, doc, pdf
    Dataset updated
    Apr 9, 2025
    Dataset authored and provided by
    Market Report Analytics
    License

    https://www.marketreportanalytics.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global cleanroom rag market is experiencing robust growth, driven by the increasing demand for contamination control across various industries. The market, estimated at $1.5 billion in 2025, is projected to exhibit a Compound Annual Growth Rate (CAGR) of 6% from 2025 to 2033, reaching an estimated value of $2.5 billion by 2033. This growth is fueled by several key factors. The burgeoning semiconductor industry, with its stringent cleanliness requirements, is a significant driver, alongside the expanding medical device and pharmaceutical sectors, which rely heavily on cleanroom environments for manufacturing and research. Furthermore, the rising adoption of cleanroom practices in the photovoltaic industry, driven by the global push for renewable energy, contributes significantly to market expansion. Polyester rags hold the largest market share among types due to their cost-effectiveness and durability, but the demand for higher-performance materials like Nylon is steadily growing, particularly in applications requiring superior absorbency and chemical resistance. Market segmentation by application reveals a significant share held by the semiconductor industry, followed closely by the medical and photovoltaic sectors. Geographic analysis indicates strong growth potential in Asia Pacific, particularly China and India, driven by increasing industrialization and manufacturing activities. However, factors such as stringent regulatory compliance costs and the availability of alternative cleaning methods pose challenges to the market's continued expansion. Nevertheless, the overall outlook remains positive, with significant growth expected across all segments and regions in the coming years. The competitive landscape is moderately concentrated, with key players including Kimberly Clark, Texwipe, and Berkshire Corporation, alongside several regional and specialized manufacturers. These companies are constantly innovating to improve product performance, expand their offerings to cater to specific application requirements, and enhance their supply chain capabilities to meet the growing demand. Strategies such as mergers and acquisitions, partnerships, and the development of novel materials with enhanced properties are key elements of competition in this market. Further growth opportunities are anticipated through the adoption of sustainable and eco-friendly materials, reducing the environmental impact of disposable rags, and focusing on innovative packaging solutions to improve hygiene and storage capabilities. The market is expected to see a shift towards more specialized, high-performance cleanroom rags tailored to specific applications in different industries.
