This is a test collection for passage and document retrieval, produced in the TREC 2023 Deep Learning track. The Deep Learning Track studies information retrieval in a large-training-data regime: the number of training queries with at least one positive label is at least in the tens of thousands, if not hundreds of thousands or more. This corresponds to real-world scenarios such as training on click logs or on labels from shallow pools (such as the pooling in the TREC Million Query Track, or the evaluation of search engines based on early precision).

Certain machine learning methods, notably those based on deep learning, are known to require very large datasets for training. The lack of such large-scale datasets has been a limitation in developing these methods for common information retrieval tasks such as document ranking. The Deep Learning Track organized in previous years aimed at providing large-scale datasets to TREC and at creating a focused research effort with a rigorous blind evaluation of rankers for the passage ranking and document ranking tasks.

As in previous years, one of the main goals of the track in 2023 is to study which methods work best when a large amount of training data is available. For example, do the same methods that work on small data also work on large data? How much do methods improve when given more training data? What external data and models can be brought to bear in this scenario, and how useful is it to combine full supervision with other forms of supervision?

The collection contains 12 million web pages, 138 million passages from those web pages, search queries, and relevance judgments for the queries.
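As a rough illustration, the collection's queries and judgments can be iterated programmatically. The sketch below assumes the ir_datasets package exposes this collection under an identifier following its naming for earlier Deep Learning years (e.g. trec-dl-2021, trec-dl-2022); the exact identifier should be checked against the ir_datasets catalogue.

```python
# Sketch: iterating over the TREC 2023 Deep Learning test queries and
# relevance judgments with ir_datasets. The dataset identifier is an
# assumption based on ir_datasets' naming for earlier DL years.
import ir_datasets

dataset = ir_datasets.load("msmarco-passage-v2/trec-dl-2023")

for query in dataset.queries_iter():
    print(query.query_id, query.text)  # test queries

for qrel in dataset.qrels_iter():
    # graded relevance judgments: query_id, doc_id, relevance grade
    print(qrel.query_id, qrel.doc_id, qrel.relevance)
    break
```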
In this paper, we propose a weak supervision framework for neural ranking tasks based on the data programming paradigm (Ratner et al., 2016), which enables us to leverage multiple weak supervision signals from different sources.
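The core idea can be sketched in a few lines: several heuristic labeling functions each vote on a query-passage pair, and their aggregated votes serve as a noisy training label. The sketch below is illustrative only, not the paper's implementation; the labeling functions are invented for the example, and Ratner et al. aggregate votes with a learned generative model rather than the simple majority vote shown here.

```python
# Illustrative data-programming sketch for ranking (not the paper's code).
# Each labeling function returns +1 (relevant), -1 (non-relevant), or 0 (abstain).
from typing import Callable, List

def lf_exact_match(query: str, passage: str) -> int:
    return 1 if query.lower() in passage.lower() else 0

def lf_term_overlap(query: str, passage: str) -> int:
    q_terms = set(query.lower().split())
    overlap = len(q_terms & set(passage.lower().split())) / max(len(q_terms), 1)
    if overlap > 0.5:
        return 1
    if overlap < 0.1:
        return -1
    return 0  # abstain

def weak_label(query: str, passage: str,
               lfs: List[Callable[[str, str], int]]) -> int:
    # Majority vote stands in for data programming's learned label model.
    score = sum(lf(query, passage) for lf in lfs)
    return 1 if score > 0 else (-1 if score < 0 else 0)

print(weak_label("deep learning track",
                 "The TREC Deep Learning track studies large-data IR.",
                 [lf_exact_match, lf_term_overlap]))
```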
https://choosealicense.com/licenses/unknown/
T2Reranking, an MTEB (Massive Text Embedding Benchmark) dataset
T2Ranking: A large-scale Chinese Benchmark for Passage Ranking
Task category t2t
Domains None
Reference https://arxiv.org/abs/2304.03679
How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

task = mteb.get_tasks(tasks=["T2Reranking"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
To… See the full description on the dataset page: https://huggingface.co/datasets/mteb/T2Reranking.
MMarcoReranking, an MTEB (Massive Text Embedding Benchmark) dataset
mMARCO is a multilingual version of the MS MARCO passage ranking dataset
Task category t2t
Domains None
Reference https://github.com/unicamp-dl/mMARCO
How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code (the snippet was truncated in the listing; the final line is completed from the identical T2Reranking snippet above):

```python
import mteb

task = mteb.get_tasks(tasks=["MMarcoReranking"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```

See the full description on the dataset page: https://huggingface.co/datasets/mteb/MMarcoReranking.
Attribution-NoDerivs 4.0 (CC BY-ND 4.0)
https://creativecommons.org/licenses/by-nd/4.0/
License information was derived automatically
Dataset Card for the Qur'anic Reading Comprehension Dataset (QRCD)
Dataset Summary
The QRCD (Qur'anic Reading Comprehension Dataset) comprises 1,093 question-passage pairs, coupled with their extracted answers to form 1,337 question-passage-answer triplets.
Supported Tasks and Leaderboards
This task is evaluated as a ranking task. To give credit to a QA system that may retrieve an answer (not necessarily at the first rank) that… See the full description on the dataset page: https://huggingface.co/datasets/Salama1429/tarteel-ai-QuranQA.
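The ranking-style evaluation described above can be illustrated with a reciprocal-rank-style measure: a system still earns credit when a correct answer appears below the first rank, discounted by its position. The sketch below shows only the general idea, not the task's official metric; exact string matching is an assumption made for simplicity.

```python
# Sketch of reciprocal-rank-style credit for ranked answers (illustrative only,
# not the task's official measure). An answer matching a gold answer at rank k
# earns 1/k; exact string match is a simplifying assumption.
from typing import List

def reciprocal_rank(ranked_answers: List[str], gold_answers: List[str]) -> float:
    gold = {a.strip().lower() for a in gold_answers}
    for k, answer in enumerate(ranked_answers, start=1):
        if answer.strip().lower() in gold:
            return 1.0 / k
    return 0.0

# A system returning the gold answer at rank 2 still earns 0.5 credit.
print(reciprocal_rank(["wrong answer", "correct answer"], ["correct answer"]))
```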
Apache License, v2.0
https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Arabic Mr. TyDi in Triplet Format
Dataset Summary
This dataset is a transformed version of the Arabic subset of the Mr. TyDi dataset, designed specifically for training retrieval and re-ranking models. Each query is paired with a positive passage and one of several negative passages in a triplet format: (query, positive, negative). The restructuring yields a total of 362,000 rows, making the dataset well suited to pairwise ranking tasks and contrastive learning approaches.… See the full description on the dataset page: https://huggingface.co/datasets/NAMAA-Space/Ara-TyDi-Triplet.
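A minimal sketch of how such triplets are typically consumed, here with the sentence-transformers library and MultipleNegativesRankingLoss. The column names query/positive/negative and the base model are assumptions for illustration, not details confirmed by the dataset card.

```python
# Sketch: training a retriever on (query, positive, negative) triplets with
# sentence-transformers. Column names and the base model are assumptions.
from datasets import load_dataset
from sentence_transformers import InputExample, SentenceTransformer, losses
from torch.utils.data import DataLoader

dataset = load_dataset("NAMAA-Space/Ara-TyDi-Triplet", split="train")

examples = [
    InputExample(texts=[row["query"], row["positive"], row["negative"]])
    for row in dataset.select(range(1000))  # small slice for illustration
]

model = SentenceTransformer(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)
loader = DataLoader(examples, shuffle=True, batch_size=16)
# Treats the third text as a hard negative, plus in-batch negatives.
loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(loader, loss)], epochs=1)
```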