Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
AlphaNLI: an MTEB (Massive Text Embedding Benchmark) dataset
Measuring the ability to retrieve the ground-truth answers to reasoning-task queries on AlphaNLI.
Task category: t2t
Domains: Encyclopaedic, Written
Reference: https://leaderboard.allenai.org/anli/submissions/get-started
How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code (truncated on the dataset card):
import mteb
task = mteb.get_task("AlphaNLI")
evaluator = …
See the full description on the dataset page: https://huggingface.co/datasets/mteb/AlphaNLI.
Unknown license: https://choosealicense.com/licenses/unknown/
SIQA: an MTEB (Massive Text Embedding Benchmark) dataset
Measuring the ability to retrieve the ground-truth answers to reasoning-task queries on SIQA.
Task category: t2t
Domains: Encyclopaedic, Written
Reference: https://leaderboard.allenai.org/socialiqa/submissions/get-started
How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code (truncated on the dataset card):
import mteb
task = mteb.get_task("SIQA")
evaluator = mteb.MTEB([task])
… See the full description on the dataset page: https://huggingface.co/datasets/mteb/SIQA.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Leaderboard
BCEmbedding: Bilingual and Crosslingual Embedding for RAG
GitHub
Contents: Bilingual and Crosslingual Superiority; Key Features; Latest Updates; Model List; Manual Installation; Quick Start;
Evaluation: Evaluate Semantic Representation by MTEB; Evaluate RAG by LlamaIndex;
Leaderboard: Semantic Representation Evaluations in MTEB; RAG Evaluations in LlamaIndex;
Youdao's BCEmbedding API; WeChat Group; Citation; … See the full description on the dataset page: https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset.
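The Quick Start item above refers to the BCEmbedding Python package; below is a minimal sketch of encoding bilingual sentences with its EmbeddingModel class (usage assumed from the project's README; the model name comes from its model list):

from BCEmbedding import EmbeddingModel

# Load the bilingual/crosslingual embedding model.
model = EmbeddingModel(model_name_or_path="maidalun1020/bce-embedding-base_v1")

# Encode English and Chinese sentences into dense vectors for retrieval.
sentences = ["BCEmbedding supports bilingual retrieval.", "BCEmbedding 支持双语检索。"]
embeddings = model.encode(sentences)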