This dataset is a subset of the original eli5 dataset available on Hugging Face.
https://choosealicense.com/licenses/unknown/
The ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. After 2017, a tagging system was introduced to this subreddit so that questions can be categorized into different topics according to their tags. Since the training and validation sets are built from questions in different topics, the dataset is expected to alleviate the train/validation overlap issue in the original ELI5 dataset.
ELI5 paired
This is a processed version of the eli5 dataset. Compared to "eli5_rlhf", this dataset contains only QA pairs from the train split of the eli5 dataset, and only from the subreddit explainlikeimfive. Furthermore, the function

    def get_question(example):
        title = example["title"]
        selftext = example["selftext"]
        if selftext:
            if selftext[-1] not in [".", "?", "!"]:
                seperator = ". "
            else:
                seperator = " "
            question = title…

See the full description on the dataset page: https://huggingface.co/datasets/vincentmin/eli5_rlhf_explainlikeim5.
https://creativecommons.org/publicdomain/zero/1.0/
ELI5 means "Explain like I am 5". It was originally a long, free-form question-answering dataset scraped from the Reddit ELI5 subforum. The original ELI5 dataset (https://github.com/facebookresearch/ELI5) can be used to train a model for long, free-form question answering, e.g. with encoder-decoder models such as T5 or BART.
Once we have a model, how can we estimate its performance (its ability to give high-quality answers)? Conventional methods use ROUGE-family metrics (see the ELI5 paper linked above).
However, ROUGE scores are n-gram based and need to compare a generated answer to a ground-truth answer. Unfortunately, n-gram scoring cannot properly evaluate high-quality paraphrased answers.
Worse, ROUGE needs a ground-truth answer in order to compute a score at all. This perspective goes against the spirit of free-form question answering, where there are many possible (non-paraphrase) valid and good answers.
To summarize, creative, high-quality answers cannot be evaluated with ROUGE, which prevents us from building (and evaluating) creative models.
This dataset, in contrast, is aimed at training a scoring (regression) model that can predict an upvote score for each Q-A pair individually (not an A-A pair as in ROUGE).
The data is simply a CSV file containing Q-A pairs and their scores. Each line contains the Q-A text (in RoBERTa format) and its upvote score (a non-negative integer).
It is intended to make it easy and direct to create a scoring model with RoBERTa (or another Transformer model, by changing the separation token).
In the CSV file there are two columns: qa and answer_score.
Each row in qa is written in RoBERTa paired-sentences format: the question and the answer joined by the separator token.
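As a concrete sketch of this layout (RoBERTa's </s></s> pair separator and the exact column handling are assumptions based on the description above, not taken from the file itself), one row could be written and read back like this:

```python
import csv
import io

SEP = "</s></s>"  # RoBERTa's separator between paired sentences (assumed)

def make_qa(question: str, answer: str) -> str:
    """Join a question and its answer in RoBERTa paired-sentence style."""
    return f"{question}{SEP}{answer}"

# Write one row in the dataset's two-column layout: qa, answer_score
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["qa", "answer_score"])
writer.writeheader()
writer.writerow({"qa": make_qa("Why is the sky blue?",
                               "Shorter wavelengths of light scatter more."),
                 "answer_score": 42})

# Read it back the way a training script would
buf.seek(0)
rows = list(csv.DictReader(buf))
print(rows[0]["answer_score"])  # -> 42
```

A regression model would then tokenize the qa string and fit answer_score as its target.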
With answer_score we follow these principles:
- A high-quality answer related to its question should get a high score (upvotes)
- A low-quality answer related to its question should get a low score
- A well-written answer NOT related to its question should get a score of 0
Each positive Q-A pair comes from the original ELI5 dataset (true upvote score). Each 0-score Q-A pair is constructed as detailed in the next subsection.
The principle is contrastive training: we need reasonably hard 0-score pairs for the model to generalize. Too-easy 0-score pairs (e.g. a question paired with a random answer) would be trivial, and the model would learn nothing.
Therefore, for each question, we construct two answers (two 0-score pairs) where each answer is related to the topic of the question but does not answer it.
This is achieved by vectorizing all questions with RetriBERT and storing the vectors in a FAISS index. We can then measure the distance between two question vectors using cosine distance.
More precisely, for a question Q1 we choose the answers of two related (but non-identical) questions Q2 and Q3, i.e. answers A2 and A3, to construct the 0-score pairs Q1-A2 and Q1-A3. Combined with the positive-score pair Q1-A1, we have three pairs for Q1, and three pairs per question in total. Therefore, from the 272,000 examples of the original ELI5, this dataset has three times that size = 816,000 examples.
Note that two question vectors that are very close may come from the same (paraphrased) question, while two that are very far apart belong to totally different questions. Therefore, we need a threshold that selects not-too-close and not-too-far question pairs, so that we get non-identical but same-topic questions. In a simple experiment, a cosine distance of 10-11 between RetriBERT vectors seemed to work well, so we use this number as the threshold to construct the 0-score Q-A pairs.
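The not-too-close/not-too-far selection can be sketched as follows. This is an illustration only: random vectors stand in for the RetriBERT embeddings (scaled so typical distances land near the 10-11 band), a plain L2 distance stands in for the card's metric, and a brute-force scan replaces the FAISS index.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for RetriBERT question embeddings; scaled so that typical
# pairwise distances fall near the 10-11 band described above.
q_vecs = rng.normal(size=(1000, 128)) * 0.65

LOW, HIGH = 10.0, 11.0  # "not too close, not too far" band

def related_questions(i: int, vecs: np.ndarray) -> np.ndarray:
    """Indices of questions whose distance to question i lies in the band:
    close enough to share a topic, far enough to not be a paraphrase."""
    d = np.linalg.norm(vecs - vecs[i], axis=1)  # brute force; FAISS in practice
    in_band = (d > LOW) & (d < HIGH)
    in_band[i] = False  # never pair a question with itself
    return np.flatnonzero(in_band)

# Pick two same-topic, non-identical questions for Q1 = question 0;
# their answers would become the two 0-score pairs for Q1.
candidates = related_questions(0, q_vecs)
q2, q3 = candidates[:2]
```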
A roberta-base baseline with MAE 3.91 on the validation set can be found here:
https://www.kaggle.com/ratthachat/eli5-scorer-roberta-base-500k-mae391
Thanks to the Facebook AI team for creating the original ELI5 dataset, and to the Hugging Face nlp library for making this dataset easy to access. - https://github.com/facebookresearch/ELI5 - https://huggingface.co/nlp/viewer/
My project on ELI5 is mainly inspired by this amazing work of Yacine Jernite: https://yjernite.github.io/lfqa.html
Reddit (Title, Body)-Pairs
This dataset contains JSONL files of (title, body) pairs from Reddit. Each line is a JSON object of the following format: {'title': 'The title of a thread', 'body': 'The longer body of the thread', 'subreddit': 'subreddit_name'}
The 2021 file contains submissions up to and including 2021-06. Entries in the respective files are shuffled on a monthly basis. The data has been filtered to:
- Remove threads with an upvote_ratio < 0.5
- Only include threads… See the full description on the dataset page: https://huggingface.co/datasets/sentence-transformers/reddit-title-body.
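For reference, a line in the stated format parses straightforwardly (the field values below are the placeholders from the description, not real data):

```python
import json

# One JSONL line in the documented (title, body, subreddit) shape
line = ('{"title": "The title of a thread", '
        '"body": "The longer body of the thread", '
        '"subreddit": "subreddit_name"}')
record = json.loads(line)
print(record["subreddit"])  # -> subreddit_name
```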
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A meta dataset of Reddit's own /r/datasets community.
kuengroc/eli5 dataset hosted on Hugging Face and contributed by the HF Datasets community
Reddit Posts about Mental Health
vibrantlabsai/ELI5 dataset hosted on Hugging Face and contributed by the HF Datasets community
P1ayer-1/eli5-questions dataset hosted on Hugging Face and contributed by the HF Datasets community
https://choosealicense.com/licenses/unknown/
NomaDamas/eli5-qa dataset hosted on Hugging Face and contributed by the HF Datasets community
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
sutro/llm-as-a-judge-eli5 dataset hosted on Hugging Face and contributed by the HF Datasets community
sujeongh/eli5-instruction dataset hosted on Hugging Face and contributed by the HF Datasets community
mksethi/eli5-gemma-features dataset hosted on Hugging Face and contributed by the HF Datasets community
ELI5 paired
This is a processed version of the eli5 dataset. The dataset was created by following very closely the steps of the stack-exchange-paired dataset. The following steps were applied:
- Create pairs (response_j, response_k) where j was rated better than k
- Sample at most 10 pairs per question
- Shuffle the dataset globally
This dataset is designed to be used for preference learning with techniques such as Reinforcement Learning from Human Feedback (RLHF). The processing notebook is in the… See the full description on the dataset page: https://huggingface.co/datasets/vincentmin/eli5_rlhf.
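The pairing steps can be sketched in plain Python. This is a minimal illustration of the logic, not the project's actual notebook; the toy questions and answers below are made up, and the response_j/response_k field names follow the description above.

```python
import itertools
import random

def build_preference_pairs(answers, max_pairs=10, rng=None):
    """answers: list of (text, upvote_score) for one question.
    Returns up to max_pairs (response_j, response_k) tuples where
    response_j was rated strictly better than response_k."""
    rng = rng or random.Random(0)
    pairs = []
    for (a, sa), (b, sb) in itertools.combinations(answers, 2):
        if sa > sb:
            pairs.append((a, b))
        elif sb > sa:
            pairs.append((b, a))
    rng.shuffle(pairs)        # avoid biasing toward early answers
    return pairs[:max_pairs]  # "sample at most 10 pairs per question"

# Toy data standing in for eli5 questions and scored answers
questions = {
    "q1": [("great answer", 50), ("ok answer", 5), ("weak answer", 1)],
    "q2": [("detailed answer", 12), ("terse answer", 3)],
}
dataset = []
for q, answers in questions.items():
    for j, k in build_preference_pairs(answers):
        dataset.append({"question": q, "response_j": j, "response_k": k})
random.Random(0).shuffle(dataset)  # "shuffle the dataset globally"
print(len(dataset))  # -> 4 (3 pairs for q1, 1 for q2)
```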
seongil-dn/mteb-eli5 dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This corpus contains preprocessed posts from the Reddit dataset. It consists of 3,848,330 posts with an average length of 270 words for content and 28 words for the summary.
Features include the strings: author, body, normalizedBody, content, summary, subreddit, subreddit_id. The content field is used as the document and the summary field as the summary.
pinecone/reddit-qa dataset hosted on Hugging Face and contributed by the HF Datasets community
https://choosealicense.com/licenses/undefined/
Dataset Card for "REDDIT_comments"
Dataset Summary
Comments from 50 high-quality subreddits, extracted from the Reddit Pushshift data dumps (from 2006 to Jan 2023).
Supported Tasks
These comments can be used for text generation and language modeling, as well as dialogue modeling.
Dataset Structure
Data Splits
Each split corresponds to a specific subreddit in the following list: "tifu", "explainlikeimfive", "WritingPrompts", "changemyview"… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceGECLM/REDDIT_comments.
This dataset was processed from the data released in the FILCO paper.