Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Card for GPQA
GPQA is a multiple-choice, Q&A dataset of very hard questions written and validated by experts in biology, physics, and chemistry. When attempting questions out of their own domain (e.g., a physicist answers a chemistry question), these experts get only 34% accuracy, despite spending >30m with full access to Google. We request that you do not reveal examples from this dataset in plain text or images online, to reduce the risk of leakage into foundation model… See the full description on the dataset page: https://huggingface.co/datasets/Idavidrein/gpqa.
GPQA stands for Graduate-Level Google-Proof Q&A Benchmark. It is a challenging dataset designed to evaluate the capabilities of large language models (LLMs) and scalable oversight mechanisms. Key details:
Description: GPQA consists of 448 multiple-choice questions meticulously crafted by domain experts in biology, physics, and chemistry. These questions are intentionally designed to be high-quality and extremely difficult.
Expert Accuracy: Even experts who hold or are pursuing PhDs in the corresponding domains achieve only 65% accuracy on these questions (74% when clear mistakes identified in retrospect are excluded).
Google-Proof: The questions are "Google-proof": even with unrestricted access to the web, highly skilled non-expert validators reach only 34% accuracy despite spending over 30 minutes searching for answers.
AI Systems Difficulty: State-of-the-art AI systems, including the paper's strongest GPT-4-based baseline, achieve only 39% accuracy on this dataset.
The difficulty of GPQA for both skilled non-experts and cutting-edge AI systems makes it an excellent resource for conducting realistic scalable oversight experiments. These experiments aim to explore ways for human experts to reliably obtain truthful information from AI systems that surpass human capabilities [1][3].
In summary, GPQA serves as a valuable benchmark for assessing the robustness and limitations of language models, especially when faced with complex and nuanced questions. Its difficulty level encourages research into effective oversight methods, bridging the gap between AI and human expertise.
(1) GPQA: A Graduate-Level Google-Proof Q&A Benchmark. arXiv. https://arxiv.org/abs/2311.12022
(2) GPQA: A Graduate-Level Google-Proof Q&A Benchmark. Klu. https://klu.ai/glossary/gpqa-eval
(3) GPA Dataset (Spring 2010 through Spring 2020). Data Science Discovery. https://discovery.cs.illinois.edu/dataset/gpa/
(4) GPQA: A Graduate-Level Google-Proof Q&A Benchmark. GitHub. https://github.com/idavidrein/gpqa
(5) Data Sets. OpenIntro. https://www.openintro.org/data/index.php?data=satgpa
(6) DOI: https://doi.org/10.48550/arXiv.2311.12022
(7) https://arxiv.org/abs/2311.12022
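For readers who want to inspect the data directly, a minimal loading sketch follows. It assumes the Hugging Face datasets library and the gpqa_main configuration name listed on the dataset card; the dataset is gated, so the terms must be accepted at https://huggingface.co/datasets/Idavidrein/gpqa and authentication may be required.

# Minimal sketch, assuming the "datasets" library and the gpqa_main config from the
# dataset card; the dataset is gated, so you may need to accept its terms and log in
# (e.g. via huggingface-cli login) before this works.
from datasets import load_dataset

gpqa = load_dataset("Idavidrein/gpqa", "gpqa_main", split="train")
print(len(gpqa))        # number of questions in this config
print(gpqa[0].keys())   # inspect the actual column names before relying on them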
Comparison, by model, of evaluations independently conducted by Artificial Analysis.
fingertap/GPQA-Diamond dataset hosted on Hugging Face and contributed by the HF Datasets community
GPQA Diamond Dataset
This dataset contains filtered JSONL files of human annotations on question specificity, answer uniqueness, and answer matching against the ground truth, for different models, on the GPQA Diamond dataset.
The dataset was annotated by two human graders. It contains 198 (original size) * 2 = 396 rows, as each row appears twice (once per grader). A human grader, given the question, the ground-truth answer, and the model response, has to judge whether the response matches the… See the full description on the dataset page: https://huggingface.co/datasets/nikhilchandak/gpqa-diamond-annotations.
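As a rough illustration of how such a doubled-up annotation file might be analyzed, the sketch below groups rows by question and measures how often the two graders agree; the file name and the fields question_id and match are hypothetical placeholders, not the dataset's documented schema.

# Hypothetical sketch: raw inter-annotator agreement from a JSONL file in which each
# question appears twice (once per human grader). "annotations.jsonl", "question_id"
# and "match" are placeholders, not the dataset's actual file or field names.
import json
from collections import defaultdict

votes = defaultdict(list)
with open("annotations.jsonl") as f:
    for line in f:
        row = json.loads(line)
        votes[row["question_id"]].append(row["match"])

pairs = [v for v in votes.values() if len(v) == 2]
agreement = sum(a == b for a, b in pairs) / len(pairs)
print(f"Raw agreement over {len(pairs)} questions: {agreement:.2%}")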
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
nikhilchandak/GPQA-diamond-free dataset hosted on Hugging Face and contributed by the HF Datasets community
Comparison, by model, of the Artificial Analysis Intelligence Index, which incorporates 7 evaluations: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME, and MATH-500.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
nikhilchandak/gpqa-diamond-test2 dataset hosted on Hugging Face and contributed by the HF Datasets community
mkhalifa/gpqa-diamond-physics dataset hosted on Hugging Face and contributed by the HF Datasets community
Reasoning PRM Preference Dataset
This dataset contains reasoning traces from multiple sources (GPQA Diamond and MMLU Pro), labeled with preference information based on correctness verification.
Dataset Description
Overview
The dataset consists of reasoning problems and their solutions, where each example has been verified for correctness and labeled with a preference score. It combines data from two main sources:
GPQA Diamond and MMLU Pro
Data Fields… See the full description on the dataset page: https://huggingface.co/datasets/ariaattarml/verified-reasoning-o1-gpqa-mmlu-pro.
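As a sketch of how correctness-verified traces can be turned into preference data (the dataset's actual fields are only described on its page), one simple scheme pairs a correct trace with an incorrect trace for the same problem; the field names below are hypothetical.

# Hypothetical sketch: build (chosen, rejected) preference pairs from reasoning traces
# that carry a correctness label. The fields "problem", "trace" and "is_correct" are
# placeholders, not the dataset's documented schema.
from itertools import product
from collections import defaultdict

def build_preference_pairs(examples):
    by_problem = defaultdict(list)
    for ex in examples:
        by_problem[ex["problem"]].append(ex)
    pairs = []
    for problem, exs in by_problem.items():
        correct = [e for e in exs if e["is_correct"]]
        incorrect = [e for e in exs if not e["is_correct"]]
        for chosen, rejected in product(correct, incorrect):
            pairs.append({"problem": problem,
                          "chosen": chosen["trace"],
                          "rejected": rejected["trace"]})
    return pairs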
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
GPQA Diamond with Llama-3.1-70B-Instruct (up to 1K Samples)
This dataset contains 198 graduate-level science questions from the GPQA Diamond benchmark with up to 1000 candidate responses generated by Llama-3.1-70B-Instruct for each problem. Each response has been evaluated for correctness using a mixture of GPT-4o-mini and procedural Python code to robustly parse different answer formats, and scored by multiple reward models (scalar values) and LM judges (boolean verdicts). For more… See the full description on the dataset page: https://huggingface.co/datasets/hazyresearch/GPQA_Diamond_with_Llama_3.1_70B_Instruct_up_to_1K_Samples_v1.
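The procedural parsing code itself is not reproduced here; the snippet below is only an illustrative sketch of the kind of pattern matching such a pipeline might use to pull a final multiple-choice letter out of a free-form response before falling back to an LM judge.

# Illustrative sketch (not the dataset authors' actual code): extract a final A-D choice
# from a model response using a few common answer formats; returns None when nothing
# parses and a judge model would be needed instead.
import re

ANSWER_PATTERNS = [
    r"answer\s*(?:is|:)\s*\(?([A-D])\)?",   # "The answer is (C)" / "Answer: C"
    r"\\boxed\{\(?([A-D])\)?\}",            # LaTeX-style \boxed{C}
    r"^\s*\(?([A-D])\)?\s*$",               # a bare letter on its own line
]

def parse_choice(response):
    for pattern in ANSWER_PATTERNS:
        matches = re.findall(pattern, response, flags=re.IGNORECASE | re.MULTILINE)
        if matches:
            return matches[-1].upper()      # take the last stated answer
    return None

print(parse_choice("Some reasoning...\nTherefore the answer is (B)."))  # -> B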
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
In view of the significant performance improvements recently demonstrated by the Qwen3 series of models, we conducted a comprehensive evaluation of their capabilities across a range of representative benchmarks. Specifically, we evaluated the Qwen3 models on AIME2024, AIME2025, and GPQA Diamond. The prompt format used in these experiments is provided in the response files; additional details regarding the prompt design will be presented at a later time. Each set of inference experiments… See the full description on the dataset page: https://huggingface.co/datasets/Xuerui2312/Qwen3-8B-Rollout64-32k-AIME2024-AIME2025-GPQA.
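For context, with 64 rollouts per question the natural summary statistic is average accuracy (a sample estimate of pass@1); a sketch of that aggregation under an assumed record layout follows, since the actual file schema is only described on the dataset page.

# Hedged sketch: per-question pass@1 estimated from repeated rollouts, then averaged
# across questions. The record layout (question_id + boolean correct) is an assumption,
# not the dataset's documented schema.
from collections import defaultdict

def mean_pass_at_1(records):
    per_question = defaultdict(list)
    for r in records:
        per_question[r["question_id"]].append(bool(r["correct"]))
    return sum(sum(v) / len(v) for v in per_question.values()) / len(per_question)

rollouts = [
    {"question_id": "q1", "correct": True},
    {"question_id": "q1", "correct": False},
    {"question_id": "q2", "correct": True},
]
print(mean_pass_at_1(rollouts))  # (0.5 + 1.0) / 2 = 0.75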
Freeform Datasets
This repository contains two carefully curated datasets for evaluating large language models on human-filtered subsets of popular benchmarks that are suitable for evaluation in a freeform (open-ended) format. These datasets were developed as part of our paper, Answer Matching Outperforms Multiple Choice for Language Model Evaluation.
Dataset Structure
The repository contains two splits:
1. gpqa_diamond Split
Source: Filtered subset of… See the full description on the dataset page: https://huggingface.co/datasets/nikhilchandak/freeform-datasets.
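To make the freeform setting concrete, a minimal sketch of answer matching follows: a judge model is shown the question, the reference answer, and the free-form response, and asked whether they express the same answer. The prompt wording and the judge stub are illustrative assumptions, not the paper's implementation.

# Illustrative sketch of freeform answer matching: instead of scoring a multiple-choice
# letter, a judge model decides whether the free-form response conveys the same final
# answer as the reference. The prompt text and judge() stub are assumptions.
def build_matching_prompt(question, reference, response):
    return (
        "You are grading an exam answer.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate response: {response}\n"
        "Does the candidate response express the same final answer as the reference? "
        "Reply with exactly 'yes' or 'no'."
    )

def judge(prompt):
    # Placeholder: in practice this would call an LLM API and parse the yes/no verdict.
    raise NotImplementedError

print(build_matching_prompt(
    "What is the conjugate base of HSO4-?",
    "SO4^2-",
    "Removing a proton gives the sulfate ion, SO4^2-.",
))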
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Answer Matching Dataset
This dataset contains a single split for human annotation analysis:
gpqa_diamond_annotations: Combined GPQA Diamond annotations from all annotators (Ameya + Nikhil)
All other evaluation files are available in the "Files and versions" tab, preserving the original directory structure.
Directory Structure and Data Overview
gpqa_diamond_mcq/
  combined_samples.jsonl
  samples_deepseek-r1-0528.jsonl
  samples_llama-4-scout.jsonl
  … See the full description on the dataset page: https://huggingface.co/datasets/nikhilchandak/answer-matching.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
OpenR1-Math-220k_decontaminated
Decontaminated version of open-r1/OpenR1-Math-220k - default/train
Decontamination
Removed any questions that have an 8-gram overlap with common benchmarks: AIME 2024, AIME 2025, MATH500, GPQA Diamond, LiveCodeBench Code Generation Lite. Used GitHub: huggingface/open-r1/scripts/decontaminate.py with all defaults, following https://github.com/huggingface/open-r1#data-decontamination
python scripts/decontaminate.py
--dataset… See the full description on the dataset page: https://huggingface.co/datasets/notpaulmartin/OpenR1-Math-220k_decontaminated.
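The linked decontaminate.py script is the authoritative implementation; purely as an illustration of the 8-gram idea described above, a training question can be dropped whenever any of its word 8-grams also occurs in a benchmark question:

# Illustrative sketch of 8-gram decontamination (the real logic lives in
# huggingface/open-r1, scripts/decontaminate.py): drop any training question that
# shares at least one word 8-gram with a benchmark question.
def ngrams(text, n=8):
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(question, benchmark_ngrams, n=8):
    return not ngrams(question, n).isdisjoint(benchmark_ngrams)

benchmark_questions = ["..."]   # e.g. AIME 2024/2025, MATH500, GPQA Diamond items
benchmark_ngrams = set().union(*(ngrams(q) for q in benchmark_questions))
train_questions = ["..."]       # questions to filter
clean = [q for q in train_questions if not is_contaminated(q, benchmark_ngrams)]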
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
aradhye/gpqa_diamond dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Description:
OpenScience is a multi-domain synthetic dataset designed to improve general-purpose reasoning in large language models (LLMs). The dataset contains multiple-choice question-answer pairs with detailed reasoning traces and spans diverse domains, including STEM, law, economics, and the humanities. OpenScience aims to boost accuracy on advanced benchmarks such as GPQA-Diamond and MMLU-Pro via supervised finetuning or reinforcement learning. This… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/OpenScience.