12 datasets found
  1. VizWiz: Answering Visual Questions from Blind People

    • kaggle.com
    zip
    Updated Nov 9, 2018
    Cite
    Danielh Carranza (2018). VizWiz [Dataset]. https://www.kaggle.com/ingbiodanielh/vizwiz
    Explore at:
    zip (30,808,514,012 bytes)
    Dataset updated
    Nov 9, 2018
    Authors
    Danielh Carranza
    License

    CC0 1.0 (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Context

    We propose an artificial intelligence challenge to design algorithms that assist people who are blind to overcome their daily visual challenges. For this purpose, we introduce the VizWiz dataset, which originates from a natural visual question answering setting where blind people each took an image and recorded a spoken question about it, together with 10 crowdsourced answers per visual question. Our proposed challenge addresses the following two tasks for this dataset: (1) predict the answer to a visual question and (2) predict whether a visual question cannot be answered. Ultimately, we hope this work will educate more people about the technological needs of blind people while providing an exciting new opportunity for researchers to develop assistive technologies that eliminate accessibility barriers for blind people. Example images: http://vizwiz.org/pics/vqa-examples.jpg

    Content

    Visual questions are split into three JSON files: train, validation, and test. Answers are publicly shared for the train and validation splits and hidden for the test split. APIs are provided to demonstrate how to parse the JSON files and evaluate methods against the ground truth.

    • 20,000 training image/question pairs
    • 200,000 training answer/answer confidence pairs
    • 3,173 validation image/question pairs
    • 31,730 validation answer/answer confidence pairs
    • 8,000 test image/question pairs
    • Python API to read and visualize the VizWiz dataset
    • Python challenge evaluation code
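
    The JSON splits can be read with standard tooling. The following is a minimal sketch, assuming the annotation layout of the official VizWiz VQA release; the file path and the image/question/answers field names are assumptions, not taken from this page:

      import json

      # Read one annotation split and print a few records (field names assumed).
      with open("Annotations/train.json") as f:
          records = json.load(f)

      for rec in records[:3]:
          answers = [a.get("answer") for a in rec.get("answers", [])]
          print(rec.get("image"), "|", rec.get("question"), "->", answers)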

    Acknowledgements

    This dataset comes from the VizWiz Challenge.

    Inspiration

    I brought this dataset to Kaggle because I want to help blind people, and doing that will take help from many other people.

  2. PAN22 Authorship Analysis: Style Change Detection

    • zenodo.org
    zip
    Updated Dec 6, 2023
    Cite
    Eva Zangerle; Maximilian Mayerl; Michael Tschuggnall; Martin Potthast; Benno Stein (2023). PAN22 Authorship Analysis: Style Change Detection [Dataset]. http://doi.org/10.5281/zenodo.6334245
    Explore at:
    zip
    Dataset updated
    Dec 6, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Eva Zangerle; Maximilian Mayerl; Michael Tschuggnall; Martin Potthast; Benno Stein
    Description

    This is the dataset for the Style Change Detection task of PAN 2022.

    Task

    The goal of the style change detection task is to identify text positions within a given multi-author document at which the author switches. Hence, a fundamental question is the following: if multiple authors have written a text together, can we find evidence for this fact, i.e., do we have a means to detect variations in the writing style? Answering this question is among the most difficult and most interesting challenges in author identification: style change detection is the only means of detecting plagiarism in a document when no comparison texts are given; likewise, style change detection can help to uncover gift authorships, to verify a claimed authorship, or to develop new technology for writing support.

    Previous editions of the Style Change Detection task aimed at, e.g., detecting whether a document is single- or multi-authored (2018), determining the actual number of authors within a document (2019), detecting whether there was a style change between two consecutive paragraphs (2020, 2021), and locating where the actual style changes occurred (2021). Based on the progress made towards this goal in previous years, we again extend the set of challenges to entice both novices and experts:

    Given a document, we ask participants to solve the following three tasks:

    • [Task1] Style Change Basic: for a text written by two authors that contains a single style change only, find the position of this change (i.e., cut the text into the two authors’ texts on the paragraph-level),
    • [Task2] Style Change Advanced: for a text written by two or more authors, find all positions of writing style change (i.e., assign all paragraphs of the text uniquely to some author out of the number of authors assumed for the multi-author document)
    • [Task3] Style Change Real-World: for a text written by two or more authors, find all positions of writing style change, where style changes now not only occur between paragraphs, but at the sentence level.

    All documents are provided in English and may contain an arbitrary number of style changes, resulting from at most five different authors.

    Data

    To develop and then test your algorithms, three datasets including ground truth information are provided (dataset1 for task 1, dataset2 for task 2, and dataset3 for task 3).

    Each dataset is split into three parts:

    1. training set: Contains 70% of the whole dataset and includes ground truth data. Use this set to develop and train your models.
    2. validation set: Contains 15% of the whole dataset and includes ground truth data. Use this set to evaluate and optimize your models.
    3. test set: Contains 15% of the whole dataset, no ground truth data is given. This set is used for evaluation (see later).

    You are free to use additional external data for training your models. However, we ask you to make the additional data utilized freely available under a suitable license.

    Input Format

    The datasets are based on user posts from various sites of the StackExchange network, covering different topics. We refer to each input problem (i.e., the document for which to detect style changes) by an ID, which is subsequently also used to identify the submitted solution to this input problem. We provide one folder for train, validation, and test data for each dataset, respectively.

    For each problem instance X (i.e., each input document), two files are provided:

    1. problem-X.txt contains the actual text, where paragraphs are denoted by newlines (\n) for tasks 1 and 2. For task 3, we provide one sentence per paragraph (again, split by \n).
    2. truth-problem-X.json contains the ground truth, i.e., the correct solution in JSON format. An example file is listed in the following (note that we list keys for the three tasks here):
      {
      "authors": NUMBER_OF_AUTHORS,
      "site": SOURCE_SITE,
      "changes": RESULT_ARRAY_TASK1 or RESULT_ARRAY_TASK3,
      "paragraph-authors": RESULT_ARRAY_TASK2
      }

      The result for task 1 (key "changes") is represented as an array, holding a binary value for each pair of consecutive paragraphs within the document (0 if there was no style change, 1 if there was a style change). For task 2 (key "paragraph-authors"), the result is the order of authors contained in the document (e.g., [1, 2, 1] for a two-author document), where the first author is "1", the second author appearing in the document is referred to as "2", etc. Furthermore, we provide the total number of authors and the StackExchange site the texts were extracted from (i.e., the topic). The result for task 3 (key "changes") is structured like the results array for task 1; however, for task 3, the changes array holds a binary value for each pair of consecutive sentences, and there may be multiple style changes in the document. A minimal parsing sketch follows the example below.

      An example of a multi-author document with a style change between the third and fourth paragraph (or sentence for task 3) could be described as follows (we only list the relevant key/value pairs here):

      {
      "changes": [0,0,1,...],
      "paragraph-authors": [1,1,1,2,...]
      }
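
    Since the ground truth is plain JSON, it can be inspected directly. A minimal sketch follows (the file name comes from the description above; the relationship between the two keys follows from their definitions, everything else is illustrative):

      import json

      # Load the ground truth for one problem instance.
      with open("truth-problem-1.json") as f:
          truth = json.load(f)

      authors = truth.get("paragraph-authors", [])   # task 2: author per paragraph
      changes = truth.get("changes", [])             # task 1/3: binary per consecutive pair

      # At the paragraph level, a style change is exactly a position where two
      # consecutive paragraphs have different authors.
      derived = [int(a != b) for a, b in zip(authors, authors[1:])]
      print("authors:", truth.get("authors"), "| site:", truth.get("site"))
      print("changes:", changes)
      print("derived from paragraph-authors:", derived)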

    Output Format

    To evaluate the solutions for the tasks, the results have to be stored in a single file for each of the input documents and each of the datasets. Please note that we require a solution file to be generated for each input problem for each dataset. The data structure during the evaluation phase will be similar to that in the training phase, with the exception that the ground truth files are missing.

    For each given problem problem-X.txt, your software should output the missing solution file solution-problem-X.json, containing a JSON object holding the solution to the respective task. The solution for tasks 1 and 3 is an array containing a binary value for each pair of consecutive paragraphs (task 1) or sentences (task 3). For task 2, the solution is an array containing the order of authors contained in the document (as in the truth files).

    An example solution file for tasks 1 and 3 is featured in the following (note again that for task 1, changes are captured on the paragraph level, whereas for task 3, changes are captured on the sentence level):

    {
    "changes": [0,0,1,0,0,...]
    }

    For task 2, the solution file looks as follows:

    {
    "paragraph-authors": [1,1,2,2,3,2,...]
    }
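
    A minimal sketch of writing such a solution file (the predictions are placeholders; only the file name and JSON keys follow the format above):

      import json

      def write_solution(problem_id, changes, out_dir="."):
          # solution-problem-X.json for tasks 1 and 3: one binary value per consecutive pair.
          with open(f"{out_dir}/solution-problem-{problem_id}.json", "w") as f:
              json.dump({"changes": changes}, f)

      # Hypothetical predictions for problem 1.
      write_solution(1, [0, 0, 1, 0, 0])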

  3. brumo_2025

    • huggingface.co
    Updated May 13, 2025
    Cite
    MathArena (2025). brumo_2025 [Dataset]. https://huggingface.co/datasets/MathArena/brumo_2025
    Explore at:
    Dataset updated
    May 13, 2025
    Dataset authored and provided by
    MathArena
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Homepage and repository

    Homepage: https://matharena.ai/ Repository: https://github.com/eth-sri/matharena

      Dataset Summary
    

    This dataset contains the questions from BRUMO 2025 used for the MathArena Leaderboard

      Data Fields
    

    Below one can find the description of each field in the dataset.

    • problem_idx (int): Index of the problem in the competition
    • problem (str): Full problem statement
    • answer (str): Ground-truth answer to the question
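
    Because the dataset is hosted on the Hugging Face Hub, it can presumably be loaded with the datasets library; a minimal sketch (the split name "train" is an assumption):

      from datasets import load_dataset

      ds = load_dataset("MathArena/brumo_2025", split="train")
      for row in ds.select(range(3)):
          print(row["problem_idx"], row["problem"][:60], "->", row["answer"])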

      Source Data
    

    The… See the full description on the dataset page: https://huggingface.co/datasets/MathArena/brumo_2025.

  4. ToT Dataset

    • paperswithcode.com
    Updated May 25, 2025
    Cite
    Bahare Fatemi; Mehran Kazemi; Anton Tsitsulin; Karishma Malkan; Jinyeong Yim; John Palowitch; Sungyong Seo; Jonathan Halcrow; Bryan Perozzi (2025). ToT Dataset [Dataset]. https://paperswithcode.com/dataset/tot
    Explore at:
    Dataset updated
    May 25, 2025
    Authors
    Bahare Fatemi; Mehran Kazemi; Anton Tsitsulin; Karishma Malkan; Jinyeong Yim; John Palowitch; Sungyong Seo; Jonathan Halcrow; Bryan Perozzi
    Description

    ToT is a benchmark for evaluating LLMs on temporal reasoning.

    ToT is a dataset designed to assess the temporal reasoning capabilities of AI models. It comprises two key sections:

    • ToT-semantic: Measuring the semantics and logic of time understanding.
    • ToT-arithmetic: Measuring the ability to carry out time arithmetic operations.

    Data Format

    The ToT-semantic and ToT-semantic-large datasets contain the following fields:

    • question: Contains the text of the question.
    • graph_gen_algorithm: Contains the name of the graph generator algorithm used to generate the graph.
    • question_type: Corresponds to one of the 7 question types in the dataset.
    • sorting_type: Corresponds to the sorting type applied on the facts to order them.
    • prompt: Contains the full prompt text used to evaluate LLMs on the task.
    • label: Contains the ground truth answer to the question.

    The ToT-arithmetic dataset contains the following fields:

    • question: Contains the text of the question.
    • question_type: Corresponds to one of the 7 question types in the dataset.
    • label: Contains the ground truth answer to the question.

  5. HMMT_2025

    • huggingface.co
    Updated May 11, 2025
    Cite
    FlagEval (2025). HMMT_2025 [Dataset]. https://huggingface.co/datasets/FlagEval/HMMT_2025
    Explore at:
    Dataset updated
    May 11, 2025
    Dataset authored and provided by
    FlagEval
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Dataset Summary

    This dataset comprises the questions, answers, and solutions from HMMT February 2025, all of which were extracted by OCR, converted to LaTeX, and manually verified by the FlagEval Team.

      Data Fields
    

    Below one can find the description of each field in the dataset.

    • id (str): Index of the problem in the competition
    • problem (str): Full problem statement
    • answer (str): Ground-truth answer to the question
    • solution (str): Ground-truth solution to the question… See the full description on the dataset page: https://huggingface.co/datasets/FlagEval/HMMT_2025.

  6. aime_2024_II

    • huggingface.co
    Cite
    MathArena, aime_2024_II [Dataset]. https://huggingface.co/datasets/MathArena/aime_2024_II
    Explore at:
    Dataset authored and provided by
    MathArena
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Homepage and repository

    Homepage: https://matharena.ai/ Repository: https://github.com/eth-sri/matharena

      Dataset Summary
    

    This dataset contains the questions from AIME II 2024 used for the MathArena Leaderboard

      Data Fields
    

    Below one can find the description of each field in the dataset.

    • problem_idx (int): Index of the problem in the competition
    • problem (str): Full problem statement
    • answer (str): Ground-truth answer to the question

      Source Data
    

    The… See the full description on the dataset page: https://huggingface.co/datasets/MathArena/aime_2024_II.

  7. aime_2025

    • huggingface.co
    Updated May 13, 2025
    Cite
    MathArena (2025). aime_2025 [Dataset]. https://huggingface.co/datasets/MathArena/aime_2025
    Explore at:
    Dataset updated
    May 13, 2025
    Dataset authored and provided by
    MathArena
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Homepage and repository

    Homepage: https://matharena.ai/ Repository: https://github.com/eth-sri/matharena

      Dataset Summary
    

    This dataset contains the questions from AIME 2025 used for the MathArena Leaderboard

      Data Fields
    

    Below one can find the description of each field in the dataset.

    • problem_idx (int): Index of the problem in the competition
    • problem (str): Full problem statement
    • answer (str): Ground-truth answer to the question
    • problem_type… See the full description on the dataset page: https://huggingface.co/datasets/MathArena/aime_2025.

  8. aime_2023_I

    • huggingface.co
    Cite
    MathArena, aime_2023_I [Dataset]. https://huggingface.co/datasets/MathArena/aime_2023_I
    Explore at:
    Dataset authored and provided by
    MathArena
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Homepage and repository

    Homepage: https://matharena.ai/ Repository: https://github.com/eth-sri/matharena

      Dataset Summary
    

    This dataset contains the questions from AIME 2023 I used for the MathArena Leaderboard

      Data Fields
    

    Below one can find the description of each field in the dataset.

    • problem_idx (int): Index of the problem in the competition
    • problem (str): Full problem statement
    • answer (str): Ground-truth answer to the question

      Source Data
    

    The… See the full description on the dataset page: https://huggingface.co/datasets/MathArena/aime_2023_I.

  9. hmmt_feb_2025

    • huggingface.co
    Updated May 13, 2025
    Cite
    MathArena (2025). hmmt_feb_2025 [Dataset]. https://huggingface.co/datasets/MathArena/hmmt_feb_2025
    Explore at:
    Dataset updated
    May 13, 2025
    Dataset authored and provided by
    MathArena
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Homepage and repository

    Homepage: https://matharena.ai/ Repository: https://github.com/eth-sri/matharena

      Dataset Summary
    

    This dataset contains the questions from HMMT February 2025 used for the MathArena Leaderboard

      Data Fields
    

    Below one can find the description of each field in the dataset.

    • problem_idx (int): Index of the problem in the competition
    • problem (str): Full problem statement
    • answer (str): Ground-truth answer to the question
    • problem_type… See the full description on the dataset page: https://huggingface.co/datasets/MathArena/hmmt_feb_2025.

  10. ROCO-QA

    • huggingface.co
    Updated Oct 31, 2024
    Cite
    Aditya Shourya (2024). ROCO-QA [Dataset]. https://huggingface.co/datasets/adishourya/ROCO-QA
    Explore at:
    Dataset updated
    Oct 31, 2024
    Authors
    Aditya Shourya
    License

    Apache License, v2.0https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    This split only contains the Validation and Test splits of @touvron2023. You can find the Train split here: https://huggingface.co/datasets/adishourya/ROCO-QA-Train. Question–answer pairs were generated with the following prompt:

      def generate_qapairs_img(caption):
          prompt = f"""
          Based on the following medical image captions generate short, appropriate and insightful question for the caption.
          Treat this caption as the ground truth to generate your question:
          {caption}
          """
          response =… See the full description on the dataset page: https://huggingface.co/datasets/adishourya/ROCO-QA.

  11. coding

    • huggingface.co
    Updated Apr 2, 2025
    Cite
    LiveBench (2025). coding [Dataset]. https://huggingface.co/datasets/livebench/coding
    Explore at:
    Dataset updated
    Apr 2, 2025
    Dataset authored and provided by
    LiveBench
    Description

    Dataset Card for "livebench/coding"

    LiveBench is a benchmark for LLMs designed with test set contamination and objective evaluation in mind. It has the following properties:

    LiveBench is designed to limit potential contamination by releasing new questions monthly, as well as having questions based on recently-released datasets, arXiv papers, news articles, and IMDb movie synopses. Each question has verifiable, objective ground-truth answers, allowing hard questions to be scored… See the full description on the dataset page: https://huggingface.co/datasets/livebench/coding.

  12. medical-o1-verifiable-problem

    • huggingface.co
    Updated Dec 30, 2024
    Cite
    FreedomAI (2024). medical-o1-verifiable-problem [Dataset]. https://huggingface.co/datasets/FreedomIntelligence/medical-o1-verifiable-problem
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Dec 30, 2024
    Dataset authored and provided by
    FreedomAI
    License

    Apache License, v2.0https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Introduction

    This dataset features open-ended medical problems designed to improve LLMs' medical reasoning. Each entry includes an open-ended question and a ground-truth answer based on challenging medical exams. The verifiable answers enable checking LLM outputs and refining their reasoning processes. For details, see our paper and GitHub repository.
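
    As a rough illustration of what a verifiable answer enables (the matching rule below is an assumption, not the authors' method), model outputs can be compared against the ground truth with a normalised exact-match check:

      def normalize(text):
          # Lowercase and collapse whitespace for a crude exact-match comparison.
          return " ".join(text.lower().split())

      def is_correct(model_answer, ground_truth):
          # Hypothetical check; the actual verification pipeline may be more elaborate.
          return normalize(model_answer) == normalize(ground_truth)

      print(is_correct("Hashimoto thyroiditis", "  hashimoto THYROIDITIS "))  # True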

      Citation
    

    If you find our data useful, please consider citing our work! @misc{chen2024huatuogpto1medicalcomplexreasoning… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/medical-o1-verifiable-problem.
