VQG is a collection of datasets for visual question generation. The questions were collected by crowdsourcing the task on Amazon Mechanical Turk (AMT); the authors provide the prompt and the specific instructions for all crowdsourcing tasks in the paper's supplementary material. The prompt proved effective at eliciting non-literal questions. Images were taken from the MSCOCO dataset.
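A minimal sketch of reading such a release in Python, assuming a tab-separated layout with one image URL and several question columns per row (the column names here are hypothetical; the actual files may differ):

import csv

def load_vqg(path):
    # Assumed columns: image_url, question_1 ... question_n (hypothetical names).
    records = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            questions = [v for k, v in row.items() if k.startswith("question")]
            records.append({"image_url": row["image_url"], "questions": questions})
    return records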
battleMaster/VQG-subset-20k: a dataset hosted on Hugging Face and contributed by the HF Datasets community.
bagdad0101/vqg-bangla-images: a dataset hosted on Hugging Face and contributed by the HF Datasets community.
VQA1.0 is a dataset used to derive VQG data. It consists of 82,783 training images, 40,504 validation images, and 81,434 test images, with 3 questions associated with each image.
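Since each image carries 3 questions, the split sizes imply the question counts below; this is a quick derived check, not an official statistic:

splits = {"train": 82_783, "val": 40_504, "test": 81_434}
for name, n_images in splits.items():
    # 3 questions per image, as stated above.
    print(f"{name}: {n_images} images -> {n_images * 3} questions")
# train: 82783 images -> 248349 questions
# val: 40504 images -> 121512 questions
# test: 81434 images -> 244302 questions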
Identifying lesions in colonoscopy images is one of the best-known applications of artificial intelligence in medicine. To date, research has focused on single-image or video analysis. This task aims to bring a new aspect to the field by adding multiple modalities. Its primary focus is answering and generating questions: by combining text and image data, the analysis output should become easier for medical experts to use. The task has three subtasks.
In the visual question answering (VQA) subtask, participants must combine the image with text to answer questions. In the visual question generation (VQG) subtask, participants are asked to generate text questions from a given image and answer. Example questions for VQA and VQG: How many polyps are in the image? Are there any polyps in the image? What disease is visible in the image? The third subtask is visual location question answering (VLQA), where participants receive an image and a question and must answer it by providing a segmentation mask for the image. Example questions: Where in the image is the polyp? Where in the image is the normal and the diseased part? What part of the image shows normal mucosa?
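The task description above does not say how mask answers are scored; intersection-over-union (IoU) is a common choice for comparing a predicted mask against a ground-truth mask, so the sketch below uses it purely as an assumption:

import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    # IoU between two binary segmentation masks of the same shape.
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    intersection = np.logical_and(pred, gt).sum()
    return float(intersection / union) if union else 1.0

# Toy example: the prediction covers 2 of the 3 ground-truth polyp pixels.
gt = np.zeros((4, 4), dtype=bool)
gt[1, 1:4] = True    # ground-truth polyp region
pred = np.zeros((4, 4), dtype=bool)
pred[1, 2:4] = True  # predicted polyp region
print(mask_iou(pred, gt))  # 0.666...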