Description
The Forest Fire Visual Question Answering (FF-VQA) Dataset is a multimodal dataset designed to train and evaluate AI models capable of interpreting visual information about forest fires and responding to natural language questions. It combines images, audio recordings, and tabular data related to forest fires with corresponding human-annotated questions and answers to enable advanced reasoning, visual understanding, and decision support in environmental monitoring systems. The dataset includes question–answer pairs in two languages: Sinhala and English, supporting multilingual AI research and cross-lingual model development. This dataset aims to advance research in visual reasoning, environmental AI, disaster response automation, and remote sensing analysis. It provides rich, high-quality annotations for supervised training and benchmarking in both computer vision and natural language understanding tasks.
Data Collection Method
Images, audio, and tabular data were collected from publicly available wildfire imagery databases and verified open-source repositories, including the Kaggle Wildfire Archives. Visual Question Answering (VQA) pairs were created, annotated, and validated by domain experts in forest fire research (including professors and forest department officers) to ensure accuracy, contextual relevance, and domain consistency. The question–answer pairs were reviewed for grammatical correctness and linguistic clarity, then categorized into five thematic areas: situation awareness, safety and risk assessment, incident analysis and restoration, prevention and continuous learning, and environmental assessment. Each pair was labeled in both Sinhala and English to support bilingual training and evaluation for multilingual VQA models.
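As a rough illustration of how one record from such a multimodal collection might be represented for supervised training, here is a minimal sketch of a hypothetical record class. The field names (image_path, audio_path, tabular_features, bilingual question/answer dictionaries) and the validation helper are assumptions chosen for illustration; the category labels mirror the five thematic areas listed above, but none of this is taken from the dataset's published schema.

```python
# Hypothetical FF-VQA record sketch: one image, one audio clip, tabular
# measurements, a bilingual question-answer pair, and a thematic category.
# Field names and structure are illustrative assumptions, not the real schema.
from dataclasses import dataclass
from typing import Dict

CATEGORIES = [
    "situation awareness",
    "safety and risk assessment",
    "incident analysis and restoration",
    "prevention and continuous learning",
    "environmental assessment",
]

@dataclass
class FFVQARecord:
    image_path: str                      # path to a forest-fire image
    audio_path: str                      # path to an associated audio recording
    tabular_features: Dict[str, float]   # e.g. sensor or weather readings
    question: Dict[str, str]             # {"en": "...", "si": "..."} bilingual question
    answer: Dict[str, str]               # {"en": "...", "si": "..."} bilingual answer
    category: str                        # one of CATEGORIES

def validate(record: FFVQARecord) -> None:
    """Basic consistency checks for a single record."""
    assert record.category in CATEGORIES, f"unknown category: {record.category}"
    assert set(record.question) == {"en", "si"}, "question must be bilingual"
    assert set(record.answer) == {"en", "si"}, "answer must be bilingual"
```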
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background
Recently, the Turing test has been used to investigate whether machines have intelligence similar to that of humans. Our study aimed to assess the ability of an artificial intelligence (AI) system for spine tumor detection using the Turing test.
Methods
Our retrospective study data included 12,179 images from 321 patients for developing the AI detection system and 6,635 images from 187 patients for the Turing test. We utilized a deep learning-based tumor detection system with a Faster R-CNN architecture, which generates region proposals with a Region Proposal Network in the first stage and refines the position and size of the bounding box of the lesion area in the second stage. Each choice question featured four bounding boxes enclosing an identical tumor: three were detected by the proposed deep learning model, and one was annotated by a doctor. The questions were shown to six doctors as respondents. If a respondent did not correctly identify the human-annotated image, the answer was counted as a misclassification. If all misclassification rates were >30%, the respondents were considered unable to distinguish the AI-detected tumor from the human-annotated one, indicating that the AI system passed the Turing test.
Results
The average misclassification rates in the Turing test were 51.2% (95% CI: 45.7%–57.5%) in the axial view (maximum 62%, minimum 44%) and 44.5% (95% CI: 38.2%–51.8%) in the sagittal view (maximum 59%, minimum 36%). The misclassification rates of all six respondents were >30%; therefore, our AI system passed the Turing test.
Conclusion
Our proposed intelligent spine tumor detection system has a detection ability similar to that of the annotating doctors and may be an efficient tool to assist radiologists or orthopedists in primary spine tumor detection.
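To make the pass criterion concrete, the sketch below computes per-respondent misclassification rates and applies the >30% threshold described in the abstract. The function names and the toy response data are hypothetical illustrations, not the study's actual evaluation code; the real study used six respondents and thousands of images.

```python
# Sketch of the Turing-test pass criterion: a respondent's misclassification
# rate is the fraction of choice questions where they failed to pick the
# human-annotated bounding box, and the AI system "passes" only if every
# respondent's rate exceeds 30%. All data here is toy data for illustration.
from typing import Dict, List

def misclassification_rate(correct_flags: List[bool]) -> float:
    """Fraction of questions where the respondent did NOT identify the
    human-annotated box (False = misclassification)."""
    return sum(1 for ok in correct_flags if not ok) / len(correct_flags)

def passes_turing_test(responses: Dict[str, List[bool]], threshold: float = 0.30) -> bool:
    """AI passes if every respondent's misclassification rate exceeds the threshold."""
    return all(misclassification_rate(flags) > threshold for flags in responses.values())

# Toy example: two respondents, five questions each (True = picked the human box).
toy_responses = {
    "respondent_1": [False, True, False, False, True],  # 60% misclassified
    "respondent_2": [False, False, True, True, False],  # 60% misclassified
}
print(passes_turing_test(toy_responses))  # True under these toy answers
```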
2,504 Images – Chinese Handwriting OCR Data. The writing environments include A4 paper, square paper, lined paper, whiteboard, color notes, answer sheets, etc. The writing contents include poetry, prose, store activity notices, greetings, wish lists, excerpts, compositions, notes, etc. The data diversity covers multiple writing papers, multiple fonts, multiple writing contents, and multiple photographic angles. The collection angles are a looking-up angle and an eye-level angle. For annotation, line-level/column-level quadrilateral bounding boxes and text transcriptions were provided. The dataset can be used for tasks such as Chinese handwriting OCR.
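For readers unfamiliar with quadrilateral text annotation, the sketch below shows what a single line-level label might look like. The JSON field names (points, transcription) and the sample content are assumptions chosen for illustration, not the dataset's documented format.

```python
# Hypothetical line-level annotation for handwriting OCR: a quadrilateral
# given as four (x, y) corner points plus the transcribed text.
# Field names and values are illustrative, not the dataset's actual schema.
import json

annotation = {
    "image": "sample_0001.jpg",
    "lines": [
        {
            # four corners of the quadrilateral, clockwise from top-left
            "points": [[112, 80], [640, 86], [638, 142], [110, 136]],
            "transcription": "春眠不觉晓，处处闻啼鸟。",  # example handwritten line
        }
    ],
}

print(json.dumps(annotation, ensure_ascii=False, indent=2))
```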
Dataset Overview
This dataset was created from 3,088 Vietnamese Sketches 🇻🇳 images from books. Each image has been analyzed and annotated using advanced Visual Question Answering (VQA) techniques to produce a comprehensive dataset. It contains a set of 18,000 detailed descriptions and query-based questions and answers generated by the Gemini 1.5 Flash model, currently Google's leading model on the WildVision Arena Leaderboard. This results in a richly annotated dataset, ideal for… See the full description on the dataset page: https://huggingface.co/datasets/5CD-AI/Viet-Sketches-VQA.
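Since the dataset is hosted on the Hugging Face Hub, it can presumably be loaded with the datasets library as sketched below. The repository ID comes from the URL above; the split name and column layout are assumptions, so consult the dataset page for the actual schema before relying on them.

```python
# Minimal sketch of loading the dataset from the Hugging Face Hub with the
# `datasets` library. The split name "train" is an assumption; check the
# dataset page for the actual splits and column names.
from datasets import load_dataset

ds = load_dataset("5CD-AI/Viet-Sketches-VQA", split="train")  # split name assumed
print(ds)              # prints the features and number of rows
print(ds[0].keys())    # inspect the fields of the first record
```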