Update: OCT-2023
Added v2 with the recent SoTA SwinV2 classifier for both soft- and hard-label visual_caption_cosine_score_v2 with person label (0.2, 0.3, and 0.4)
Introduction
Modern image captioning relies heavily on extracting knowledge from images, such as objects, to capture the concept of a static story in the image. In this paper, we propose a textual visual context dataset for captioning, where the publicly available COCO Captions dataset (Lin et al., 2014) has been… See the full description on the dataset page: https://huggingface.co/datasets/AhmedSSabir/Textual-Image-Caption-Dataset.
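To make the visual_caption_cosine_score idea above more concrete, here is a minimal hedged sketch: it classifies an image with a SwinV2 checkpoint from the Hugging Face Hub and scores the cosine similarity between the predicted object label and a caption with a sentence encoder. This is an illustration only, not the dataset authors' exact pipeline; the checkpoint names and the 0.2 threshold are assumptions.

```python
# Illustrative sketch of a visual-context/caption cosine score.
# Model names and the threshold are assumptions, not the authors' exact setup.
from PIL import Image
from transformers import AutoImageProcessor, Swinv2ForImageClassification
from sentence_transformers import SentenceTransformer, util

processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
classifier = Swinv2ForImageClassification.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def visual_caption_cosine_score(image_path: str, caption: str) -> float:
    """Cosine similarity between the top predicted object label and the caption."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    logits = classifier(**inputs).logits
    label = classifier.config.id2label[logits.argmax(-1).item()]
    embs = encoder.encode([label, caption], convert_to_tensor=True)
    return util.cos_sim(embs[0], embs[1]).item()

score = visual_caption_cosine_score("example.jpg", "a man riding a horse on the beach")
keep = score >= 0.2  # hypothetical hard-label threshold
```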
https://creativecommons.org/publicdomain/zero/1.0/
By Arto (from Hugging Face) [source]
The train.csv file contains a list of image filenames, captions, and the actual images used for training the image captioning models. Similarly, the test.csv file includes a separate set of image filenames, captions, and images specifically designated for testing the accuracy and performance of the trained models.
Furthermore, the valid.csv file contains a unique collection of image filenames with their respective captions and images that serve as an independent validation set to evaluate the models' capabilities accurately.
Each entry in these CSV files includes a filename string that indicates the name or identifier of an image file stored in another location or directory. Additionally, each entry provides a list (or multiple rows) of strings representing written descriptions or captions of the respective image.
Given this structure, the dataset can be immensely valuable to researchers, developers, and enthusiasts developing innovative computer vision algorithms such as automatic text generation based on visual content analysis, whether that means training machine learning models to automatically generate relevant captions for new, unseen images or evaluating existing systems' performance against diverse criteria.
Stay updated with cutting-edge research trends by leveraging this comprehensive dataset containing not only captions but also corresponding images across different sets specifically designed to cater to varied purposes within computer vision tasks.
Overview of the Dataset
The dataset consists of three primary files: train.csv, test.csv, and valid.csv. These files contain information about image filenames and their respective captions. Each file includes multiple captions for each image to support diverse training techniques.
Understanding the Files
- train.csv: This file contains filenames (filename column) and their corresponding captions (captions column) for training your image captioning model.
- test.csv: This file contains the test set, which has the same structure as train.csv. Its purpose is to evaluate your trained models on unseen data.
- valid.csv: This validation set provides images with their respective filenames (filename) and captions (captions). It allows you to fine-tune your models based on performance during evaluation.
Getting Started
To begin utilizing this dataset effectively, follow these steps:
- Extract the zip file containing all relevant data files onto your local machine or cloud environment.
- Familiarize yourself with each CSV file's structure: train.csv, test.csv, and valid.csv. Understand how the filename(s) (filename) correspond with the respective caption(s) (captions).
- Depending on your specific use case or research goals, determine which portion(s) of the dataset you wish to work with (e.g., only train or train+validation).
- Load the dataset into your preferred programming environment or machine learning framework, ensuring you have the necessary dependencies installed.
- Preprocess the dataset as needed, such as resizing images to a specific dimension or encoding captions for model training purposes.
- Split the data into training, validation, and test sets according to your experimental design requirements.
- Use appropriate algorithms and techniques to train your image captioning models on the provided data.
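As a starting point for the loading and preprocessing steps above, the minimal sketch below reads the CSV files with pandas and builds image/caption training pairs. The filename and captions columns come from the file descriptions above, while the images/ directory, target resolution, and caption parsing are assumptions you may need to adapt.

```python
# Minimal loading/preprocessing sketch. Assumes the CSVs sit next to an
# images/ directory and use the filename / captions columns described above.
import ast
import pandas as pd
from PIL import Image

IMAGE_DIR = "images"          # assumed location of the extracted image files
IMAGE_SIZE = (224, 224)       # assumed target resolution for the captioning model

def load_split(csv_path: str) -> pd.DataFrame:
    df = pd.read_csv(csv_path)
    # Captions may be stored as a stringified list; parse them if so.
    df["captions"] = df["captions"].apply(
        lambda c: ast.literal_eval(c) if isinstance(c, str) and c.startswith("[") else [c]
    )
    return df

def load_image(filename: str) -> Image.Image:
    return Image.open(f"{IMAGE_DIR}/{filename}").convert("RGB").resize(IMAGE_SIZE)

train_df = load_split("train.csv")
valid_df = load_split("valid.csv")
test_df = load_split("test.csv")

# One (image, caption) training pair per caption string.
pairs = [
    (load_image(row.filename), caption)
    for row in train_df.itertuples()
    for caption in row.captions
]
```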
Enhancing Model Performance
To optimize model performance using this dataset, consider these tips:
- Explore different architectures and pre-trained models specifically designed for image captioning tasks.
- Experiment with various natural language processing techniques.
- Image Captioning: This dataset can be used to train and evaluate image captioning models. The captions can be used as target labels for training, and the images can be paired with the captions to generate descriptive captions for test images.
- Image Retrieval: The dataset can be used for image retrieval tasks where given a query caption, the model needs to retrieve the images that best match the description. This can be useful in applications such as content-based image search.
- Natural Language Processing: The dataset can also be used for natural language processing tasks such as text generation or machine translation. The captions in this dataset are descriptive ...
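To make the image-retrieval use case above concrete, here is a small hedged sketch that ranks candidate images against a query caption with a CLIP model from the Hugging Face Hub; the checkpoint name and the candidate file paths are illustrative assumptions, not part of this dataset.

```python
# Illustrative caption-to-image retrieval with CLIP. The checkpoint and the
# candidate image paths are assumptions for demonstration purposes only.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

query = "a dog catching a frisbee in a park"
candidate_paths = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]  # hypothetical files
images = [Image.open(p).convert("RGB") for p in candidate_paths]

inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text: similarity of the query caption to each candidate image.
ranking = outputs.logits_per_text.softmax(dim=-1).squeeze(0).argsort(descending=True)
best_match = candidate_paths[ranking[0].item()]
```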
Dataset Card for "my-image-captioning-dataset"
More Information needed
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This dataset consists of 2,000 images taken from PEER Hub Image-Net and a caption file in text format.
The captions use a deliberately limited vocabulary set designed to improve results while still conveying plenty of information, with a caption length of 18 words.
https://www.futurebeeai.com/policies/ai-data-license-agreement
Welcome to the Gujarati Language Image Captioning Dataset! It is a collection of images with associated text captions intended to facilitate the development of AI models capable of generating high-quality captions for images. This dataset is meticulously crafted to support research and innovation in computer vision and natural language processing.
This dataset features over 5,000 high-resolution images sourced from diverse categories and scenes. Each image is meticulously selected to encompass a wide array of contexts, objects, and environments, ensuring comprehensive coverage for training robust image captioning models.
Each image in the dataset is paired with a high-quality descriptive caption. These captions are carefully crafted to provide detailed and contextually rich descriptions of the images, enhancing the dataset's utility for training sophisticated image captioning algorithms.
Each image-caption pair is accompanied by comprehensive metadata to facilitate informed decision-making in model development:
The Image Captioning Dataset serves various applications across different domains:
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
🌟 Unlock the potential of advanced computer vision tasks with our comprehensive dataset comprising 15,000 high-quality images. Whether you're delving into segmentation, object detection, or image captioning, our dataset offers a diverse array of visual data to fuel your machine learning models.
🔍 Our dataset is meticulously curated to encompass a wide range of streams, ensuring versatility and applicability across various domains. From natural landscapes to urban environments, from wildlife to everyday objects, our collection captures the richness and diversity of visual content.
📊 Dataset Overview:
| Total Images | Training Set (70%) | Testing Set (30%) |
|---|---|---|
| 15,000 | 10,500 | 4,500 |
🔢 Image Details:
Embark on your computer vision journey and leverage our dataset to develop cutting-edge algorithms, advance research, and push the boundaries of what's possible in visual recognition tasks. Join us in shaping the future of AI-powered image analysis.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
tuy20212521/my-image-captioning-dataset dataset hosted on Hugging Face and contributed by the HF Datasets community
https://www.futurebeeai.com/policies/ai-data-license-agreement
Welcome to the Danish Language Image Captioning Dataset! It is a collection of images with associated text captions intended to facilitate the development of AI models capable of generating high-quality captions for images. This dataset is meticulously crafted to support research and innovation in computer vision and natural language processing.
This dataset features over 5,000 high-resolution images sourced from diverse categories and scenes. Each image is meticulously selected to encompass a wide array of contexts, objects, and environments, ensuring comprehensive coverage for training robust image captioning models.
Each image in the dataset is paired with a high-quality descriptive caption. These captions are carefully crafted to provide detailed and contextually rich descriptions of the images, enhancing the dataset's utility for training sophisticated image captioning algorithms.
Each image-caption pair is accompanied by comprehensive metadata to facilitate informed decision-making in model development:
The Image Captioning Dataset serves various applications across different domains:
vipulmaheshwari/GTA-Image-Captioning-Dataset dataset hosted on Hugging Face and contributed by the HF Datasets community
https://www.futurebeeai.com/policies/ai-data-license-agreement
Welcome to the French Language Image Captioning Dataset! It is a collection of images with associated text captions intended to facilitate the development of AI models capable of generating high-quality captions for images. This dataset is meticulously crafted to support research and innovation in computer vision and natural language processing.
This dataset features over 5,000 high-resolution images sourced from diverse categories and scenes. Each image is meticulously selected to encompass a wide array of contexts, objects, and environments, ensuring comprehensive coverage for training robust image captioning models.
Each image in the dataset is paired with a high-quality descriptive caption. These captions are carefully crafted to provide detailed and contextually rich descriptions of the images, enhancing the dataset's utility for training sophisticated image captioning algorithms.
Each image-caption pair is accompanied by comprehensive metadata to facilitate informed decision-making in model development:
The Image Captioning Dataset serves various applications across different domains:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The SwaFlickr8k dataset is an extension of the well-known Flickr8k dataset, specifically designed for image captioning tasks. It includes a collection of images and corresponding captions written in Swahili. With 8,091 unique images and 40,455 captions, this dataset provides a valuable resource for research and development in image understanding and language processing, particularly in the context of the Swahili language.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Remote sensing imagery offers intricate and nuanced data, emphasizing the need for a profound understanding of the relationships among varied geographical elements and events. In this study, we explore the transitions from the image domain to the text domain by employing four state-of-the-art image captioning algorithms, i.e., BLIP, mPLUG, OFA, and X-VLM. Specifically, we investigate 1) the stability of these image captioning algorithms for remote sensing image captioning, 2) the preservation of similarity between images and their corresponding captions, and 3) the characteristics of their caption embedding spaces. The results suggest a moderate consistency across generated captions from different image captioning models, with observable variations contingent upon the urban entities presented. In addition, a dynamic relationship emerges between image space and the corresponding caption space, evidenced by their fluctuated correlation coefficient. Most importantly, patterns within the caption embedding space align with the observed land cover and land use in the image patches, reaffirming the potential of our pilot work as an impactful analytical approach in future remote sensing analytics. We advocate that integrating image captioning techniques with remote sensing imagery paves the way for an innovative data extraction and interpretation approach with diverse applications. This dataset contains the data and code to reproduce this study.
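As a rough illustration of the image-to-text transition described above (not the study's own code, which ships with the dataset), the sketch below captions a remote sensing patch with a BLIP checkpoint from the Hugging Face Hub and embeds the caption for the kind of embedding-space analysis mentioned. The checkpoint names and the file path are assumptions.

```python
# Hedged sketch: caption a remote sensing patch with BLIP and embed the caption.
# Checkpoints and the image path are illustrative; the study's code is in the dataset.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from sentence_transformers import SentenceTransformer

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
embedder = SentenceTransformer("all-MiniLM-L6-v2")

patch = Image.open("remote_sensing_patch.png").convert("RGB")  # hypothetical patch
inputs = processor(images=patch, return_tensors="pt")
out = captioner.generate(**inputs, max_new_tokens=30)
caption = processor.decode(out[0], skip_special_tokens=True)

# Caption embedding for downstream similarity / embedding-space analysis.
caption_embedding = embedder.encode(caption)
```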
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An image captioning dataset including images of humans performing various activities. The images depict the following activities: walking, running, sleeping, swimming, sitting, jumping, riding, climbing, drinking, and reading.
This dataset was created by Nabi Nabiyev
It contains the following files:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Here are a few use cases for this project:
Urban Planning and Development: This computer vision model could be used by urban planning professionals or architects to understand and analyze sidewalk activity. With its ability to detect various classes such as pedestrians, bicycles, cars, and trees, the software can provide insights into how urban spaces are used and help in designing more efficient and safe environments.
Smart City Applications: The model can be used in Smart City initiatives to automatically analyze and monitor public spaces. For example, monitoring the usage pattern of benches, bikes, or bus stops for intelligent management or detecting any unusual activities on streets, like obstructions due to fallen trees or incorrectly parked vehicles.
Traffic Management and Control: Traffic control systems can use this model to monitor and control traffic flow based on real-time data related to cars, motorbikes, bicycles, buses, and pedestrian movements detected on the zebra crossing and sidewalks.
Accessibility Assessment: NGOs or government agencies focusing on public accessibility and pedestrian safety can use this model to analyze cities' sidewalks. The model can detect elements such as benches, trash cans, plant pots, posts, and bollards, which are essential for assessing sidewalk accessibility, especially for disabled or elderly citizens.
Augmented Reality (AR) Apps: AR applications developers can use this computer vision model to create more immersive and realistic AR experiences within urban environments. Recognizing real-world objects like trees, people, vehicles, benches, and more could help anchor digital enhancements in physical spaces.
https://www.futurebeeai.com/policies/ai-data-license-agreement
Welcome to the Polish Language Image Captioning Dataset! It is a collection of images with associated text captions intended to facilitate the development of AI models capable of generating high-quality captions for images. This dataset is meticulously crafted to support research and innovation in computer vision and natural language processing.
This dataset features over 5,000 high-resolution images sourced from diverse categories and scenes. Each image is meticulously selected to encompass a wide array of contexts, objects, and environments, ensuring comprehensive coverage for training robust image captioning models.
Each image in the dataset is paired with a high-quality descriptive caption. These captions are carefully crafted to provide detailed and contextually rich descriptions of the images, enhancing the dataset's utility for training sophisticated image captioning algorithms.
Each image-caption pair is accompanied by comprehensive metadata to facilitate informed decision-making in model development:
The Image Captioning Dataset serves various applications across different domains:
Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
SafeVision: IMAGE CAPTIONING is a dataset for vision-language (multimodal) tasks; it contains safety annotations for 314 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
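If you go the download route mentioned above, a minimal hedged sketch with the roboflow Python package might look like the following; the API key, workspace, project slug, version number, and export format are placeholders you would replace with your own values, not details taken from this card.

```python
# Hedged sketch using the roboflow package (pip install roboflow).
# API key, workspace, project slug, version, and export format are placeholders.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("safevision-image-captioning")
dataset = project.version(1).download("coco")  # downloads to a local folder

print(dataset.location)  # path to the downloaded files
```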
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 2.0 (CC BY 2.0): https://creativecommons.org/licenses/by/2.0/
License information was derived automatically
A dataset of 30,000 images with 5 captions per image. The dataset was created by researchers at Stanford University and is used for research in machine learning and natural language processing tasks such as image captioning and visual question answering.