License: Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was created through a survey in which 267 UG and PG students of Manipal Institute of Technology participated and annotated 1000 images with a sentiment score on a 7-point scale. Each image was presented to at least three annotators. After collecting all the annotations, we took the majority vote of the three scores for each image; that is, an image annotation is considered valid only when at least two of the three annotators agree on the exact label (out of 7 labels). The dataset uses the following sentiment label map: 1=Depressed, 2=Very Sad, 3=Sad, 4=Neutral, 5=Happy, 6=Very Happy, 7=Excited.
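A minimal sketch of this validity rule, assuming each image's annotations arrive as a list of three labels (the function name is hypothetical):

```python
from collections import Counter

def majority_label(annotations):
    """Return the majority label if at least two of the three
    annotators agree on the exact label, else None (invalid)."""
    label, count = Counter(annotations).most_common(1)[0]
    return label if count >= 2 else None
```

For example, `majority_label([5, 5, 6])` yields 5, while `majority_label([3, 4, 5])` yields None and the image would be discarded.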
License: CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
This dataset provides a collection of images with their corresponding texts and sentiment labels, making it a multi-modal sentiment analysis dataset.
The dataset contains images of 100 different classes of animals and objects, including sharks, birds, lizards, spiders, and more.
This dataset can be used for various computer vision and natural language processing tasks, such as image classification, sentiment analysis, and image captioning.
License: Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
This is an image and text dataset for sentiment analysis.
License: CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
Explore our unique Multimodal Sentiment Analysis Dataset, featuring high-quality images and corresponding text descriptions with sentiment labels.
License: CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
Our dataset comprises 1,000 synthetic tweets generated with Python and stored in a CSV file. The random module was used to generate random IDs and text, the faker module to generate random user names and dates, and the textblob module to assign a sentiment to each tweet.
This systematic approach ensures that the dataset is well-balanced and represents different types of tweets, user behavior, and sentiment. It is essential to have a balanced dataset to ensure that the analysis and visualization of the dataset are accurate and reliable. By generating tweets with a range of sentiments, we have created a diverse dataset that can be used to analyze and visualize sentiment trends and patterns.
In addition to generating the tweets, we have also prepared a visual representation of the dataset. This visualization provides an overview of its key features, such as the frequency distribution of the sentiment categories, the distribution of tweets over time, and the user names associated with the tweets. It will aid in the initial exploration of the dataset and help identify any patterns or trends that may be present.
Natural Language Processing, Machine Learning Algorithm, Deep Learning
Jannatul Ferdoshi
Institutions: BRAC University
Image Source: Twitter Sentiment Analysis Using Python, GeeksforGeeks | lacienciadelcafe.com.ar
License: Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
This repository was created for my Master's thesis in Computational Intelligence and Internet of Things at the University of Córdoba, Spain. The purpose of this repository is to store the datasets found that were used in some of the studies that served as research material for this Master's thesis. Also, the datasets used in the experimental part of this work are included.
Below are the datasets specified, along with the details of their references, authors, and download sources.
----------- STS-Gold Dataset ----------------
The dataset consists of 2026 tweets. The file consists of 3 columns: id, polarity, and tweet, denoting the unique id, the polarity of the text, and the tweet text, respectively.
Reference: Saif, H., Fernandez, M., He, Y., & Alani, H. (2013). Evaluation datasets for Twitter sentiment analysis: a survey and a new dataset, the STS-Gold.
File name: sts_gold_tweet.csv
----------- Amazon Sales Dataset ----------------
This dataset contains ratings and reviews for more than 1,000 Amazon products, as listed on the official Amazon website. The data was scraped from the official Amazon website in January 2023.
Owner: Karkavelraja J., Postgraduate student at Puducherry Technological University (Puducherry, Puducherry, India)
Features:
License: CC BY-NC-SA 4.0
File name: amazon.csv
----------- Rotten Tomatoes Reviews Dataset ----------------
This rating inference dataset is a sentiment classification dataset containing 5,331 positive and 5,331 negative processed sentences from Rotten Tomatoes movie reviews. On average, these reviews consist of 21 words. The first 5,331 rows contain only negative samples and the last 5,331 rows contain only positive samples, so the data should be shuffled before usage.
This data is collected from https://www.cs.cornell.edu/people/pabo/movie-review-data/ as a txt file and converted into a csv file. The file consists of 2 columns: reviews and labels (1 for fresh (good) and 0 for rotten (bad)).
Reference: Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 115–124, Ann Arbor, Michigan, June 2005. Association for Computational Linguistics
File name: data_rt.csv
----------- Preprocessed Dataset Sentiment Analysis ----------------
Preprocessed Amazon product review data for the Gen3 Echo Dot (Alexa), scraped entirely from amazon.in.
Stemmed and lemmatized using nltk.
Sentiment labels are generated using TextBlob polarity scores.
The file consists of 4 columns: index, review (stemmed and lemmatized review using nltk), polarity (score) and division (categorical label generated using polarity score).
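The division column is described as a categorical label generated from the polarity score; the exact thresholds are not documented, so the sign-based cut-offs below are an illustrative assumption:

```python
def polarity_to_division(polarity):
    """Map a TextBlob polarity score in [-1, 1] to a categorical label.
    The thresholds here are illustrative; the dataset does not state
    the exact cut-offs used to build the division column."""
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"
```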
DOI: 10.34740/kaggle/dsv/3877817
Citation: @misc{pradeesh arumadi_2022, title={Preprocessed Dataset Sentiment Analysis}, url={https://www.kaggle.com/dsv/3877817}, DOI={10.34740/KAGGLE/DSV/3877817}, publisher={Kaggle}, author={Pradeesh Arumadi}, year={2022} }
This dataset was used in the experimental phase of my research.
File name: EcoPreprocessed.csv
----------- Amazon Earphones Reviews ----------------
This dataset consists of 9,930 Amazon reviews and star ratings for the 10 latest (as of mid-2019) Bluetooth earphone devices, intended for learning how to train machine learning models for sentiment analysis.
This dataset was employed in the experimental phase of my research. To align it with the objectives of my study, certain reviews were excluded from the original dataset, and an additional column was incorporated into this dataset.
The file consists of 5 columns: ReviewTitle, ReviewBody, ReviewStar, Product and division (manually added - categorical label generated using ReviewStar score)
License: U.S. Government Works
Source: www.amazon.in
File name (original): AllProductReviews.csv (contains 14337 reviews)
File name (edited - used for my research) : AllProductReviews2.csv (contains 9930 reviews)
----------- Amazon Musical Instruments Reviews ----------------
This dataset contains 7137 comments/reviews of different musical instruments coming from Amazon.
This dataset was employed in the experimental phase of my research. To align it with the objectives of my study, certain reviews were excluded from the original dataset, and an additional column was incorporated into this dataset.
The file consists of 10 columns: reviewerID, asin (ID of the product), reviewerName, helpful (helpfulness rating of the review), reviewText, overall (rating of the product), summary (summary of the review), unixReviewTime (time of the review in Unix time), reviewTime (raw time of the review), and division (manually added categorical label generated using the overall score).
Source: http://jmcauley.ucsd.edu/data/amazon/
File name (original): Musical_instruments_reviews.csv (contains 10261 reviews)
File name (edited - used for my research) : Musical_instruments_reviews2.csv (contains 7137 reviews)
License: GNU General Public License v2.0 http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
Explore the sentiments behind internet memes with this diverse dataset of 6992 meme images. The set is aimed at tasks like sentiment classification and majority voting using any six classifiers of your choice (three for images, three for text) from the sklearn library.
Outputs should include confusion matrix, accuracy, recall, precision, and F1-measure, providing a comprehensive overview of classifier performance. Ideal for those interested in multimodal data, social media analysis, NLP, image/text classification, text mining, machine learning, deep learning, and sentiment analysis.
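The majority-voting step over the six classifiers' predictions can be sketched with the standard library (the classifiers themselves, along with the confusion matrix, precision, recall, and F1-measure, would come from sklearn as the task suggests; the helper names are illustrative):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier prediction lists by majority vote.
    `predictions` is a list of equal-length label sequences, one per
    classifier (e.g. three image models and three text models)."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

def accuracy(y_true, y_pred):
    """Fraction of samples where the voted label matches the truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

Ties are broken by `Counter.most_common` order; with an even number of voters you may prefer an explicit tie-breaking rule.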
License: CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
The dataset consists of sentiments shared on COVID-19 between November 1, 2021 and January 31, 2022. It can be used to analyze sentiments shared on COVID-19 and as input to other machine learning algorithms.
License: Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
The ColorEmoNet dataset has been constructed using foundational concepts from colour theory to explore the relationship between colours and emotions.
License: Apache License 2.0 https://www.apache.org/licenses/LICENSE-2.0
The Movie Sentiment and Rating Images Dataset is a comprehensive collection of images representing movie posters, accompanied by sentiment labels and user ratings. The dataset is designed to facilitate research and exploration in the domain of sentiment analysis and rating prediction based on visual content, particularly movie poster images.
Key Features:
Images:
The dataset includes approximately 33,000 high-resolution movie poster images. Each image provides a visual representation of a movie, typically derived from its promotional material.

Sentiment Labels:
Sentiment labels are assigned to each movie poster, reflecting the emotional tone or sentiment conveyed by the image. Sentiment labels may include categories such as positive, negative, and neutral, or a more granular set of emotions.

User Ratings:
User ratings are associated with each movie in the dataset, indicating the numeric evaluation given by viewers. Ratings may follow a scale (e.g., 1 to 5 stars) and provide insights into the perceived quality or popularity of the movie.

Potential Use Cases:

Sentiment Analysis:
Researchers and practitioners can leverage the dataset for sentiment analysis tasks, training models to predict sentiment from movie poster images.

Rating Prediction:
The dataset enables the development and evaluation of models for predicting user ratings from visual content, offering insights into viewer preferences.

Content-Based Recommender Systems:
The combination of sentiment labels and ratings makes the dataset suitable for exploring content-based recommender systems for movies.

Deep Learning and Computer Vision Research:
Researchers in deep learning and computer vision can use the dataset to investigate image-based sentiment analysis and rating prediction challenges.
License: Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
Interactive Facial Expression and Emotion Detection (IFEED) is an annotated dataset that can be used to train, validate, and test Deep Learning models for facial expression and emotion recognition. It contains pre-filtered and analysed images of the interactions between the six main characters of the Friends television series, obtained from the video recordings of the Multimodal EmotionLines Dataset (MELD).
The images were obtained by decomposing the videos into multiple frames and extracting the facial expression of the correctly identified characters. A team composed of 14 researchers manually verified and annotated the processed data into several classes: Angry, Sad, Happy, Fearful, Disgusted, Surprised and Neutral.
IFEED can be valuable for the development of intelligent facial expression recognition solutions and emotion detection software, enabling binary or multi-class classification, or even anomaly detection or clustering tasks. The images with ambiguous or very subtle facial expressions can be repurposed for adversarial learning. The dataset can be combined with additional data recordings to create more complete and extensive datasets and improve the generalization of robust deep learning models.
This dataset was created by hana veranloo. Released under Other (specified in description).
License: Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
natural language processing
License: Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) https://creativecommons.org/licenses/by-nc-sa/4.0/
The dataset consists of 6 distinct emotions: Happy, Angry, Sad, Neutral, Surprise and Ahegao. Images are RGB and presented as cropped faces with corresponding emotions. The images were collected by scraping social networks such as Facebook and Instagram, scraping YouTube videos, and drawing on already available datasets such as IMDB and AffectNet. 1) dataset.zip contains folders with the corresponding classes. 2) data.csv contains paths to images and their corresponding labels.
Kovenko, Volodymyr; Shevchuk, Vitalii (2021), “OAHEGA : EMOTION RECOGNITION DATASET”, Mendeley Data, V2, doi: 10.17632/5ck5zz6f2c.2
Source: http://www.t4sa.it/ Disclaimer: I do not own this dataset; if there is any privacy issue, please let me know and I will take it down. Description: The data collection process took place from July to December 2016, lasting around 6 months in total. During this time span, we exploited Twitter's Sample API to access a random 1% sample of the stream of all globally produced tweets, discarding:
If you have used our data or trained models in a scientific publication, we would appreciate citations to the following paper:
@InProceedings{Vadicamo_2017_ICCVW, author = {Vadicamo, Lucia and Carrara, Fabio and Cimino, Andrea and Cresci, Stefano and Dell'Orletta, Felice and Falchi, Fabrizio and Tesconi, Maurizio}, title = {Cross-Media Learning for Image Sentiment Analysis in the Wild}, booktitle = {2017 IEEE International Conference on Computer Vision Workshops (ICCVW)}, pages={308-317}, doi={10.1109/ICCVW.2017.45}, month = {Oct}, year = {2017} }
License: Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
A lexicon of 751 emoji characters with automatically assigned sentiment. The sentiment is computed from 70,000 tweets, labeled by 83 human annotators in 13 European languages. The Emoji Sentiment Ranking web page at http://kt.ijs.si/data/Emoji_sentiment_ranking/ is automatically generated from the data provided in this repository. The process and analysis of emoji sentiment ranking is described in the paper: P. Kralj Novak, J. Smailović, B. Sluban, I. Mozetič, Sentiment of Emojis, submitted; arXiv preprint, http://arxiv.org/abs/1509.07761, 2015.
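Based on the accompanying paper, an emoji's sentiment score is essentially the mean of the discrete labels (-1 negative, 0 neutral, +1 positive) over all labeled tweets containing it; a sketch under that assumption:

```python
from collections import Counter

def emoji_sentiment(labels):
    """Compute the occurrence distribution and mean sentiment score for
    one emoji, where each occurrence in a labeled tweet contributes
    -1 (negative), 0 (neutral), or +1 (positive). The mean-of-labels
    score is an assumption based on the cited paper, not this page."""
    counts = Counter(labels)
    n = len(labels)
    dist = {k: counts.get(k, 0) / n for k in (-1, 0, 1)}
    score = sum(labels) / n
    return dist, score
```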
License: Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0) https://creativecommons.org/licenses/by-nc-nd/4.0/
The dataset consists of images capturing people displaying 7 distinct emotions (anger, contempt, disgust, fear, happiness, sadness and surprise). Each image in the dataset represents one of these specific emotions, enabling researchers and machine learning practitioners to study and develop models for emotion recognition and analysis. The images encompass a diverse range of individuals, including different genders, ethnicities, and age groups*. The dataset aims to provide a comprehensive representation of human emotions, allowing for a wide range of use cases.
License: Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) https://creativecommons.org/licenses/by-nc-sa/4.0/
This submission contains trained end-to-end models for the Neural Monkey toolkit for Czech and English, solving three NLP tasks: machine translation, image captioning, and sentiment analysis. The models are trained on standard datasets and achieve state-of-the-art or near state-of-the-art performance in the tasks. The models are described in the accompanying paper. The same models can also be invoked via the online demo: https://ufal.mff.cuni.cz/grants/lsd
There are several separate ZIP archives here, each containing one model solving one of the tasks for one language.
To use a model, you first need to install Neural Monkey: https://github.com/ufal/neuralmonkey To ensure correct functioning of the model, please use the exact version of Neural Monkey specified by the commit hash stored in the 'git_commit' file in the model directory.
Each model directory contains a 'run.ini' Neural Monkey configuration file, to be used to run the model. See the Neural Monkey documentation to learn how to do that (you may need to update some paths to correspond to your filesystem organization). The 'experiment.ini' file, which was used to train the model, is also included. Then there are files containing the model itself, files containing the input and output vocabularies, etc.
For the sentiment analyzers, you should tokenize your input data using the Moses tokenizer: https://pypi.org/project/mosestokenizer/
For the machine translation, you do not need to tokenize the data, as this is done by the model.
For image captioning, you need to: - download a trained ResNet: http://download.tensorflow.org/models/resnet_v2_50_2017_04_14.tar.gz - clone the git repository with TensorFlow models: https://github.com/tensorflow/models - preprocess the input images with the Neural Monkey 'scripts/imagenet_features.py' script (https://github.com/ufal/neuralmonkey/blob/master/scripts/imagenet_features.py) -- you need to specify the path to ResNet and to the TensorFlow models to this script
Feel free to contact the authors of this submission in case you run into problems!
License: Apache License 2.0 https://www.apache.org/licenses/LICENSE-2.0
Emotion Image Classification Dataset
This dataset is designed for emotion recognition from facial images, enabling machine learning and deep learning models to classify human emotions based on facial expressions. It can be used for applications such as affective computing, sentiment analysis, and human-computer interaction.
Dataset Overview
DatasetDict({
    train: Dataset({
        features: ['image', 'label'],
        num_rows: 28709
    })
    test: Dataset({
        features: …
See the full description on the dataset page: https://huggingface.co/datasets/Keyurjotaniya007/img-emotion-classification.
License: FutureBeeAI AI Data License Agreement https://www.futurebeeai.com/policies/ai-data-license-agreement
Welcome to the South Asian Facial Expression Image Dataset, curated to support the development of advanced facial expression recognition systems, biometric identification models, KYC verification processes, and a wide range of facial analysis applications. This dataset is ideal for training robust emotion-aware AI solutions.
The dataset includes over 2000 high-quality facial expression images, grouped into participant-wise sets. Each participant contributes:
To ensure generalizability and robustness in model training, images were captured under varied real-world conditions:
Each participant's image set is accompanied by detailed metadata, enabling precise filtering and training:
This metadata helps in building expression recognition models that are both accurate and inclusive.
This dataset is ideal for a variety of AI and computer vision applications, including:
To support evolving AI development needs, this dataset is regularly updated and can be tailored to project-specific requirements. Custom options include: