Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset presents ChatGPT usage patterns across different age groups, showing the percentage of users who have followed its advice, used it without following advice, or have never used it, based on a 2025 U.S. survey.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
The dataset is focused on exploring the relationship between the performance of Chegg's stock prices and the growth of ChatGPT users over time. Chegg is an education technology company that provides online learning resources. The company has experienced significant growth in recent years, driven in part by COVID-19.
However, Chegg's stock price dropped due to the shift of some of its users from Chegg's platform to ChatGPT. This shift in user behavior can be attributed to ChatGPT's advanced AI capabilities, which allow it to provide personalized and accurate assistance to users.
The dataset includes five tables that provide valuable insights into the relationship between Chegg stock prices and ChatGPT user growth, with a particular focus on the impact of the user shift on Chegg's stock performance. The first three tables contain weekly, monthly, and daily data on Chegg's stock performance, including information on the opening and closing prices, highest and lowest prices, and trading volume. These tables also include information on significant events that may have impacted the company's stock prices, such as product launches, partnerships, and earnings reports.
The fourth table provides data on the number of ChatGPT users recorded over the past months. This table includes information on the total number of users, as well as data on user growth rates and trends. The data in this table can be used to identify correlations between ChatGPT user growth and changes in Chegg's stock performance.
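As a minimal sketch of the kind of correlation analysis this table enables, the weekly stock data could be joined to the user-growth data and a Pearson correlation computed. The column names below are hypothetical; the actual dataset schema may differ.

```python
import pandas as pd

# Hypothetical illustrative values; real data comes from the dataset's tables.
stock = pd.DataFrame({
    "week": pd.date_range("2023-01-02", periods=5, freq="W"),
    "close": [25.1, 24.3, 22.8, 21.5, 20.9],           # Chegg weekly closing price
})
users = pd.DataFrame({
    "week": pd.date_range("2023-01-02", periods=5, freq="W"),
    "chatgpt_users_millions": [57, 62, 74, 89, 100],    # ChatGPT user counts
})

# Align the two series on the week and compute the Pearson correlation
merged = stock.merge(users, on="week")
corr = merged["close"].corr(merged["chatgpt_users_millions"])
print(round(corr, 2))
```

A strongly negative coefficient would be consistent with the user-shift hypothesis described above, though correlation alone does not establish causation.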
The fifth and final table provides the latest updates on ChatGPT, including information on new features, updates, and user feedback. This table is designed to keep the dataset current and relevant, providing users with the latest information on ChatGPT and its impact on Chegg's stock performance.
Overall, this dataset provides a valuable resource for anyone interested in understanding the impact of user behavior on the stock performance of companies like Chegg that operate in the education technology sector. It offers a comprehensive view of the data and trends over time, which can be used to identify patterns and correlations that can inform investment decisions and strategic planning.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset shows the types of advice users sought from ChatGPT based on a 2025 U.S. survey, including education, financial, medical, and legal topics.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This synthetically generated dataset provides a realistic AI performance comparison between ChatGPT (GPT-4-turbo) and DeepSeek (DeepSeek-Chat 1.5) over a 1.5-year period. With 10,000+ rows, it captures key user interaction metrics, platform performance indicators, and AI response characteristics to analyze trends in accuracy, engagement, and adoption.
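A minimal sketch of a trend analysis on such a dataset, e.g. comparing average monthly accuracy per model, might look like the following. The column names (`date`, `model`, `accuracy`) are assumptions for illustration, not the dataset's confirmed schema.

```python
import pandas as pd

# Hypothetical sample rows standing in for the 10,000+ real records.
df = pd.DataFrame({
    "date": pd.to_datetime(["2023-07-01", "2023-07-15", "2023-08-01", "2023-08-15"]),
    "model": ["ChatGPT", "DeepSeek", "ChatGPT", "DeepSeek"],
    "accuracy": [0.91, 0.87, 0.93, 0.89],
})

# Mean accuracy per model per calendar month
monthly = df.groupby(["model", df["date"].dt.to_period("M")])["accuracy"].mean()
print(monthly)
```

The same grouping pattern extends to the other metrics the dataset describes, such as engagement and response characteristics.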
📜 License: MIT – Free for research, projects, and analysis.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset presents ChatGPT usage patterns across U.S. Census regions, based on a 2025 nationwide survey. It tracks how often users followed, partially used, or never used ChatGPT by state region.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset presents how much users trust ChatGPT across different advice categories, including career, education, financial, legal, and medical advice, based on a 2025 U.S. survey.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
ChatGPT Gemini Claude Perplexity Human Evaluation Multi Aspect Review Dataset
Introduction
Human evaluations and reviews with scalar scores of AI service responses are very useful for LLM fine-tuning, human preference alignment, few-shot learning, bad-case troubleshooting, etc., but extremely difficult to collect. This dataset is collected from the DeepNLP AI Service User Review panel (http://www.deepnlp.org/store), an open review website where users can post reviews and upload… See the full description on the dataset page: https://huggingface.co/datasets/DeepNLP/ChatGPT-Gemini-Claude-Perplexity-Human-Evaluation-Multi-Aspects-Review-Dataset.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This dataset mainly consists of daily-updated user reviews and ratings for the ChatGPT Android App. It also contains data on the relevancy of these reviews and the dates they were posted.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
*both authors contributed equally
Automated query script for automated language bias studies in GPT-3.5
Dataset for the paper "How User Language Affects Conflict Fatality Estimates in ChatGPT"; a preprint is available on arXiv.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset summarizes how ChatGPT users rated the outcomes of the advice they received, including whether it was helpful, harmful, neutral, or uncertain, based on a 2025 U.S. survey.
CC0 1.0: https://choosealicense.com/licenses/cc0-1.0/
🧠 Awesome ChatGPT Prompts [CSV dataset]
This is a dataset repository of Awesome ChatGPT Prompts. View all prompts on GitHub.
License
CC-0
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Within a year of its launch, ChatGPT has seen a surge in popularity. While many are drawn to its effectiveness and user-friendly interface, ChatGPT also introduces moral concerns, such as the temptation to present generated text as one’s own. This led us to theorize that personality traits such as Machiavellianism and sensation-seeking may be predictive of ChatGPT usage. We launched two online questionnaires with 2,000 respondents each, in September 2023 and March 2024, respectively. In Questionnaire 1, 22% of respondents were students, and 54% were full-time employees; 32% indicated they used ChatGPT at least weekly. Analysis of our ChatGPT Acceptance Scale revealed two factors, Effectiveness and Concerns, which correlated positively and negatively, respectively, with ChatGPT use frequency. A specific aspect of Machiavellianism (manipulation tactics) was found to predict ChatGPT usage. Questionnaire 2 was a replication of Questionnaire 1, with 21% students and 54% full-time employees, of which 43% indicated using ChatGPT weekly. In Questionnaire 2, more extensive personality scales were used. We found a moderate correlation between Machiavellianism and ChatGPT usage (r = .22) and with an opportunistic attitude towards undisclosed use (r = .30), relationships that largely remained intact after controlling for gender, age, education level, and the respondents’ country. We conclude that covert use of ChatGPT is associated with darker personality traits, something that requires further attention.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Survey data for the study "ChatGPT Earns the Confidence of Users".
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Dataset Card for "Collective Cognition ChatGPT Conversations"
Dataset Description
Dataset Summary
The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and… See the full description on the dataset page: https://huggingface.co/datasets/CollectiveCognition/chats-data-2023-10-16.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset has been used to write a book chapter on the topic "Classifying User Intent for Effective Prompt Engineering: A Case of a Chatbot for Startup Teams". The dataset contains the following five resources:
- Startup questions and intent classifications: a list of possible questions and the classification of those questions into four intents, i.e. reflecting on own experience, seeking information, brainstorming, and seeking advice
- Prompt_Book_v1: a brief guide on how questions are classified, a description of prompt patterns and templates, and a matching of purpose to prompt pattern
- Questions_classification_script: the Python script used in our work to classify user intent
- Survey_questionnaire: the original survey questions asked of the participants
- survey_responses: survey responses from study respondents
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
A new study published in JAMA Network Open revealed that ChatGPT-4 outperformed doctors in diagnosing medical conditions from case reports. The AI chatbot scored an average of 92% in the study, while doctors using the chatbot scored 76% and those without it scored 74%.
The study involved 50 doctors (26 attending, 24 residents; median years in practice, 3 [IQR, 2-8]) who were given six case histories and graded on their ability to suggest diagnoses and explain their reasoning. The results showed that doctors often stuck to their initial diagnoses even when the chatbot suggested a better one, highlighting an overconfidence bias. Additionally, many doctors didn't fully utilise the chatbot's capabilities, treating it like a search engine instead of leveraging its ability to analyse full case histories.
The study raises questions about how doctors think and how AI tools can best be integrated into medical practice. While AI has the potential to be a "doctor extender," providing valuable second opinions, the study suggests that more training and a shift in mindset may be needed for doctors to fully embrace and benefit from these advancements.
The study compares the diagnostic reasoning performance of physicians using a commercial LLM AI chatbot (ChatGPT Plus [GPT-4]; OpenAI) with conventional diagnostic resources (e.g., UpToDate, Google):
- Conventional resources-only group (doctor on own): doctors using only conventional resources (standard medical tools and knowledge) without the assistance of an LLM (large language model).
- Doctor-with-LLM group: doctors using conventional resources along with an LLM, a tool or AI assistant helping with diagnostic reasoning.
- LLM-alone group: the LLM used on its own, without any conventional resources or doctor intervention.
A Markdown document with the R code for the study's plots is included.
This study reveals a fascinating and potentially transformative dynamic between artificial intelligence and human medical expertise. While ChatGPT-4 demonstrated remarkable diagnostic accuracy, surpassing even experienced physicians, the study also highlighted critical challenges in integrating AI into clinical practice.
The findings suggest that:
- AI can significantly enhance diagnostic accuracy: LLMs like ChatGPT-4 have the potential to revolutionise how medical diagnoses are made, offering a level of accuracy exceeding current practices.
- Human factors remain crucial: overconfidence bias and under-utilisation of AI tools by physicians underscore the need for training and a shift in mindset to effectively leverage these advancements. Doctors must learn to collaborate with AI, viewing it as a powerful partner rather than a simple search engine.
- Further research is needed: this study provides a crucial starting point for further investigation into the optimal integration of AI into healthcare. Future research should explore:
  - Effective training methods for physicians to utilise AI tools.
  - The impact of AI assistance on patient outcomes.
  - Ethical considerations surrounding the use of AI in medicine.
  - The potential for AI to address healthcare disparities.
Ultimately, the successful integration of AI into healthcare will depend not only on technological advancements but also on a willingness among medical professionals to embrace new ways of thinking and working. By harnessing the power of AI while recognising the essential role of human expertise, we can strive towards a future where medical care is more accurate, efficient, and accessible for all.
Patrick Ford 🥼🩺🖥
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
In recent years, numerous AI tools have been employed to equip learners with diverse technical skills such as coding, data analysis, and other competencies related to the computational sciences. However, the desired outcomes have not been consistently achieved. This study analyzes the perspectives of students and professionals from non-computational fields on the use of generative AI tools, augmented with visualization support, to tackle data analytics projects. The focus is on promoting the development of coding skills and fostering a deep understanding of the generated solutions. Consequently, our research seeks to introduce innovative approaches for incorporating visualization and generative AI tools into educational practices.
Methods
This article examines how learners perform, and what their perspectives are, when using traditional tools vs. LLM-based tools to acquire data analytics skills. To explore this, we conducted a case study with a cohort of 59 participants, students and professionals without computational thinking skills, who developed a data analytics project in the context of a short data analytics session. Our case study examined the participants' performance using traditional programming tools, ChatGPT, and LIDA with GPT as an advanced generative AI tool.
Results
The results show the transformative potential of approaches based on integrating advanced generative AI tools like GPT with specialized frameworks such as LIDA: the higher levels of participant preference indicate the superiority of these approaches over traditional development methods. Our findings also suggest that the learning curves for the different approaches vary significantly, since learners encountered technical difficulties in developing the project and interpreting the results. Moreover, the integration of LIDA with GPT can significantly enhance the learning of advanced skills, especially those related to data analytics. We aim to establish this study as a foundation for the methodical adoption of generative AI tools in educational settings, paving the way for more effective and comprehensive training in these critical areas.
Discussion
It is important to highlight that when using general-purpose generative AI tools such as ChatGPT, users must be aware of the data analytics process and take responsibility for filtering out potential errors or incompleteness in the requirements of a data analytics project. These deficiencies can be mitigated by using more advanced tools specialized in supporting data analytics tasks, such as LIDA with GPT; however, users still need advanced programming knowledge to properly configure this connection via an API. There is a significant opportunity for generative AI tools to improve their performance, providing accurate, complete, and convincing results for data analytics projects and thereby increasing user confidence in adopting these technologies. We hope this work underscores the opportunities and needs for integrating advanced LLMs into educational practices, particularly in developing computational thinking skills.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset shows how men and women in the U.S. reported using ChatGPT in a 2025 survey, including whether they followed its advice or chose not to use it.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was compiled to support a comparative analysis of two artificial intelligence models, ChatGPT and DeepSeek, in executing academic-related commands within the context of higher education. The data was collected using a Likert-scale questionnaire designed around the key dimensions of the DeLone & McLean Information Systems Success Model, which include System Quality (SQ), Information Quality (IQ), Service Quality (SEQ), Intention to Use (ITU), User Satisfaction (US), and Individual Impact (II).
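A minimal sketch of how per-dimension scores from such a Likert-scale questionnaire could be summarized per model is shown below. The column names and values are hypothetical illustrations, not the dataset's actual schema; the dimension abbreviations (SQ, IQ, US) follow the DeLone & McLean labels listed above.

```python
import pandas as pd

# Hypothetical Likert responses (1-5) for the two AI models across three
# of the DeLone & McLean dimensions named in the description.
responses = pd.DataFrame({
    "model": ["ChatGPT", "ChatGPT", "DeepSeek", "DeepSeek"],
    "SQ": [4, 5, 3, 4],   # System Quality
    "IQ": [5, 4, 4, 3],   # Information Quality
    "US": [4, 4, 3, 3],   # User Satisfaction
})

# Mean score per dimension for each model
summary = responses.groupby("model")[["SQ", "IQ", "US"]].mean()
print(summary)
```

The remaining dimensions (SEQ, ITU, II) would be aggregated the same way once their columns are present.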
European Commission legal notice: https://ec.europa.eu/info/legal-notice_en
IMC student questionnaire used as the primary data source for a Master's thesis (September 2023)
A survey instrument is administered to students at IMC Fachhochschule Krems who have firsthand experience with ChatGPT in their academic courses. This questionnaire is designed to collect quantitative data on various dimensions, including student learning outcomes, instructor support, student proficiency in using ChatGPT, and their understanding of machine learning processes. To ensure a representative sample, a stratified random sampling approach is employed, covering diverse courses and academic disciplines.
154 students from IMC FH Krems participated, answering 17 questions. Each table represents answers to the question mentioned in its third column.