Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
ChatGPT has taken the world by storm, setting a record as the fastest app to reach 100 million users, a milestone it hit in two months. The implications of this tool are far-reaching, universities...
In a survey conducted across **** Southeast Asian countries in February 2023, almost half of the respondents selected the collection of personal data as one of their concerns regarding the use of chatbots like ChatGPT. In contrast, ethical issues related to data privacy and intellectual property were a concern for ** percent of the respondents.
According to a survey of adults in the United States conducted in January 2023, ** percent of respondents had used ChatGPT to generate text themselves. In comparison, ** percent of the female respondents claimed to have never used it nor seen anyone else use it, while ** percent of respondents reported having seen the AI technology generate text for someone else.
ChatGPT is used most widely among those aged between ** and ** around the world. The youngest group, those under **, is the second-largest user base, and together those under ** account for over ** percent of ChatGPT users. It is perhaps unsurprising that younger age brackets use the chatbot more than older ones, as that is the common trend with new technologies. Male users far outnumbered female users, with males representing over ** percent of total users in 2023.
Adults with the highest education level - particularly those with a postgraduate degree - had the greatest familiarity with ChatGPT, with ** percent having some knowledge of it. The program, developed by the startup OpenAI, was of far less concern to those with a high school degree or lower education. When looking at respondents with a little knowledge of ChatGPT, the ******* differ far less drastically. It is quite likely that the considerable media coverage of ChatGPT had an impact, giving most people some awareness of the topic.
As of 2023, about ** percent of the global population who are familiar with ChatGPT were using the tool at least once a month, while over ** percent reported using it weekly. Indian respondents were the most frequent users, with just over ** percent claiming to use ChatGPT every day.
Between the 9th and 15th of April 2023, *** cases of sensitive data being leaked to ChatGPT per 100,000 employees were detected at companies worldwide. Compared to the observation period between February and March 2023, the figure had increased by around ** percent. The second-most common type of confidential data shared with ChatGPT was source code, with *** cases per 100,000 employees.
https://www.gesis.org/en/institute/data-usage-terms
A major challenge of our time is reducing disparities in access to and effective use of digital technologies, with recent discussions highlighting the role of AI in exacerbating the digital divide. We examine user characteristics that predict usage of the AI-powered conversational agent ChatGPT. We combine behavioral and survey data in a web tracked sample of N=1376 German citizens to investigate differences in ChatGPT activity (usage, visits, and adoption) during the first 11 months from the launch of the service (November 30, 2022). Guided by a model of technology acceptance (UTAUT-2), we examine the role of socio-demographics commonly associated with the digital divide in ChatGPT activity and explore further socio-political attributes identified via stability selection in Lasso regressions. We confirm that lower age and higher education affect ChatGPT usage, but neither gender nor income do. We find full-time employment and more children to be barriers to ChatGPT activity. Using a variety of social media was positively associated with ChatGPT activity. In terms of political variables, political knowledge and political self-efficacy as well as some political behaviors such as voting, debating political issues online and offline and political action online were all associated with ChatGPT activity, with online political debating and political self-efficacy negatively so. Finally, need for cognition and communication skills such as writing, attending meetings, or giving presentations, were also associated with ChatGPT engagement, though chairing/organizing meetings was negatively associated. Our research informs efforts to address digital disparities and promote digital literacy among underserved populations by presenting implications, recommendations, and discussions on ethical and social issues of our findings.
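As an illustration of the variable-selection step mentioned above, the following is a minimal Python sketch of stability selection with an L1-penalised (Lasso-type) model: predictors are repeatedly refit on random subsamples and the fraction of runs in which each coefficient is non-zero is recorded. The file name, column names, and penalty strength are hypothetical placeholders, not the authors' actual pipeline.

```python
# Sketch of stability selection with an L1-penalised model (hypothetical data layout).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("chatgpt_tracking_survey.csv")          # hypothetical file name
X_cols = [c for c in df.columns if c != "used_chatgpt"]  # socio-demographic / political predictors
X = StandardScaler().fit_transform(df[X_cols].to_numpy())
y = df["used_chatgpt"].to_numpy()                        # binary adoption indicator (assumed)

rng = np.random.default_rng(0)
n_runs, frac = 200, 0.5
selected = np.zeros(len(X_cols))

for _ in range(n_runs):
    # Refit on a random half of the sample and note which coefficients survive the L1 penalty.
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X[idx], y[idx])
    selected += (model.coef_.ravel() != 0)

# Selection frequency per predictor: variables chosen in most subsamples count as "stable".
freq = pd.Series(selected / n_runs, index=X_cols).sort_values(ascending=False)
print(freq)
```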
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the data needed to reproduce all results and figures described in "ChatGPT performance on radiation technologist and therapist entry to practice exams".
Details about the data collection can be found in the paper referenced below. Briefly, ChatGPT (GPT-4) was prompted with multiple choice questions from 4 practice exams provided by the Canadian Association of Medical Radiation Technologists (CAMRT). ChatGPT was prompted with the questions from each exam 5 times between July 17 and August 13, 2023. Table 1, below, provides details about the dates for data collection.
Variable descriptions
question: Question number, provided by CAMRT. Skipped question numbers indicate image-based questions that were excluded from the study.
discipline: Indicates the CAMRT exam discipline, abbreviated as follows
RAD: radiological technology
MRI: magnetic resonance
NUC: nuclear medicine
RTT: radiation therapy
question_type: Indicates the type of competency being assessed by the question (Knowledge, Application, or Critical thinking). Competency categories were assigned by CAMRT.
corrrect_response: The correct multiple choice response ("A", "B", "C", or "D"), assigned by CAMRT.
attempt1-5: ChatGPT's response to the multiple choice questions for attempts 1 through 5, indicated using the letters "A", "B", "C", or "D". In a few cases, ChatGPT did not provide a reference to a multiple choice response and "NA" is recorded in the dataset.
Note: The long-form questions from CAMRT and answers provided by ChatGPT are not available as a part of this dataset.
Table 1: Dates for data collection
| Exam | Attempt 1 | Attempt 2 | Attempt 3 | Attempt 4 | Attempt 5 |
| --- | --- | --- | --- | --- | --- |
| Radiological technology | 2 Aug 2023 | 2 Aug 2023 | 8 Aug 2023 | 9 Aug 2023 | 11 Aug 2023 |
| Magnetic resonance | 17 Jul 2023 | 18 Jul 2023 | 18 Jul 2023 | 9 Aug 2023 | 12 Aug 2023 |
| Nuclear medicine | 8 Aug 2023 | 9 Aug 2023 | 12 Aug 2023 | 12 Aug 2023 | 12 Aug 2023 |
| Radiation therapy | 9 Aug 2023 | 12 Aug 2023 | 12 Aug 2023 | 13 Aug 2023 | 13 Aug 2023 |
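Given the variable descriptions above, a minimal pandas sketch for scoring ChatGPT's responses against the CAMRT answer key might look as follows. The CSV file name is a hypothetical placeholder; the column names follow the descriptions above.

```python
# Score ChatGPT's five attempts against the answer key and summarise accuracy by discipline.
import pandas as pd

df = pd.read_csv("camrt_chatgpt_responses.csv")   # hypothetical file name

attempts = [f"attempt{i}" for i in range(1, 6)]
for col in attempts:
    # "corrrect_response" is the answer-key column name as documented above;
    # "NA" responses from ChatGPT simply count as incorrect here.
    df[f"{col}_correct"] = df[col] == df["corrrect_response"]

accuracy = df.groupby("discipline")[[f"{c}_correct" for c in attempts]].mean()
print(accuracy.round(3))
```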
This dataset contains the 30 questions that were posed to the chatbots (i) ChatGPT-3.5; (ii) ChatGPT-4; and (iii) Google Bard, in May 2023 for the study “Chatbots put to the test in math and logic problems: A preliminary comparison and assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard”. These 30 questions describe mathematics and logic problems that have a unique correct answer. The questions are fully described with plain text only, without the need for any images or special formatting. The questions are divided into two sets of 15 questions each (Set A and Set B). The questions of Set A are 15 “Original” problems that cannot be found online, at least in their exact wording, while Set B contains 15 “Published” problems that one can find online by searching on the internet, usually with their solution. Each question is posed three times to each chatbot.
This dataset contains the following: (i) the full set of the 30 questions, A01-A15 and B01-B15; (ii) the correct answer for each one of them; (iii) an explanation of the solution, for the problems where such an explanation is needed; and (iv) the 30 (questions) × 3 (chatbots) × 3 (answers) = 270 detailed answers of the chatbots. For the published problems of Set B, we also provide a reference to the source where each problem was taken from.
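Assuming the 270 chatbot answers are transcribed into a long-format table, a short pandas sketch could summarise accuracy per chatbot and question set; the file and column names below are hypothetical and not part of the published dataset.

```python
# Summarise chatbot accuracy on the "Original" (Set A) and "Published" (Set B) problems.
import pandas as pd

answers = pd.read_csv("chatbot_answers_long.csv")
# Assumed columns: question_id (A01-B15), set ("A"/"B"), chatbot, repetition (1-3), is_correct (bool)
summary = (answers
           .groupby(["chatbot", "set"])["is_correct"]
           .mean()
           .unstack("set")
           .rename(columns={"A": "Original (Set A)", "B": "Published (Set B)"}))
print(summary.round(2))
```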
Objective: Our objective is to evaluate the efficacy of ChatGPT 4 in accurately and effectively delivering genetic information, building on previous findings with ChatGPT 3.5. We focus on assessing the utility, limitations, and ethical implications of using ChatGPT in medical settings.

Materials and Methods: A structured questionnaire, including the Brief User Survey (BUS-15) and custom questions, was developed to assess ChatGPT 4's clinical value. An expert panel of genetic counselors and clinical geneticists independently evaluated ChatGPT 4's responses to these questions. We also carried out a comparative analysis with ChatGPT 3.5, using descriptive statistics and R for data analysis.

Results: ChatGPT 4 demonstrated improvements over 3.5 in context recognition, relevance, and informativeness. However, performance variability and concerns about the naturalness of the output were noted. No significant difference in accuracy was found between ChatGPT 3.5 and 4.0. Notably, the effic...

Study Design: This study was conducted to evaluate the performance of ChatGPT 4 (March 23rd, 2023 model) in the context of genetic counseling and education. The evaluation involved a structured questionnaire, which included questions selected from the Brief User Survey (BUS-15) and additional custom questions designed to assess the clinical value of ChatGPT 4's responses.

Questionnaire Development: The questionnaire was built on Qualtrics and comprised twelve questions: seven selected from the BUS-15, preceded by two additional questions that we designed. The initial questions focused on quality and answer relevancy:
1. The overall quality of the Chatbot's response is: (5-point Likert: Very poor to Very Good)
2. The Chatbot delivered an answer that provided the relevant information you would include if asked the question. (5-point Likert: Strongly disagree to Strongly agree)
The BUS-15 questions (7-point Likert: Strongly disagree to Strongly agree) focused on: 1. Recogniti...

# A comparative evaluation of ChatGPT 3.5 and ChatGPT 4 in responses to selected genetics questions - Full study data
https://doi.org/10.5061/dryad.s4mw6m9cv
This data was captured when evaluating the ability of ChatGPT to address questions patients may ask it about three genetic conditions (BRCA1, HFE, and MLH1). This data is associated with the JAMIA article of a similar name with the DOI 10.1093/jamia/ocae128.
The rapid advancements in generative AI models present new opportunities in the education sector. However, it is imperative to acknowledge and address the potential risks and concerns that may arise with their use. We collected Twitter data to identify key concerns related to the use of ChatGPT in education. This dataset is used to support the study "ChatGPT in education: A discourse analysis of worries and concerns on social media."
In this study, we particularly explored two research questions. RQ1 (Concerns): What are the key concerns that Twitter users perceive with using ChatGPT in education? RQ2 (Accounts): Which accounts are implicated in the discussion of these concerns? In summary, our study underscores the importance of responsible and ethical use of AI in education and highlights the need for collaboration among stakeholders to regulate AI policy.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Dataset Card for "Collective Cognition ChatGPT Conversations"
Dataset Description
Dataset Summary
The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and… See the full description on the dataset page: https://huggingface.co/datasets/CollectiveCognition/chats-data-2023-10-16.
https://sqmagazine.co.uk/privacy-policy/
When Google unveiled Gemini AI, the tech world paused. Not just because it was another artificial intelligence launch, but because it promised something more: a multi-modal future. It was December 2023 when Sundar Pichai described Gemini not merely as a chatbot or assistant but as an evolving platform built to...
In 2023, more than ***** of Polish respondents had no opinion on whether ChatGPT would store wrong information in the algorithm's database.
https://creativecommons.org/publicdomain/zero/1.0/
The introduction of ChatGPT in November 2022 marked a significant milestone in the application of artificial intelligence in higher education. Due to its advanced natural language processing capabilities, ChatGPT quickly became popular among students worldwide. However, the increasing acceptance of ChatGPT among students has attracted significant attention, sparking both excitement and skepticism globally. To capture students' early perceptions of ChatGPT, the most comprehensive and large-scale global survey to date was conducted between the beginning of October 2023 and the end of February 2024. The questionnaire was prepared in seven different languages: English, Italian, Spanish, Turkish, Japanese, Arabic, and Hebrew. It covered several aspects relevant to ChatGPT, including sociodemographic characteristics, usage, capabilities, regulation and ethical concerns, satisfaction and attitude, study issues and outcomes, skills development, labor market and skills mismatch, emotions, study and personal information, and general reflections. The survey targeted higher education students who are currently enrolled at any level in a higher education institution, are at least 18 years old, and have the legal capacity to provide free and voluntary consent to participate in an anonymous survey. Survey participants were recruited using a convenience sampling method, which involved promoting the survey in classrooms and through advertisements on university communication systems. The final dataset consists of 23,218 student responses from 109 different countries and territories. The data may prove useful for researchers studying students' perceptions of ChatGPT, including its implications across various aspects. Higher education stakeholders may also benefit from these data. While educators may benefit from the data in formulating curricula, including designing teaching methods and assessment tools, policymakers may consider the data when formulating strategies for higher education system development in the future.
Arts and Humanities, Applied Sciences, Natural Sciences, Social Sciences, Mathematics, Health Sciences
Article
https://www.covidsoclab.org/chatgpt-student-survey/ is related to this dataset
https://www.1ka.si/d/en is related to this dataset
Dejan Ravšelj et al.
Data Source: Mendeley Dataset
ChatGPT has captured the attention of the academic world with its remarkable ability to write, summarize, and even pass rigorous exams. This article provides a brief summary of the primary concerns of political science faculty with ChatGPT and similar AI software with regard to academia. Additionally, we discuss results of a national survey of political scientists conducted in March of 2023 to assess faculty attitudes towards ChatGPT and their strategies for effectively engaging with it in the classroom. Next, we present several assignment ideas that limit the potential for cheating with ChatGPT, a primary concern of faculty, and provide opportunities for incorporating ChatGPT into faculty teaching. Finally, several suggestions for syllabi addressing political science students' use of ChatGPT are provided.
git clone https://github.com/sherbold/chatgpt-student-essay-study.git
code of "A large-scale comparison of human-written versus ChatGPT-generated essays" Sci Rep 13, 18617 (2023) (Open access)
for license see: https://zenodo.org/records/8343644
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The introduction of ChatGPT in November 2022 marked a significant milestone in the application of artificial intelligence in higher education. Due to its advanced natural language processing capabilities, ChatGPT quickly became popular among students worldwide. However, the increasing acceptance of ChatGPT among students has attracted significant attention, sparking both excitement and scepticism globally. Building on students' early perceptions of ChatGPT after the first year of its introduction, a comprehensive and large-scale global survey was repeated between October 2024 and February 2025. The questionnaire was distributed in seven different languages: English, Italian, Spanish, Turkish, Japanese, Arabic, and Hebrew. It covered several aspects relevant to ChatGPT, including sociodemographic characteristics, usage, capabilities, regulation and ethical concerns, satisfaction and attitude, study issues and outcomes, skills development, labour market and skills mismatch, emotions, study and personal information, and general reflections. The survey targeted higher education students who are currently enrolled at any level in a higher education institution, are at least 18 years old, and have the legal capacity to provide free and voluntary consent to participate in an anonymous survey. Survey participants were recruited using a convenience sampling method, which involved promoting the survey in classrooms and through advertisements on university communication systems. The final dataset consists of 22,963 student responses from 120 different countries and territories. The data may prove useful for researchers studying students' perceptions of ChatGPT, including its implications across various aspects. Higher education stakeholders may also benefit from these data. While educators may benefit from the data in formulating curricula, including designing teaching methods and assessment tools, policymakers may consider the data when formulating strategies for higher education system development in the future.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Eighteen prompt-response pairs with the LLM-based ChatGPT 3.5 chatbot, exploring differences in responses aimed at young women versus young men in the context of the STEM gap. Conversations were conducted in Spring 2023, exported as images, and bundled in a zip file. The zip file also contains a spreadsheet with the text version of prompts and responses, as well as quantitative and qualitative comparative analysis.
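A minimal Python sketch for inspecting the bundle is shown below, assuming the spreadsheet is an .xlsx file inside the zip; the archive name is a hypothetical placeholder.

```python
# List the archive contents and load the prompt/response spreadsheet (requires openpyxl).
import zipfile
import pandas as pd

with zipfile.ZipFile("stem_gap_chatgpt_conversations.zip") as zf:   # hypothetical archive name
    print(zf.namelist())                                            # exported conversation images + spreadsheet
    xlsx = next(n for n in zf.namelist() if n.endswith(".xlsx"))    # assumes a single .xlsx file
    with zf.open(xlsx) as fh:
        prompts = pd.read_excel(fh)                                 # prompt/response text and analysis columns

print(prompts.head())
```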