32 datasets found
  1. Replication Data for: ChatGPT on ChatGPT: An Exploratory Analysis of its...

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Sep 24, 2024
    Cite
    Wang, Jieshu; Kiran, Elif; S.R. Aurora (also known as Mai P. Trinh); Simeone, Michael; Lobo, José (2024). Replication Data for: ChatGPT on ChatGPT: An Exploratory Analysis of its Performance in the Public Sector Workforce [Dataset]. http://doi.org/10.7910/DVN/P3CDHS
    Dataset updated
    Sep 24, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Wang, Jieshu; Kiran, Elif; S.R. Aurora (also known as Mai P. Trinh); Simeone, Michael; Lobo, José
    Description

    This repository contains two datasets used in the study exploring the impact of Generative AI, specifically ChatGPT, on the public sector workforce in the United States. The datasets provide detailed information on the core tasks of public sector occupations and their estimated performance metrics, including potential for automation and augmentation by ChatGPT. These estimations were generated by OpenAI’s GPT-4 model (GPT-4-1106-preview) through the OpenAI API.

  2. Data Sheet 2_Large language models generating synthetic clinical datasets: a...

    • frontiersin.figshare.com
    xlsx
    Updated Feb 5, 2025
    + more versions
    Cite
    Austin A. Barr; Joshua Quan; Eddie Guo; Emre Sezgin (2025). Data Sheet 2_Large language models generating synthetic clinical datasets: a feasibility and comparative analysis with real-world perioperative data.xlsx [Dataset]. http://doi.org/10.3389/frai.2025.1533508.s002
    Available download formats: xlsx
    Dataset updated
    Feb 5, 2025
    Dataset provided by
    Frontiers
    Authors
    Austin A. Barr; Joshua Quan; Eddie Guo; Emre Sezgin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: Clinical data is instrumental to medical research, machine learning (ML) model development, and advancing surgical care, but access is often constrained by privacy regulations and missing data. Synthetic data offers a promising solution to preserve privacy while enabling broader data access. Recent advances in large language models (LLMs) provide an opportunity to generate synthetic data with reduced reliance on domain expertise, computational resources, and pre-training.

    Objective: This study aims to assess the feasibility of generating realistic tabular clinical data with OpenAI’s GPT-4o using zero-shot prompting, and to evaluate the fidelity of LLM-generated data by comparing its statistical properties to the Vital Signs DataBase (VitalDB), a real-world open-source perioperative dataset.

    Methods: In Phase 1, GPT-4o was prompted to generate a dataset from qualitative descriptions of 13 clinical parameters. The resultant data was assessed for general errors, plausibility of outputs, and cross-verification of related parameters. In Phase 2, GPT-4o was prompted to generate a dataset using descriptive statistics of the VitalDB dataset. Fidelity was assessed using two-sample t-tests, two-sample proportion tests, and 95% confidence interval (CI) overlap.

    Results: In Phase 1, GPT-4o generated a complete and structured dataset comprising 6,166 case files. The dataset was plausible in range and correctly calculated body mass index for all case files based on respective heights and weights. Statistical comparison between the LLM-generated datasets and VitalDB revealed that Phase 2 data achieved significant fidelity: statistical similarity in 12/13 (92.31%) parameters, with no statistically significant differences observed in 6/6 (100.0%) categorical/binary and 6/7 (85.71%) continuous parameters. Overlap of 95% CIs was observed in 6/7 (85.71%) continuous parameters.

    Conclusion: Zero-shot prompting with GPT-4o can generate realistic tabular synthetic datasets that replicate key statistical properties of real-world perioperative data. This study highlights the potential of LLMs as a novel and accessible modality for synthetic data generation, which may address critical barriers in clinical data access and eliminate the need for technical expertise, extensive computational resources, and pre-training. Further research is warranted to enhance fidelity and investigate the use of LLMs to amplify and augment datasets, preserve multivariate relationships, and train robust ML models.
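    The fidelity checks described above (two-sample t-tests and 95% CI overlap) can be sketched in a few lines of pure Python. A minimal sketch: the arrays are illustrative stand-ins, not VitalDB values, and the CI uses the normal critical value 1.96 as an approximation of the t critical value.

    ```python
    import math
    from statistics import mean, stdev

    def welch_t(a, b):
        """Welch's two-sample t statistic and approximate degrees of freedom."""
        va, vb = stdev(a) ** 2, stdev(b) ** 2
        na, nb = len(a), len(b)
        se2 = va / na + vb / nb
        t = (mean(a) - mean(b)) / math.sqrt(se2)
        df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
        return t, df

    def ci95(x):
        """Approximate 95% CI for the mean (normal critical value 1.96)."""
        half = 1.96 * stdev(x) / math.sqrt(len(x))
        return mean(x) - half, mean(x) + half

    def cis_overlap(a, b):
        """True when the two 95% CIs share any common range."""
        (lo_a, hi_a), (lo_b, hi_b) = ci95(a), ci95(b)
        return lo_a <= hi_b and lo_b <= hi_a

    # Illustrative stand-ins for one continuous parameter (e.g. weight in kg).
    real = [70, 72, 68, 71, 69, 73, 70, 72]
    synthetic = [69, 71, 70, 72, 68, 73, 71, 70]
    ```

    A |t| well below the critical value and overlapping CIs would, under this scheme, count a parameter as statistically similar.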

  3. Composing alt text using large language models: dataset in English

    • data.mendeley.com
    Updated Jun 17, 2024
    + more versions
    Cite
    Yekaterina Kosova (2024). Composing alt text using large language models: dataset in English [Dataset]. http://doi.org/10.17632/szh5zhpgxh.1
    Dataset updated
    Jun 17, 2024
    Authors
    Yekaterina Kosova
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset contains the results of developing alternative text for images using chatbots based on large language models. The study was carried out in April-June 2024. Microsoft Copilot, Google Gemini, and YandexGPT chatbots were used to generate 108 text descriptions for 12 images. Descriptions were generated by chatbots using keywords specified by a person. The experts then rated the resulting descriptions on a Likert scale (from 1 to 5). The data set is presented in a Microsoft Excel table on the “Data” sheet with the following fields: record number; image number; chatbot; image type (photo, logo); request date; list of keywords; number of keywords; length of keywords; time of compilation of keywords; generated descriptions; required length of descriptions; actual length of descriptions; description generation time; usefulness; reliability; completeness; accuracy; literacy. The “Images” sheet contains links to the original images. Alternative descriptions are presented in English.
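    A minimal sketch of how such Likert ratings might be aggregated per chatbot; the rows below are invented for illustration and only mirror a few of the fields listed above.

    ```python
    from collections import defaultdict
    from statistics import mean

    # Invented rows mirroring a few fields of the "Data" sheet
    # (record number, chatbot, usefulness on the 1-5 Likert scale).
    rows = [
        {"record": 1, "chatbot": "Copilot", "usefulness": 4},
        {"record": 2, "chatbot": "Copilot", "usefulness": 5},
        {"record": 3, "chatbot": "Gemini", "usefulness": 3},
        {"record": 4, "chatbot": "Gemini", "usefulness": 4},
        {"record": 5, "chatbot": "YandexGPT", "usefulness": 4},
    ]

    by_bot = defaultdict(list)
    for row in rows:
        by_bot[row["chatbot"]].append(row["usefulness"])

    # Mean expert rating per chatbot.
    mean_usefulness = {bot: mean(vals) for bot, vals in by_bot.items()}
    ```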

  4. S2 Fig -

    • plos.figshare.com
    txt
    Updated Nov 20, 2024
    + more versions
    Cite
    Jun Qiu; Youlian Zhou (2024). S2 Fig - [Dataset]. http://doi.org/10.1371/journal.pone.0311937.s002
    Available download formats: txt
    Dataset updated
    Nov 20, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Jun Qiu; Youlian Zhou
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: ChatGPT, developed by OpenAI, is an artificial intelligence software designed to generate text-based responses. The objective of this study is to evaluate the accuracy and consistency of ChatGPT’s responses to single-choice questions pertaining to carbon monoxide poisoning. This evaluation will contribute to our understanding of the reliability of ChatGPT-generated information in the medical field.

    Methods: The questions utilized in this study were selected from the "Medical Exam Assistant (Yi Kao Bang)" application and encompassed a range of topics related to carbon monoxide poisoning. A total of 44 single-choice questions were included in the study following a screening process. Each question was entered into ChatGPT ten times in Chinese, followed by a translation into English, where it was also entered ten times. The responses generated by ChatGPT were subjected to statistical analysis to assess their accuracy and consistency in both languages, with the "Medical Exam Assistant (Yi Kao Bang)" reference responses employed as benchmarks. The data analysis was conducted using Python.

    Results: In approximately 50% of the cases, the responses generated by ChatGPT exhibited a high degree of consistency, whereas in approximately one-third of the cases, the responses exhibited unacceptable blurring of the answers. Meanwhile, the accuracy of these responses was less favorable, with an accuracy rate of 61.1% in Chinese and 57% in English. This indicates that ChatGPT could be enhanced with respect to both consistency and accuracy in responding to queries pertaining to carbon monoxide poisoning.

    Conclusions: It is currently evident that the consistency and accuracy of responses generated by ChatGPT regarding carbon monoxide poisoning are inadequate. Although it offers significant insights, it should not supersede the role of healthcare professionals in making clinical decisions.
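    The per-question consistency and accuracy measures used in such repeated-querying studies can be sketched as follows; the ten answers and the benchmark key are invented for illustration.

    ```python
    from collections import Counter

    def consistency(answers):
        """Fraction of repeated runs that agree with the modal answer."""
        modal_count = Counter(answers).most_common(1)[0][1]
        return modal_count / len(answers)

    def accuracy(answers, correct):
        """Fraction of repeated runs matching the benchmark answer."""
        return sum(a == correct for a in answers) / len(answers)

    # One question entered ten times (options and key are invented).
    runs = ["B", "B", "B", "C", "B", "B", "B", "B", "C", "B"]
    ```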

  5. ChatGPT Early Adoption in Higher Education: Variation in Student Usage,...

    • openicpsr.org
    delimited
    Updated Mar 13, 2025
    Cite
    Richard Arum (2025). ChatGPT Early Adoption in Higher Education: Variation in Student Usage, Instructional Support and Educational Equity [Dataset]. http://doi.org/10.3886/E222781V1
    Available download formats: delimited
    Dataset updated
    Mar 13, 2025
    Dataset provided by
    University of California, Irvine
    Authors
    Richard Arum
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data for this study were collected at the University of California – Irvine (UCI) as part of the UCI-MUST (Measuring Undergraduate Success Trajectories) Project, a larger longitudinal measurement project aimed at improving understanding of undergraduate experience, trajectories and outcomes, while supporting campus efforts to improve institutional performance and enhance educational equity (Arum et al. 2021). The project is focused on student educational experience at a selective, large, research-oriented public university on the quarter system, with half of its students first-generation and 85 percent Hispanic, Asian, African-American, Pacific Islander or Native American. Since Fall 2019, the project has annually tracked new cohorts of freshmen and juniors with longitudinal surveys administered at the end of every academic quarter. Data from the Winter 2023 end-of-term assessment, administered in the first week of April, were pooled across four longitudinal study cohorts for this study (i.e., Fall 2019-2022 cohorts). There was an overall response rate of 42.5 percent for the Winter 2023 end-of-term assessment. This allowed us to consider responses from freshmen through seniors enrolled in courses throughout the university. Students completed questionnaire items about their knowledge and use of ChatGPT in and out of the classroom during the Winter 2023 academic term. In total 1,129 students completed the questionnaire, which asked questions about: knowledge of ChatGPT (“Do you know what ChatGPT is?”); general use (“Have you used ChatGPT before?”); and instructor attitude (“What was the attitude of the instructor for [a specific course students enrolled in] regarding the use of ChatGPT?”). Of those 1,129 students, 191 had missing data for at least one variable of interest and were subsequently dropped from analysis, resulting in a final sample of 938 students.
In addition, for this study we merged our survey data with administrative data from campus that encompasses details on student background, including gender, race, first-generation college-going, and international student status. Campus administrative data also provides course-level characteristics, including whether a particular class is a lower- or upper-division course as well as the academic unit on campus offering the course. In addition, we used administrative data on all students enrolled at the university to generate classroom composition measures for every individual course taken by students in our sample – specifically the proportion of underrepresented minority students in the class, the proportion of international students in the class and the proportion of female students in the class. For our student-level analysis [R1], we used binary logistic regressions to examine the association between individual characteristics and (1) individual awareness and (2) individual academic use of ChatGPT utilizing the student-level data of 938 students. Individual characteristics include gender, underrepresented minority student status, international student status, first generation college-going student status, student standing (i.e. lower or upper classmen), cumulative grade point average and field of study. Field of study was based on student major assigned to the broad categories of physical sciences (i.e. physical sciences, engineering, and information and computer science), health sciences (i.e. pharmacy, biological sciences, public health, and nursing), humanities, social sciences (i.e. business, education, and social sciences), the arts, or undeclared. 
We defined awareness of ChatGPT as an affirmative response to the question “Do you know what ChatGPT is?” Regarding ChatGPT use, we focused on academic use which was defined as an affirmative response of either “Yes, for academic use” or “Yes, for academic and personal use” to the question “Have you used ChatGPT before?” For our course-level analysis [R2], we constructed a measure – course-level instructor encouragement for ChatGPT use – based on student responses to the end of the term survey conducted at the completion of the Winter 2023 term. In the survey, students were asked to indicate the extent to which their instructors encouraged them to use ChatGPT in each of their enrolled courses. The response
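    The classroom composition measures described above reduce to simple per-course proportions over administrative enrollment records. A minimal sketch with invented records:

    ```python
    from collections import defaultdict

    # Invented enrollment records; the real measures are built from campus
    # administrative data on all students enrolled in each course.
    enrollments = [
        {"course": "SOC101", "international": True,  "female": True},
        {"course": "SOC101", "international": False, "female": True},
        {"course": "SOC101", "international": False, "female": False},
        {"course": "CS200",  "international": True,  "female": False},
        {"course": "CS200",  "international": True,  "female": True},
    ]

    by_course = defaultdict(list)
    for e in enrollments:
        by_course[e["course"]].append(e)

    # Proportion of international and female students in each course.
    composition = {
        course: {
            "prop_international": sum(s["international"] for s in students) / len(students),
            "prop_female": sum(s["female"] for s in students) / len(students),
        }
        for course, students in by_course.items()
    }
    ```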

  6. awesome-chatgpt-prompts

    • huggingface.co
    Updated Dec 15, 2023
    + more versions
    Cite
    Fatih Kadir Akın (2023). awesome-chatgpt-prompts [Dataset]. https://huggingface.co/datasets/fka/awesome-chatgpt-prompts
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Dec 15, 2023
    Authors
    Fatih Kadir Akın
    License

    CC0 1.0: https://choosealicense.com/licenses/cc0-1.0/

    Description

    🧠 Awesome ChatGPT Prompts [CSV dataset]

    This is a dataset repository of Awesome ChatGPT Prompts. View all prompts on GitHub.

      License
    

    CC-0

  7. Supplementary data for the paper 'Personality and acceptance as predictors...

    • data.4tu.nl
    zip
    Updated Mar 28, 2024
    Cite
    Joost de Winter; Dimitra Dodou; Yke Bauke Eisma (2024). Supplementary data for the paper 'Personality and acceptance as predictors of ChatGPT use' [Dataset]. http://doi.org/10.4121/e2e3ac25-e264-4592-b413-254eb4ac5022.v1
    Available download formats: zip
    Dataset updated
    Mar 28, 2024
    Dataset provided by
    4TU.ResearchData
    Authors
    Joost de Winter; Dimitra Dodou; Yke Bauke Eisma
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Within a year of its launch, ChatGPT has seen a surge in popularity. While many are drawn to its effectiveness and user-friendly interface, ChatGPT also introduces moral concerns, such as the temptation to present generated text as one’s own. This led us to theorize that personality traits such as Machiavellianism and sensation-seeking may be predictive of ChatGPT usage. We launched two online questionnaires with 2,000 respondents each, in September 2023 and March 2024, respectively. In Questionnaire 1, 22% of respondents were students, and 54% were full-time employees; 32% indicated they used ChatGPT at least weekly. Analysis of our ChatGPT Acceptance Scale revealed two factors, Effectiveness and Concerns, which correlated positively and negatively, respectively, with ChatGPT use frequency. A specific aspect of Machiavellianism (manipulation tactics) was found to predict ChatGPT usage. Questionnaire 2 was a replication of Questionnaire 1, with 21% students and 54% full-time employees, of which 43% indicated using ChatGPT weekly. In Questionnaire 2, more extensive personality scales were used. We found a moderate correlation between Machiavellianism and ChatGPT usage (r = .22) and with an opportunistic attitude towards undisclosed use (r = .30), relationships that largely remained intact after controlling for gender, age, education level, and the respondents’ country. We conclude that covert use of ChatGPT is associated with darker personality traits, something that requires further attention.

  8. A comparative evaluation of ChatGPT 3.5 and ChatGPT 4 in responses to...

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Jun 4, 2024
    Cite
    Scott McGrath (2024). A comparative evaluation of ChatGPT 3.5 and ChatGPT 4 in responses to selected genetics questions - Full study data [Dataset]. http://doi.org/10.5061/dryad.s4mw6m9cv
    Available download formats: zip
    Dataset updated
    Jun 4, 2024
    Dataset provided by
    University of California, Berkeley
    Authors
    Scott McGrath
    License

    CC0 1.0: https://spdx.org/licenses/CC0-1.0.html

    Description

    Objective: Our objective is to evaluate the efficacy of ChatGPT 4 in accurately and effectively delivering genetic information, building on previous findings with ChatGPT 3.5. We focus on assessing the utility, limitations, and ethical implications of using ChatGPT in medical settings.

    Materials and Methods: A structured questionnaire, including the Brief User Survey (BUS-15) and custom questions, was developed to assess ChatGPT 4's clinical value. An expert panel of genetic counselors and clinical geneticists independently evaluated ChatGPT 4's responses to these questions. We also conducted a comparative analysis with ChatGPT 3.5, using descriptive statistics and R for data analysis.

    Results: ChatGPT 4 demonstrated improvements over 3.5 in context recognition, relevance, and informativeness. However, performance variability and concerns about the naturalness of the output were noted. No significant difference in accuracy was found between ChatGPT 3.5 and 4.0. Notably, the efficacy of ChatGPT 4 varied significantly across different genetic conditions, with specific differences identified between responses related to BRCA1 and HFE.

    Discussion and Conclusion: This study highlights ChatGPT 4's potential in genomics, noting significant advancements over its predecessor. Despite these improvements, challenges remain, including the risk of outdated information and the necessity of ongoing refinement. The variability in performance across different genetic conditions underscores the need for expert oversight and continuous AI training. ChatGPT 4, while showing promise, emphasizes the importance of balancing technological innovation with ethical responsibility in healthcare information delivery.

    Methods

    Study Design: This study was conducted to evaluate the performance of ChatGPT 4 (March 23rd, 2023 model) in the context of genetic counseling and education. The evaluation involved a structured questionnaire, which included questions selected from the Brief User Survey (BUS-15) and additional custom questions designed to assess the clinical value of ChatGPT 4's responses.

    Questionnaire Development: The questionnaire was built in Qualtrics and comprised twelve questions: seven selected from the BUS-15, preceded by two additional questions that we designed. The initial questions focused on quality and answer relevancy:
    1. The overall quality of the Chatbot’s response is: (5-point Likert: Very poor to Very good)
    2. The Chatbot delivered an answer that provided the relevant information you would include if asked the question. (5-point Likert: Strongly disagree to Strongly agree)
    The BUS-15 questions (7-point Likert: Strongly disagree to Strongly agree) focused on:
    1. Recognition and facilitation of users’ goal and intent: the chatbot seems able to recognize the user’s intent and guide the user to its goals.
    2. Relevance of information: the chatbot provides relevant and appropriate information/answers at each stage to bring people closer to their goal.
    3. Maxim of quantity: the chatbot responds in an informative way without adding too much information.
    4. Resilience to failure: the chatbot seems able to find ways to respond appropriately even when it encounters situations or arguments it is not equipped to handle.
    5. Understandability and politeness: the chatbot seems able to understand input and convey correct statements and answers without ambiguity and with acceptable manners.
    6. Perceived conversational credibility: the chatbot responds in a credible and informative way without adding too much information.
    7. Meeting neurodiverse needs: the chatbot seems able to meet users’ needs and be usable independently of their health conditions, well-being, age, etc.

    Expert Panel and Data Collection: A panel of experts (two genetic counselors and two clinical geneticists) was provided with a link to the survey containing the questions. They independently evaluated the responses from ChatGPT 4 without discussing the questions or answers among themselves until after the survey submission. This approach ensured unbiased evaluation.

  9. ChatGPT examples in the hydrological sciences

    • hydroshare.org
    • beta.hydroshare.org
    zip
    Updated Oct 9, 2023
    Cite
    Dylan Irvine (2023). ChatGPT examples in the hydrological sciences [Dataset]. http://doi.org/10.4211/hs.fc0552275ea14c7082218c42ebd63da6
    Available download formats: zip (1.3 MB)
    Dataset updated
    Oct 9, 2023
    Dataset provided by
    HydroShare
    Authors
    Dylan Irvine
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    WGS 84 EPSG:4326,
    Description

    ChatGPT has forever changed the way that many industries operate. Much of the focus on Artificial Intelligence (AI) tools has been on their ability to generate text. However, it is likely that their ability to generate computer code and scripts will also have a major impact. We demonstrate the use of ChatGPT to generate Python scripts to perform hydrological analyses and highlight the opportunities, limitations and risks that AI poses in the hydrological sciences.

    Here, we provide four worked examples of the use of ChatGPT to generate scripts to conduct hydrological analyses. We also provide a full list of the libraries available to the ChatGPT Advanced Data Analysis plugin (only available in the paid version). These files relate to a manuscript that is to be submitted to Hydrological Processes. The authors of the manuscript are Dylan J. Irvine, Landon J.S. Halloran and Philip Brunner.

    If you find these examples useful and/or use them, we would appreciate if you could cite the associated publication in Hydrological Processes. Details to be made available upon final publication.
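    For flavor, here is a minimal sketch of the kind of short hydrological script discussed (not one of the resource's four worked examples): a Darcy's-law calculation of specific discharge, q = -K * dh/dl, with illustrative values.

    ```python
    def darcy_flux(K, h1, h2, L):
        """Specific discharge q = -K * (h2 - h1) / L, for hydraulic
        conductivity K (m/s), heads h1 and h2 (m), and path length L (m)."""
        return -K * (h2 - h1) / L

    # Illustrative values: sandy aquifer, 0.5 m head drop over 100 m.
    q = darcy_flux(K=1e-4, h1=10.0, h2=9.5, L=100.0)  # 5e-7 m/s
    ```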

  10. Coding Index by Models Model

    • artificialanalysis.ai
    Updated Feb 15, 2025
    + more versions
    Cite
    Artificial Analysis (2025). Coding Index by Models Model [Dataset]. https://artificialanalysis.ai/models
    Dataset updated
    Feb 15, 2025
    Dataset authored and provided by
    Artificial Analysis
    Description

    Comparison of models on the coding component of the Artificial Analysis Intelligence Index, computed as the average of the coding benchmarks (LiveCodeBench & SciCode).

  11. Data_Sheet_1_The scholarly footprint of ChatGPT: a bibliometric analysis of...

    • frontiersin.figshare.com
    docx
    Updated Jan 5, 2024
    + more versions
    Cite
    Faiza Farhat; Emmanuel Sirimal Silva; Hossein Hassani; Dag Øivind Madsen; Shahab Saquib Sohail; Yassine Himeur; M. Afshar Alam; Aasim Zafar (2024). Data_Sheet_1_The scholarly footprint of ChatGPT: a bibliometric analysis of the early outbreak phase.docx [Dataset]. http://doi.org/10.3389/frai.2023.1270749.s001
    Available download formats: docx
    Dataset updated
    Jan 5, 2024
    Dataset provided by
    Frontiers
    Authors
    Faiza Farhat; Emmanuel Sirimal Silva; Hossein Hassani; Dag Øivind Madsen; Shahab Saquib Sohail; Yassine Himeur; M. Afshar Alam; Aasim Zafar
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This paper presents a comprehensive analysis of the scholarly footprint of ChatGPT, an AI language model, using bibliometric and scientometric methods. The study zooms in on the early outbreak phase from when ChatGPT was launched in November 2022 to early June 2023. It aims to understand the evolution of research output, citation patterns, collaborative networks, application domains, and future research directions related to ChatGPT. By retrieving data from the Scopus database, 533 relevant articles were identified for analysis. The findings reveal the prominent publication venues, influential authors, and countries contributing to ChatGPT research. Collaborative networks among researchers and institutions are visualized, highlighting patterns of co-authorship. The application domains of ChatGPT, such as customer support and content generation, are examined. Moreover, the study identifies emerging keywords and potential research areas for future exploration. The methodology employed includes data extraction, bibliometric analysis using various indicators, and visualization techniques such as Sankey diagrams. The analysis provides valuable insights into ChatGPT's early footprint in academia and offers researchers guidance for further advancements. This study stimulates discussions, collaborations, and innovations to enhance ChatGPT's capabilities and impact across domains.

  12. AI-Driven Mental Health Literacy - An Interventional Study from India (Data...

    • psycharchives.org
    Updated Oct 2, 2023
    + more versions
    Cite
    (2023). AI-Driven Mental Health Literacy - An Interventional Study from India (Data from main study).csv [Dataset]. https://psycharchives.org/handle/20.500.12034/8771
    Dataset updated
    Oct 2, 2023
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Area covered
    India
    Description

    The dataset is from an Indian study which used ChatGPT, a natural language processing model by OpenAI, to design a mental health literacy intervention for college students. Prompt engineering tactics were used to formulate prompts that acted as anchors in the conversations with the AI agent regarding mental health. An intervention lasting 20 days was designed, with sessions of 15-20 minutes on alternate days. Fifty-one students completed pre-test and post-test measures of mental health literacy, mental help-seeking attitude, stigma, mental health self-efficacy, positive and negative experiences, and flourishing in the main study, which were then analyzed using paired t-tests. The results suggest that the intervention is effective among college students, as statistically significant changes were noted in mental health literacy and mental health self-efficacy scores. The study affirms the practicality, acceptance, and initial indications of AI-driven methods in advancing mental health literacy and suggests the promising prospects of innovative platforms such as ChatGPT within the field of applied positive psychology. The file contains the data used in analysis for the intervention study.

  13. Intelligence vs. Context Window by Models Model

    • artificialanalysis.ai
    Updated Feb 15, 2025
    + more versions
    Cite
    Artificial Analysis (2025). Intelligence vs. Context Window by Models Model [Dataset]. https://artificialanalysis.ai/models
    Dataset updated
    Feb 15, 2025
    Dataset authored and provided by
    Artificial Analysis
    Description

    Comparison of Artificial Analysis Intelligence Index vs. Context Window (Tokens) by Model

  14. Replication Data for: Revisiting Weimar Film Reviewers’ Sentiments:...

    • dataverse.harvard.edu
    Updated Jun 27, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Isadora Campregher Paiva; Josephine Diecke (2024). Replication Data for: Revisiting Weimar Film Reviewers’ Sentiments: Integrating Lexicon-Based Sentiment Analysis with Large Language Models [Dataset]. http://doi.org/10.7910/DVN/8NINQK
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Jun 27, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Isadora Campregher Paiva; Josephine Diecke
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Weimar
    Description

    This dataset contains one Excel table (corpus_reviews.xlsx) related to 80 historical film reviews of three Weimar films: "Das Cabinet des Dr. Caligari" (1920), "Nosferatu" (1922) and "Metropolis" (1927). The table also includes metadata related to the origin of the reviews and their full text in their original languages and in English translation. Furthermore, it contains the results of a range of methods for sentiment analysis of the reviews, including manual judgments and different approaches to automated sentiment analysis. The Python code used to implement these is included. We first undertake a verbose sentiment analysis of the reviews by running the same prompt over each review through the OpenAI API (GPT_API_all_reviews.ipynb). A less successful attempt, showing the results of the same prompt using the open-source HuggingChat, is also included (huggingChat_API_reviews.ipynb). We then apply a lexicon-based sentiment analysis (with Python’s NLTK library and its VADER lexicon) to the result of ChatGPT’s analysis and to the reviews directly (sentiment_analysis.ipynb). We then compare the results (results_analysis.ipynb).
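    A toy illustration of the lexicon-based approach: the study used NLTK's VADER lexicon, which is far larger and also handles negation and intensifiers; the mini-lexicon below is invented for the sketch.

    ```python
    # Invented mini-lexicon: word -> valence, positive or negative.
    LEXICON = {"masterful": 2.0, "gripping": 1.5, "dull": -1.5, "failure": -2.0}

    def sentiment_score(text):
        """Mean valence of lexicon words found in the text; 0.0 if none."""
        hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
        return sum(hits) / len(hits) if hits else 0.0
    ```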

  15. Artificial Intelligence Adoption Prediction Model: Would ChatGPT-3.5 be...

    • data.mendeley.com
    Updated Feb 26, 2024
    Cite
    Christa van Staden (2024). Artificial Intelligence Adoption Prediction Model: Would ChatGPT-3.5 be adopted in English poetry classrooms? [Dataset]. http://doi.org/10.17632/289jtphg33.2
    Explore at:
    Dataset updated
    Feb 26, 2024
    Authors
    Christa van Staden
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This is version 2 of the dataset created and used to explore ChatGPT-3.5's ability to write, justify and analyse English poems. This version was created after the reviewers' decision that the paper could be published if certain changes were made.

    The purpose of the research was to determine whether ChatGPT-3.5 would be adopted in English poetry classrooms. As none of the existing theoretical models were applicable, the Artificial Intelligence Adoption Prediction Model (AIAPM) was designed. Based on this model, an Artificial Intelligence Adoption Prediction Tool (AIAPT) was developed to calculate an Adoption Prediction Score (APS). ChatGPT-3.5's ability to write, justify and analyse poems was then explored.

    It was found that ChatGPT-3.5 could write, justify, and analyse poems, but that it could also make errors and hallucinate convincingly. The AIAPT was therefore used to calculate the Adoption Prediction Score, which came to 9, meaning all factors of the AIAPM could drive the adoption decision. It could thus be predicted that ChatGPT-3.5 would be adopted in English poetry classrooms, both for ethical and unethical purposes. Based on the results, a few proactive strategies were suggested.

    This dataset contains all data created and used during the research, including the poems that were integrated into the paper "An Artificial Intelligence Adoption Prediction Model to determine if ChatGPT-3.5 would be adopted in English poetry classrooms", which was submitted to Heliyon for publication.

  16. a

    Intelligence vs. End-to-End Seconds to Output 100 Tokens by Models Model

    • artificialanalysis.ai
    Updated Feb 15, 2025
    + more versions
    Cite
    Artificial Analysis (2025). Intelligence vs. End-to-End Seconds to Output 100 Tokens by Models Model [Dataset]. https://artificialanalysis.ai/models
    Explore at:
    Dataset updated
    Feb 15, 2025
    Dataset authored and provided by
    Artificial Analysis
    Description

    Comparison of Artificial Analysis Intelligence Index vs. End-to-End Seconds to Output 100 Tokens by Model

  17. Protein Market Size & Share Analysis - Industry Research Report - Growth...

    • mordorintelligence.com
    pdf,excel,csv,ppt
    Updated Jul 19, 2018
    Cite
    Mordor Intelligence (2018). Protein Market Size & Share Analysis - Industry Research Report - Growth Trends [Dataset]. https://www.mordorintelligence.com/industry-reports/global-protein-market
    Explore at:
    pdf,excel,csv,pptAvailable download formats
    Dataset updated
    Jul 19, 2018
    Dataset authored and provided by
    Mordor Intelligence
    License

    https://www.mordorintelligence.com/privacy-policy

    Time period covered
    2017 - 2030
    Area covered
    Global
    Description

    The Protein Market is segmented by Source (Animal, Microbial, Plant), by End User (Animal Feed, Food and Beverages, Personal Care and Cosmetics, Supplements) and by Region (Africa, Asia-Pacific, Europe, Middle East, North America, South America). Market value in USD and market volume in tonnes are presented. Key data points observed include the market volume of end-user segments, per capita consumption, and raw material production.

  18. A

    AI Recruitment Market Report

    • promarketreports.com
    doc, pdf, ppt
    Updated Jan 17, 2025
    Cite
    Pro Market Reports (2025). AI Recruitment Market Report [Dataset]. https://www.promarketreports.com/reports/ai-recruitment-market-8275
    Explore at:
    doc, pdf, pptAvailable download formats
    Dataset updated
    Jan 17, 2025
    Dataset authored and provided by
    Pro Market Reports
    License

    https://www.promarketreports.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The AI Recruitment Market was valued at USD 617.5 million in 2023 and is projected to reach USD 985.10 million by 2032, at an expected CAGR of 6.9% over the forecast period. The AI recruitment market covers the integration of artificial intelligence technologies into recruitment, enabling organizations to manage candidate attraction, screening, and selection. AI recruitment tools rely on machine learning, natural language processing, and predictive analytics to automate workloads such as resume screening, interview scheduling, and candidate sourcing. Prominent features of AI recruitment include candidate-matching algorithms, chatbots for candidate engagement, and data-driven insights to optimize hiring decisions. The main technologies employed are AI-driven applicant tracking systems (ATS), sentiment analysis tools, and predictive analytics platforms. The impact of AI recruitment is substantial, as it markedly improves efficiency and reduces bias by accelerating the hiring process; its benefits include an enhanced candidate experience, reduced time-to-hire, and improved hiring quality. A major growth driver is organizations' increasing need for cost-effective, scalable talent acquisition solutions in a continually changing labor market. As businesses look to improve their recruitment outcomes, AI technologies offer competitive advantages through better decision-making and streamlined processes. Recent developments include: February 2023: Under its recently established 1000 Pioneers initiative, Quantgene is inviting past entrepreneurs and startup veterans to apply for a new position; in order to create a layer of revolutionary firms, 1000 Pioneers and Pioneerland are starting with the healthcare industry. February 2023: Employment in many industries may become obsolete due to ChatGPT and future AI bots; the unusually capable chatbot ChatGPT has been made available to the public as a free tool by a Microsoft-backed research facility. Key drivers for this market are: the need to manage software quality assurance for a better customer experience, and the need for a cost-effective software development process. Potential restraints include: concerns over data security and privacy.

  19. D

    Digital Healthcare Market Report

    • promarketreports.com
    doc, pdf, ppt
    Updated Dec 23, 2024
    Cite
    Pro Market Reports (2024). Digital Healthcare Market Report [Dataset]. https://www.promarketreports.com/reports/digital-healthcare-market-6512
    Explore at:
    ppt, doc, pdfAvailable download formats
    Dataset updated
    Dec 23, 2024
    Dataset authored and provided by
    Pro Market Reports
    License

    https://www.promarketreports.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The Digital Healthcare Market comprises a range of products, including:

    Tele-healthcare: remote healthcare services
    m-Health: mobile health applications
    Healthcare Analytics: data analysis for healthcare decision-making
    Digital Health System: integrated digital health platforms

    Recent developments include: April 2023: eClinicalWorks brought ChatGPT and AI models into its EHR by investing USD 100 million in Microsoft Azure cloud services. This significant investment provided eClinicalWorks with access to the most recent innovations available through the Microsoft Cloud; eClinicalWorks has integrated its EHR with ChatGPT, cognitive services, and machine learning models from the Azure OpenAI Service to improve its technology offerings. April 2023: Athenahealth unveiled the Athenahealth Patient Digital Engagement Index, a novel measurement tool for medical practices. The goal of the Index is to help providers evaluate and improve how they interact with and support their patients, so that both can move toward a more digital, high-tech experience that ultimately leads to better patient care. Key drivers for this market are: rising adoption of EHRs and EMRs, and growing government initiatives. Potential restraints include: the high cost of deploying digital health solutions, and privacy and security concerns. Notable trends are: rising adoption of EHRs and EMRs.

  20. f

    Accuracy of answers and Shannon entropy when asking ChatGPT in English and...

    • figshare.com
    xls
    Updated Nov 20, 2024
    Cite
    Jun Qiu; Youlian Zhou (2024). Accuracy of answers and Shannon entropy when asking ChatGPT in English and Chinese. [Dataset]. http://doi.org/10.1371/journal.pone.0311937.t001
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Nov 20, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Jun Qiu; Youlian Zhou
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Accuracy of answers and Shannon entropy when asking ChatGPT in English and Chinese.
