1 dataset found
  1. Data from 199 residents of the United States comparing reported trust in...

    • data.csiro.au
    Updated Jan 22, 2025
    Cite
    Melanie McGrath; Patrick Cooper; Andreas Duenser (2025). Data from 199 residents of the United States comparing reported trust in output generated by a large language model and output provided via a different form of artificial intelligence (January 2024) [Dataset]. http://doi.org/10.25919/9g2g-ws49
    Dataset updated
    Jan 22, 2025
    Dataset provided by
    CSIRO (http://www.csiro.au/)
    Authors
    Melanie McGrath; Patrick Cooper; Andreas Duenser
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Time period covered
    Jan 31, 2024
    Dataset funded by
    CSIRO (http://www.csiro.au/)
    Description

    This data was collected in an experiment designed to establish whether trust in large language models (LLMs) may be inflated relative to other forms of artificial intelligence, with a particular focus on the content and form of the natural language used. One hundred and ninety-nine residents of the United States were recruited online and presented with a series of general knowledge questions. For each question they also received a recommendation from either an LLM or a non-LLM AI assistant, and the accuracy of this recommendation was varied. All data are deidentified, with no missing values. The deidentified data may be used by researchers to verify published results or to advance other research on this topic. Lineage: Data were collected on the Qualtrics survey platform from participants recruited via the online platform Prolific.

