100+ datasets found
  1. Large Language Models Comparison Dataset

    • kaggle.com
    zip
    Updated Feb 24, 2025
    Cite
    Samay Ashar (2025). Large Language Models Comparison Dataset [Dataset]. https://www.kaggle.com/datasets/samayashar/large-language-models-comparison-dataset
    Explore at:
    Available download formats: zip (5894 bytes)
    Dataset updated
    Feb 24, 2025
    Authors
    Samay Ashar
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This dataset provides a comparison of various Large Language Models (LLMs) based on their performance, cost, and efficiency. It includes important details like speed, latency, benchmarks, and pricing, helping users understand how different models stack up against each other.

    Key Details:

    • File Name: llm_comparison_dataset.csv
    • Size: 14.57 kB
    • Total Columns: 15
    • License: CC0 (Public Domain)

    What’s Inside?

    Here are some of the key metrics included in the dataset:

    1. Context Window: Maximum number of tokens the model can process at once.
    2. Speed (tokens/sec): How fast the model generates responses.
    3. Latency (sec): Time delay before the model responds.
    4. Benchmark Scores: Performance ratings from MMLU (academic tasks) and Chatbot Arena (real-world chatbot performance).
    5. Open-Source: Indicates if the model is publicly available or proprietary.
    6. Price per Million Tokens: The cost of using the model for one million tokens.
    7. Training Dataset Size: Amount of data used to train the model.
    8. Compute Power: Resources needed to run the model.
    9. Energy Efficiency: How much power the model consumes.

    This dataset is useful for researchers, developers, and AI enthusiasts who want to compare LLMs and choose the best one based on their needs.
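As a quick sanity check, the table can be loaded with pandas and ranked by any of the metrics above. This is a minimal sketch under assumptions: the column names mirror the metric list but are hypothetical (verify against the real header in llm_comparison_dataset.csv), and a tiny inline sample stands in for the file.

```python
import io

import pandas as pd

# Hypothetical sketch: column names are assumptions, not confirmed by the
# dataset card; check llm_comparison_dataset.csv's actual header first.
sample = io.StringIO(
    "Model,Speed (tokens/sec),Latency (sec),Price per Million Tokens\n"
    "model_a,95,0.40,2.50\n"
    "model_b,140,0.25,8.00\n"
    "model_c,60,0.80,0.90\n"
)
df = pd.read_csv(sample)  # swap in "llm_comparison_dataset.csv" for the real file

# Rank models by generation speed to shortlist low-latency options.
fastest = df.sort_values("Speed (tokens/sec)", ascending=False)
print(fastest[["Model", "Price per Million Tokens"]].head())
```

The same `sort_values` call works for any numeric column in the file, e.g. latency or price.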

    📌 If you find this dataset useful, please give it an upvote :)

  2. Large Language Model (LLM) Comparisons

    • kaggle.com
    zip
    Updated Aug 20, 2023
    Cite
    Dylan Karmin (2023). Large Language Model (LLM) Comparisons [Dataset]. https://www.kaggle.com/datasets/dylankarmin/llm-datasets-comparison
    Explore at:
    Available download formats: zip (2596 bytes)
    Dataset updated
    Aug 20, 2023
    Authors
    Dylan Karmin
    Description

    This dataset covers popular large language models (LLMs) used for deep learning and AI training. Because these LLMs differ in their uses and underlying data, I decided to summarize and share information about each one. Please credit the creators or maintainers of the LLMs if you use them for any purpose.

  3. Notable AI Models

    • epoch.ai
    csv
    Updated Aug 15, 2025
    Cite
    Epoch AI (2025). Notable AI Models [Dataset]. https://epoch.ai/data/ai-models
    Explore at:
    Available download formats: csv
    Dataset updated
    Aug 15, 2025
    Dataset authored and provided by
    Epoch AI
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Global
    Variables measured
    https://epoch.ai/data/ai-models-documentation#records
    Measurement technique
    https://epoch.ai/data/ai-models-documentation#records
    Description

    Our most comprehensive database of AI models, containing over 800 models that are state of the art, highly cited, or otherwise historically notable. It tracks key factors driving machine learning progress and includes over 300 training compute estimates.

  4. Open-Source LLM Market Analysis, Size, and Forecast 2025-2029: North America...

    • technavio.com
    pdf
    Updated Jul 10, 2025
    Cite
    Technavio (2025). Open-Source LLM Market Analysis, Size, and Forecast 2025-2029: North America (US, Canada, and Mexico), Europe (France, Germany, and UK), APAC (China, India, Japan, and South Korea), and Rest of World (ROW) [Dataset]. https://www.technavio.com/report/open-source-llm-market-industry-analysis
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jul 10, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    License

    https://www.technavio.com/content/privacy-notice

    Time period covered
    2025 - 2029
    Area covered
    Germany, Canada, United States, United Kingdom
    Description


    Open-Source LLM Market Size 2025-2029

    The open-source LLM market size is valued to increase by USD 54 billion, at a CAGR of 33.7% from 2024 to 2029. Increasing democratization and compelling economics will drive the open-source LLM market.

    Market Insights

    North America dominated the market and accounted for 37% of growth during the 2025-2029 forecast period.
    By Application - Technology and software segment was valued at USD 4.02 billion in 2023
    By Deployment - On-premises segment accounted for the largest market revenue share in 2023
    

    Market Size & Forecast

    Market Opportunities: USD 575.60 million 
    Market Future Opportunities 2024: USD 53,995.50 million
    CAGR from 2024 to 2029: 33.7%
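The headline figures above can be sanity-checked with the standard compound-growth formula. Note the implied 2024 base size below is an inference from the stated CAGR and increase, not a number taken from the report.

```python
# Compound-growth sanity check: at a 33.7% CAGR the market multiplies by
# (1 + 0.337)^5 over the five-year 2024-2029 window.
cagr = 0.337
years = 5
multiplier = (1 + cagr) ** years
print(f"5-year growth multiplier: {multiplier:.2f}x")  # ~4.27x

# Given a projected absolute increase of USD 54 billion, the implied 2024
# base size is increase / (multiplier - 1). This is our inference only.
increase_usd_bn = 54.0
implied_base = increase_usd_bn / (multiplier - 1)
print(f"Implied 2024 base: USD {implied_base:.1f} billion")
```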
    

    Market Summary

    The Open-Source Large Language Model (LLM) market has experienced significant growth due to the increasing democratization of artificial intelligence (AI) technology and its compelling economics. This global trend is driven by the proliferation of smaller organizations seeking to leverage advanced language models for various applications, including supply chain optimization, compliance, and operational efficiency. Open-source LLMs offer several advantages over proprietary models. They provide greater flexibility, as users can modify and adapt the models to their specific needs. Additionally, open-source models often have larger training datasets, leading to improved performance and accuracy. However, there are challenges to implementing open-source LLMs, such as the prohibitive computational costs and critical hardware dependency. These obstacles necessitate the development of more efficient algorithms and the exploration of cloud computing solutions.
    A real-world business scenario illustrates the potential benefits of open-source LLMs. A manufacturing company aims to optimize its supply chain by implementing an AI-powered system to analyze customer demand patterns and predict inventory needs. The company chooses an open-source LLM due to its flexibility and cost-effectiveness. By integrating the LLM into its supply chain management system, the company can improve forecasting accuracy and reduce inventory costs, ultimately increasing operational efficiency and customer satisfaction. Despite the challenges, the market continues to grow as organizations recognize the potential benefits of advanced language models. The democratization of AI technology and the compelling economics of open-source solutions make them an attractive option for businesses of all sizes.
    

    What will be the size of the Open-Source LLM Market during the forecast period?


    The Open-Source Large Language Model (LLM) Market continues to evolve, offering businesses innovative solutions for various applications. One notable trend is the increasing adoption of explainable AI (XAI) methods in LLMs. XAI models provide transparency into the reasoning behind their outputs, addressing concerns around bias mitigation and interpretability. This transparency is crucial for industries with stringent compliance requirements, such as finance and healthcare. For instance, a recent study reveals that companies implementing XAI models have achieved a 25% increase in model acceptance rates among stakeholders, leading to more informed decisions. This improvement can significantly impact product strategy and budgeting, as businesses can confidently invest in AI solutions that align with their ethical and regulatory standards.
    Moreover, advancements in LLM architecture include encoder-decoder architectures, multi-head attention, and self-attention layers, which enhance feature extraction and model scalability. These improvements contribute to better performance and more accurate results, making LLMs an essential tool for businesses seeking to optimize their operations and gain a competitive edge. In summary, the market is characterized by continuous innovation and a strong focus on delivering human-centric solutions. The adoption of explainable AI methods and advancements in neural network architecture are just a few examples of how businesses can benefit from these technologies. By investing in Open-Source LLMs, organizations can improve efficiency, enhance decision-making, and maintain a responsible approach to AI implementation.
    

    Unpacking the Open-Source LLM Market Landscape

    In the dynamic landscape of large language models (LLMs), open-source solutions have gained significant traction, offering businesses competitive advantages through data augmentation and few-shot learning capabilities. Compared to traditional models, open-source LLMs enable a 30% reduction in optimizer selection time and a 25% improvement in model accuracy for summarization tasks. Furthermore, distributed training and model compression techniques allow businesses to process larger training dataset sizes with minimal tokenization process disruptions, result

  5. Improving Bayesian Local Spatial Models in Large Datasets

    • datasetcatalog.nlm.nih.gov
    • tandf.figshare.com
    Updated Oct 15, 2020
    Cite
    Castruccio, Stefano; Genton, Marc G.; Rue, Håvard; Lenzi, Amanda (2020). Improving Bayesian Local Spatial Models in Large Datasets [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000531733
    Explore at:
    Dataset updated
    Oct 15, 2020
    Authors
    Castruccio, Stefano; Genton, Marc G.; Rue, Håvard; Lenzi, Amanda
    Description

    Environmental processes resolved at a sufficiently small scale in space and time inevitably display nonstationary behavior. Such processes are both challenging to model and computationally expensive when the data size is large. Instead of modeling the global non-stationarity explicitly, local models can be applied to disjoint regions of the domain. The choice of the size of these regions is dictated by a bias-variance trade-off; large regions will have smaller variance and larger bias, whereas small regions will have higher variance and smaller bias. From both the modeling and computational point of view, small regions are preferable to better accommodate the non-stationarity. However, in practice, large regions are necessary to control the variance. We propose a novel Bayesian three-step approach that allows for smaller regions without compromising the increase of the variance that would follow. We are able to propagate the uncertainty from one step to the next without issues caused by reusing the data. The improvement in inference also results in improved prediction, as our simulated example shows. We illustrate this new approach on a dataset of simulated high-resolution wind speed data over Saudi Arabia. Supplemental files for this article are available online.

  6. Foundation Model Data Collection and Data Annotation | Large Language...

    • datarade.ai
    Updated Jan 25, 2024
    Cite
    Nexdata (2024). Foundation Model Data Collection and Data Annotation | Large Language Model(LLM) Data | SFT Data| Red Teaming Services [Dataset]. https://datarade.ai/data-products/nexdata-foundation-model-data-solutions-llm-sft-rhlf-nexdata
    Explore at:
    Available download formats: .bin, .json, .xml, .csv, .xls, .sql, .txt
    Dataset updated
    Jan 25, 2024
    Dataset authored and provided by
    Nexdata
    Area covered
    Ireland, Czech Republic, Taiwan, Portugal, Malta, Azerbaijan, El Salvador, Kyrgyzstan, Spain, Russian Federation
    Description
    1. Overview

    -Unsupervised Learning: For the training data required in unsupervised learning, Nexdata delivers data collection and cleaning services for both single-modal and cross-modal data. We provide Large Language Model (LLM) data cleaning and personnel support services based on the specific data types and characteristics of the client's domain.

    -SFT: Nexdata assists clients in generating high-quality supervised fine-tuning data for model optimization through prompts and outputs annotation.

    -Red teaming: Nexdata helps clients train and validate models by drafting various adversarial attacks, such as exploratory or potentially harmful questions. Our red team capabilities help clients identify problems in their models related to hallucinations, harmful content, false information, discrimination, language bias, etc.

    -RLHF: Nexdata assists clients in manually ranking multiple outputs generated by the SFT-trained model according to the rules provided by the client, or provides multi-factor scoring. By training annotators to align with values and utilizing a multi-person fitting approach, the quality of feedback can be improved.

    2. Our Capacity

    -Global Resources: Global resources covering hundreds of languages worldwide

    -Compliance: All Large Language Model (LLM) data is collected with proper authorization

    -Quality: Multiple rounds of quality inspections ensure high-quality data output

    -Secure Implementation: An NDA is signed to guarantee secure implementation, and data is destroyed upon delivery.

    -Efficiency: Our platform supports human-machine interaction and semi-automatic labeling, increasing labeling efficiency by more than 30% per annotator. It has successfully been applied to nearly 5,000 projects.

    3. About Nexdata

    Nexdata is equipped with professional data collection devices, tools and environments, as well as experienced project managers in data collection and quality control, so that we can meet Large Language Model (LLM) data collection requirements in various scenarios and types. We have global data processing centers and more than 20,000 professional annotators, supporting on-demand Large Language Model (LLM) data annotation services, such as speech, image, video, point cloud and Natural Language Processing (NLP) data, etc. Please visit us at https://www.nexdata.ai/?source=Datarade

  7. Large Language Models (LLMs) Statistics and Facts

    • market.biz
    Updated Oct 9, 2025
    Cite
    Market.biz (2025). Large Language Models (LLMs) Statistics and Facts [Dataset]. https://market.biz/large-language-models-llms-statistics/
    Explore at:
    Dataset updated
    Oct 9, 2025
    Dataset provided by
    Market.biz
    License

    https://market.biz/privacy-policy

    Time period covered
    2022 - 2032
    Area covered
    South America, Australia, North America, ASIA, Europe, Africa
    Description

    Introduction

    Large Language Models (LLMs) are sophisticated AI systems built on deep learning, particularly transformer-based architectures, designed to analyze and generate human-like text by identifying statistical patterns within vast datasets. Their core functionality is grounded in probability distributions, enabling precise language prediction and contextual comprehension.

    Performance and efficiency are largely determined by metrics such as perplexity, cross-entropy loss, dataset scale, and the number of parameters. With some models incorporating hundreds of billions of parameters, LLMs require immense computational resources and advanced optimization strategies, while statistical benchmarks remain central for evaluating accuracy and coherence.
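As a concrete illustration of the perplexity and cross-entropy metrics mentioned above (our own sketch, not code from the source): perplexity is simply the exponential of the average per-token negative log-likelihood.

```python
import math

def perplexity(token_probs):
    """Perplexity from a model's probabilities for the observed tokens:
    exp of the mean negative log-likelihood (the cross-entropy)."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model assigning uniform probability 1/4 to each observed token has
# perplexity 4: it is as uncertain as a fair 4-way choice at every step.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```

Lower perplexity means the model assigns higher probability to the observed text, which is why it serves as a standard statistical benchmark for coherence.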

    Beyond the technical scope, LLM statistics also capture adoption trends, enterprise integration, and productivity impacts, with growing attention directed toward transparency, fairness, and bias assessment to ensure responsible AI development.

  8. Data from: Large Language Model

    • zenodo.org
    application/gzip
    Updated Jan 24, 2020
    Cite
    Gregory Diamos; Mostofa (2020). Large Language Model [Dataset]. http://doi.org/10.5281/zenodo.1492880
    Explore at:
    Available download formats: application/gzip
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Gregory Diamos; Mostofa
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    One of the large language models trained in this paper: https://arxiv.org/abs/1810.10045

  9. The Convergence of High Performance Computing, Big Data, and Machine...

    • catalog.data.gov
    • s.cnmilf.com
    Updated May 14, 2025
    + more versions
    Cite
    NCO NITRD (2025). The Convergence of High Performance Computing, Big Data, and Machine Learning: Summary of the Big Data and High End Computing Interagency Working Groups Joint Workshop [Dataset]. https://catalog.data.gov/dataset/the-convergence-of-high-performance-computing-big-data-and-machine-learning-summary-of-the
    Explore at:
    Dataset updated
    May 14, 2025
    Dataset provided by
    NCO NITRD
    Description

    The high performance computing (HPC) and big data (BD) communities traditionally have pursued independent trajectories in the world of computational science. HPC has been synonymous with modeling and simulation, and BD with ingesting and analyzing data from diverse sources, including from simulations. However, both communities are evolving in response to changing user needs and technological landscapes. Researchers are increasingly using machine learning (ML) not only for data analytics but also for modeling and simulation; science-based simulations are increasingly relying on embedded ML models not only to interpret results from massive data outputs but also to steer computations. Science-based models are being combined with data-driven models to represent complex systems and phenomena. There also is an increasing need for real-time data analytics, which requires large-scale computations to be performed closer to the data and data infrastructures, to adapt to HPC-like modes of operation. These new use cases create a vital need for HPC and BD systems to deal with simulations and data analytics in a more unified fashion. To explore this need, the NITRD Big Data and High-End Computing R&D Interagency Working Groups held a workshop, The Convergence of High-Performance Computing, Big Data, and Machine Learning, on October 29-30, 2018, in Bethesda, Maryland. The purposes of the workshop were to bring together representatives from the public, private, and academic sectors to share their knowledge and insights on integrating HPC, BD, and ML systems and approaches and to identify key research challenges and opportunities. The 58 workshop participants represented a balanced cross-section of stakeholders involved in or impacted by this area of research. Additional workshop information, including a webcast, is available at https://www.nitrd.gov/nitrdgroups/index.php?title=HPC-BD-Convergence.

  10. Data from: Supplemental material for: Software System Testing assisted by...

    • zenodo.org
    • portalinvestigacion.uniovi.es
    • +1 more
    zip
    Updated Jul 22, 2025
    Cite
    Cristian Augusto; Jesús Morán Barbón; Antonia Bertolino; Claudio de la Riva Alvarez; Javier Tuya (2025). Supplemental material for: Software System Testing assisted by Large Language Models: An Exploratory Study [Dataset]. http://doi.org/10.5281/zenodo.13761150
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 22, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Cristian Augusto; Jesús Morán Barbón; Antonia Bertolino; Claudio de la Riva Alvarez; Javier Tuya
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is the supplemental material of the paper titled “Software System Testing Assisted by Large Language Models: An Exploratory Study”, presented at the 36th International Conference on Testing Software and Systems.

    It contains the raw execution data generated by both models, GPT-4o and GPT-4o mini, during the exploratory study. The supplementary material includes the following files:

    • GPT-4ominiRQ1-2ExecutionData.zip: contains the JSON outputs from the OpenAI API for the GPT-4o mini model. Each output is labeled according to the research question number and the corresponding timestamp (for RQ1) or the requested test case (for RQ2), all provided in plain text format.
    • GPT-4oRQ1-2ExecutionData.zip: contains the JSON outputs from the OpenAI API for the GPT-4o model. Like the previous file, each output is named in plain text format based on the research question number and timestamp (for RQ1) or the requested test case (for RQ2).
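A hedged sketch of reading every JSON output from one of these archives in one pass. The internal file layout and naming pattern are assumptions here, so inspect the zip's contents before relying on any specific member names.

```python
import json
import zipfile

def load_outputs(archive_path):
    """Map each member file name in the zip to its parsed JSON payload.

    Assumes every non-directory member is a JSON document, as the
    description above suggests; adjust if the real archives differ.
    """
    outputs = {}
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            if name.endswith("/"):  # skip directory entries
                continue
            with zf.open(name) as fh:
                outputs[name] = json.load(fh)
    return outputs

# Usage: load_outputs("GPT-4oRQ1-2ExecutionData.zip") would return a dict
# keyed by member name, e.g. grouping keys by RQ prefix afterwards.
```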

    To cite this work:

    C. Augusto, J. Morán, A. Bertolino, C. de la Riva and J. Tuya, “Software System Testing assisted by Large Language Models: An Exploratory Study”, in Testing Software and Systems (pp. 239–255). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-80889-0_17

  11. Big Data Market Analysis, Size, and Forecast 2025-2029: North America (US...

    • technavio.com
    pdf
    Updated Jun 7, 2025
    Cite
    Technavio (2025). Big Data Market Analysis, Size, and Forecast 2025-2029: North America (US and Canada), Europe (France, Germany, and UK), APAC (Australia, China, India, Japan, and South Korea), and Rest of World (ROW) [Dataset]. https://www.technavio.com/report/big-data-market-industry-analysis
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 7, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    License

    https://www.technavio.com/content/privacy-notice

    Time period covered
    2025 - 2029
    Description


    Big Data Market Size 2025-2029

    The big data market size is valued to increase by USD 193.2 billion, at a CAGR of 13.3% from 2024 to 2029. A surge in data generation will drive the big data market.

    Major Market Trends & Insights

    APAC dominated the market and accounted for 36% of growth during the forecast period.
    By Deployment - On-premises segment was valued at USD 55.30 billion in 2023
    By Type - Services segment accounted for the largest market revenue share in 2023
    

    Market Size & Forecast

    Market Opportunities: USD 193.04 billion
    Market Future Opportunities: USD 193.20 billion
    CAGR from 2024 to 2029: 13.3%
    

    Market Summary

    In the dynamic realm of business intelligence, the market continues to expand at an unprecedented pace. According to recent estimates, this market is projected to reach a value of USD 274.3 billion by 2022, underscoring its significant impact on modern industries. This growth is driven by several factors, including the increasing volume, variety, and velocity of data generation. Moreover, the adoption of advanced technologies, such as machine learning and artificial intelligence, is enabling businesses to derive valuable insights from their data. Another key trend is the integration of blockchain solutions into big data implementation, enhancing data security and trust.
    However, this rapid expansion also presents challenges, such as ensuring data privacy and security, managing data complexity, and addressing the skills gap. Despite these challenges, the future of the market looks promising, with continued innovation and investment in data analytics and management solutions. As businesses increasingly rely on data to drive decision-making and gain a competitive edge, the importance of effective big data strategies will only grow.
    

    What will be the Size of the Big Data Market during the forecast period?


    How is the Big Data Market Segmented?

    The big data industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD billion' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.

    Deployment
    
      On-premises
      Cloud-based
      Hybrid
    
    
    Type
    
      Services
      Software
    
    
    End-user
    
      BFSI
      Healthcare
      Retail and e-commerce
      IT and telecom
      Others
    
    
    Geography
    
      North America
    
        US
        Canada
    
    
      Europe
    
        France
        Germany
        UK
    
    
      APAC
    
        Australia
        China
        India
        Japan
        South Korea
    
    
      Rest of World (ROW)
    

    By Deployment Insights

    The on-premises segment is estimated to witness significant growth during the forecast period.

    In the ever-evolving landscape of data management, the market continues to expand with innovative technologies and solutions. On-premises big data software deployment, a popular choice for many organizations, offers control over hardware and software functions. Despite the high upfront costs for hardware purchases, it eliminates recurring monthly payments, making it a cost-effective alternative for some. However, cloud-based deployment, with its ease of access and flexibility, is increasingly popular, particularly for businesses dealing with high-velocity data ingestion. Cloud deployment, while convenient, comes with its own challenges, such as potential security breaches and the need for companies to manage their servers.

    On-premises solutions, on the other hand, provide enhanced security and control, but require significant capital expenditure. Advanced analytics platforms, such as those employing deep learning models, parallel processing, and machine learning algorithms, are transforming data processing and analysis. Metadata management, data lineage tracking, and data versioning control are crucial components of these solutions, ensuring data accuracy and reliability. Data integration platforms, including IoT data integration and ETL process optimization, are essential for seamless data flow between systems. Real-time analytics, data visualization tools, and business intelligence dashboards enable organizations to make data-driven decisions. Data encryption methods, distributed computing, and data lake architectures further enhance data security and scalability.


    The On-premises segment was valued at USD 55.30 billion in 2019 and showed a gradual increase during the forecast period.

    With the integration of AI-powered insights, natural language processing, and predictive modeling, businesses can unlock valuable insights from their data, improving operational efficiency and driving growth. A recent study reveals that the market is projected to reach USD 274.3 billion by 2022, underscoring its growing importance in today's data-driven economy. This continuous evolution of big data technologies and solutions underscores the need for robust data governa

  12. Data from: A Toolbox for Surfacing Health Equity Harms and Biases in Large...

    • springernature.figshare.com
    application/csv
    Updated Sep 24, 2024
    Cite
    Stephen R. Pfohl; Heather Cole-Lewis; Rory Sayres; Darlene Neal; Mercy Asiedu; Awa Dieng; Nenad Tomasev; Qazi Mamunur Rashid; Shekoofeh Azizi; Negar Rostamzadeh; Liam G. McCoy; Leo Anthony Celi; Yun Liu; Mike Schaekermann; Alanna Walton; Alicia Parrish; Chirag Nagpal; Preeti Singh; Akeiylah Dewitt; Philip Mansfield; Sushant Prakash; Katherine Heller; Alan Karthikesalingam; Christopher Semturs; Joëlle K. Barral; Greg Corrado; Yossi Matias; Jamila Smith-Loud; Ivor B. Horn; Karan Singhal (2024). A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models [Dataset]. http://doi.org/10.6084/m9.figshare.26133973.v1
    Explore at:
    Available download formats: application/csv
    Dataset updated
    Sep 24, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Stephen R. Pfohl; Heather Cole-Lewis; Rory Sayres; Darlene Neal; Mercy Asiedu; Awa Dieng; Nenad Tomasev; Qazi Mamunur Rashid; Shekoofeh Azizi; Negar Rostamzadeh; Liam G. McCoy; Leo Anthony Celi; Yun Liu; Mike Schaekermann; Alanna Walton; Alicia Parrish; Chirag Nagpal; Preeti Singh; Akeiylah Dewitt; Philip Mansfield; Sushant Prakash; Katherine Heller; Alan Karthikesalingam; Christopher Semturs; Joëlle K. Barral; Greg Corrado; Yossi Matias; Jamila Smith-Loud; Ivor B. Horn; Karan Singhal
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Supplementary material and data for Pfohl and Cole-Lewis et al., "A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models" (2024).

    We include the sets of adversarial questions for each of the seven EquityMedQA datasets (OMAQ, EHAI, FBRT-Manual, FBRT-LLM, TRINDS, CC-Manual, and CC-LLM), the three other non-EquityMedQA datasets used in this work (HealthSearchQA, Mixed MMQA-OMAQ, and Omiye et al.), as well as the data generated as a part of the empirical study, including the generated model outputs (Med-PaLM 2 [1] primarily, with Med-PaLM [2] answers for pairwise analyses) and ratings from human annotators (physicians, health equity experts, and consumers). See the paper for details on all datasets.

    We include other datasets evaluated in this work: HealthSearchQA [2], Mixed MMQA-OMAQ, and Omiye et al [3].

    • Mixed MMQA-OMAQ is composed of the 140-question subset of MultiMedQA questions described in [1,2] with an additional 100 questions from OMAQ (described below). The 140 MultiMedQA questions are composed of 100 from HealthSearchQA, 20 from LiveQA [4], and 20 from MedicationQA [5]. In the data presented here, we do not reproduce the text of the questions from LiveQA and MedicationQA. For LiveQA, we instead use identifiers that correspond to those presented in the original dataset. For MedicationQA, we designate "MedicationQA_N" to refer to the N-th row of MedicationQA (0-indexed).

    A limited number of data elements described in the paper are not included here. The following elements are excluded:

    1. The reference answers written by physicians to HealthSearchQA questions, introduced in [2], and the set of corresponding pairwise ratings. This accounts for 2,122 rated instances.

    2. The free-text comments written by raters during the ratings process.

    3. Demographic information associated with the consumer raters (only age group information is included).

    References

    1. Singhal, K., et al. Towards expert-level medical question answering with large language models. arXiv preprint arXiv:2305.09617 (2023).

    2. Singhal, K., Azizi, S., Tu, T. et al. Large language models encode clinical knowledge. Nature 620, 172–180 (2023). https://doi.org/10.1038/s41586-023-06291-2

    3. Omiye, J.A., Lester, J.C., Spichak, S. et al. Large language models propagate race-based medicine. npj Digit. Med. 6, 195 (2023). https://doi.org/10.1038/s41746-023-00939-z

    4. Abacha, Asma Ben, et al. "Overview of the medical question answering task at TREC 2017 LiveQA." TREC. 2017.

    5. Abacha, Asma Ben, et al. "Bridging the gap between consumers’ medication questions and trusted answers." MEDINFO 2019: Health and Wellbeing e-Networks for All. IOS Press, 2019. 25-29.

    Description of files and sheets

    1. Independent Ratings [ratings_independent.csv]: Contains ratings of the presence of bias and its dimensions in Med-PaLM 2 outputs using the independent assessment rubric for each of the datasets studied. The primary response regarding the presence of bias is encoded in the column bias_presence with three possible values (No bias, Minor bias, Severe bias). Binary assessments of the dimensions of bias are encoded in separate columns (e.g., inaccuracy_for_some_axes). Instances for the Mixed MMQA-OMAQ dataset are triple-rated for each rater group; other datasets are single-rated. Ratings were missing for five instances in Mixed MMQA-OMAQ and two instances in CC-Manual. This file contains 7,519 rated instances.

    2. Paired Ratings [ratings_pairwise.csv]: Contains comparisons of the presence or degree of bias and its dimensions in Med-PaLM and Med-PaLM 2 outputs for each of the datasets studied. Pairwise responses are encoded in terms of two binary columns corresponding to which of the answers was judged to contain a greater degree of bias (e.g., Med-PaLM-2_answer_more_bias). Dimensions of bias are encoded in the same way as for ratings_independent.csv. Instances for the Mixed MMQA-OMAQ dataset are triple-rated for each rater group; other datasets are single-rated. Four ratings were missing (one for EHAI, two for FBRT-Manual, one for FBRT-LLM). This file contains 6,446 rated instances.

    3. Counterfactual Paired Ratings [ratings_counterfactual.csv]: Contains ratings under the counterfactual rubric for pairs of questions defined in the CC-Manual and CC-LLM datasets. Contains a binary assessment of the presence of bias (bias_presence), columns for each dimension of bias, and categorical columns corresponding to other elements of the rubric (ideal_answers_diff, how_answers_diff). Instances for the CC-Manual dataset are triple-rated; instances for CC-LLM are single-rated. Due to a data processing error, we removed questions that refer to "Natal" from the analysis of the counterfactual rubric on the CC-Manual dataset. This affects three questions (corresponding to 21 pairs) derived from one seed question based on the TRINDS dataset. This file contains 1,012 rated instances.
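    A minimal sketch of tallying ratings encoded as described above, using toy rows invented for illustration (only the bias_presence values and the dimension-column naming come from the description; everything else is an assumption):

```python
from collections import Counter

# Toy rows mimicking the described schema: bias_presence takes one of
# three values, and dimensions of bias are binary columns such as
# inaccuracy_for_some_axes. The rows themselves are invented.
toy_rows = [
    {"bias_presence": "No bias", "inaccuracy_for_some_axes": 0},
    {"bias_presence": "No bias", "inaccuracy_for_some_axes": 0},
    {"bias_presence": "Minor bias", "inaccuracy_for_some_axes": 1},
    {"bias_presence": "Severe bias", "inaccuracy_for_some_axes": 1},
]

# Distribution of the primary bias judgement.
presence = Counter(r["bias_presence"] for r in toy_rows)

# Fraction of rated instances flagged on one dimension of bias.
flagged_rate = sum(r["inaccuracy_for_some_axes"] for r in toy_rows) / len(toy_rows)

print(presence)      # Counter({'No bias': 2, 'Minor bias': 1, 'Severe bias': 1})
print(flagged_rate)  # 0.5
```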

    4. Open-ended Medical Adversarial Queries (OMAQ) [equitymedqa_omaq.csv]: Contains questions that compose the OMAQ dataset. The OMAQ dataset was first described in [1].

    5. Equity in Health AI (EHAI) [equitymedqa_ehai.csv]: Contains questions that compose the EHAI dataset.

    6. Failure-Based Red Teaming - Manual (FBRT-Manual) [equitymedqa_fbrt_manual.csv]: Contains questions that compose the FBRT-Manual dataset.

    7. Failure-Based Red Teaming - LLM (FBRT-LLM); full [equitymedqa_fbrt_llm.csv]: Contains questions that compose the extended FBRT-LLM dataset.

    8. Failure-Based Red Teaming - LLM (FBRT-LLM) [equitymedqa_fbrt_llm_661_sampled.csv]: Contains questions that compose the sampled FBRT-LLM dataset used in the empirical study.

    9. TRopical and INfectious DiseaseS (TRINDS) [equitymedqa_trinds.csv]: Contains questions that compose the TRINDS dataset.

    10. Counterfactual Context - Manual (CC-Manual) [equitymedqa_cc_manual.csv]: Contains pairs of questions that compose the CC-Manual dataset.

    11. Counterfactual Context - LLM (CC-LLM) [equitymedqa_cc_llm.csv]: Contains pairs of questions that compose the CC-LLM dataset.

    12. HealthSearchQA [other_datasets_healthsearchqa.csv]: Contains questions sampled from the HealthSearchQA dataset [1,2].

    13. Mixed MMQA-OMAQ [other_datasets_mixed_mmqa_omaq]: Contains questions that compose the Mixed MMQA-OMAQ dataset.

    14. Omiye et al. [other_datasets_omiye_et_al]: Contains questions proposed in Omiye et al. [3].

    Version history

    Version 2: Updated to include ratings and generated model outputs. Dataset files were updated to include unique ids associated with each question.

    Version 1: Contained datasets of questions without ratings. Consistent with v1 of the preprint available on arXiv (https://arxiv.org/abs/2403.12025).

    WARNING: These datasets contain adversarial questions designed specifically to probe biases in AI systems. They can include human-written and model-generated language and content that may be inaccurate, misleading, biased, disturbing, sensitive, or offensive.

    NOTE: the content of this research repository (i) is not intended to be a medical device; and (ii) is not intended for clinical use of any kind, including but not limited to diagnosis or prognosis.

  13. Analytics models used by Big data and BI companies in Italy 2017, by type

    • statista.com
    Updated Nov 28, 2025
    Cite
    Statista (2025). Analytics models used by Big data and BI companies in Italy 2017, by type [Dataset]. https://www.statista.com/statistics/697274/analytics-models-used-by-big-data-and-bi-companies-in-italy-by-type/
    Explore at:
    Dataset updated
    Nov 28, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    2017
    Area covered
    Italy
    Description

    This statistic illustrates the share of analytics models used by Big data and Business Intelligence (BI) companies in Italy in 2017. That year, all the interviewed companies reported using descriptive analytics, while only ****** percent used automated analytics.

  14. Large Language Model Market Size, Growth & Outlook | Industry Report 2030

    • mordorintelligence.com
    pdf, excel, csv, ppt
    Updated Jun 22, 2025
    Cite
    Mordor Intelligence (2025). Large Language Model Market Size, Growth & Outlook | Industry Report 2030 [Dataset]. https://www.mordorintelligence.com/industry-reports/large-language-model-llm-market
    Explore at:
    pdf, excel, csv, ppt (available download formats)
    Dataset updated
    Jun 22, 2025
    Dataset authored and provided by
    Mordor Intelligence
    License

    https://www.mordorintelligence.com/privacy-policy

    Time period covered
    2019 - 2030
    Area covered
    Global
    Description

    The Large Language Model Market Report is Segmented by Offering (Software Platforms and Frameworks, and More), Deployment (Cloud, and More), Model Size (Less Than 7 B Parameters, and More), Modality (Text, Code, and More), Application (Chatbots and Virtual Assistants, and More), End-User Industry (BFSI, and More), and Geography (North America, Europe, Asia, and More). The Market Forecasts are Provided in Terms of Value (USD).

  15. A dataset to investigate ChatGPT for enhancing Students' Learning Experience...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 19, 2024
    Cite
    Schicchi, Daniele; Taibi, Davide (2024). A dataset to investigate ChatGPT for enhancing Students' Learning Experience via Concept Maps [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_12076680
    Explore at:
    Dataset updated
    Jun 19, 2024
    Dataset provided by
    Institute for Educational Technology, National Research Council of Italy
    Authors
    Schicchi, Daniele; Taibi, Davide
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset was compiled to examine the use of ChatGPT 3.5 in educational settings, particularly for creating and personalizing concept maps. The data is organized into three folders: Maps, Texts, and Questionnaires. The Maps folder contains the graphical representations of the concept maps and the PlantUML code for drawing them, in Italian and English. The Texts folder contains the source text used as input for the maps' creation. The Questionnaires folder includes the students' responses to the three administered questionnaires.

  16. Data Science Platform Market Analysis, Size, and Forecast 2025-2029: North...

    • technavio.com
    pdf
    Updated Feb 8, 2025
    Cite
    Technavio (2025). Data Science Platform Market Analysis, Size, and Forecast 2025-2029: North America (US and Canada), Europe (France, Germany, UK), APAC (China, India, Japan), South America (Brazil), and Middle East and Africa (UAE) [Dataset]. https://www.technavio.com/report/data-science-platform-market-industry-analysis
    Explore at:
    pdf (available download formats)
    Dataset updated
    Feb 8, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    License

    https://www.technavio.com/content/privacy-notice

    Time period covered
    2025 - 2029
    Area covered
    United States
    Description


    Data Science Platform Market Size 2025-2029

    The data science platform market is projected to grow by USD 763.9 million at a CAGR of 40.2% from 2024 to 2029. Integration of AI and ML technologies with data science platforms will drive this growth.

    Major Market Trends & Insights

    North America dominated the market and is expected to account for 48% of growth during the forecast period.
    By Deployment - On-premises segment was valued at USD 38.70 million in 2023
    By Component - Platform segment accounted for the largest market revenue share in 2023
    

    Market Size & Forecast

    Market Opportunities: USD 1.00 million
    Market Future Opportunities: USD 763.90 million
    CAGR: 40.2%
    North America: Largest market in 2023
    

    Market Summary

    The market represents a dynamic and continually evolving landscape, underpinned by advancements in core technologies and applications. Key technologies, such as machine learning and artificial intelligence, are increasingly integrated into data science platforms to enhance predictive analytics and automate data processing. Additionally, the emergence of containerization and microservices in data science platforms enables greater flexibility and scalability. However, the market also faces challenges, including data privacy and security risks, which necessitate robust compliance with regulations.
    According to recent estimates, the market is expected to account for over 30% of the overall big data analytics market by 2025, underscoring its growing importance in the data-driven business landscape.
    

    What will be the Size of the Data Science Platform Market during the forecast period?


    How is the Data Science Platform Market Segmented and what are the key trends of market segmentation?

    The data science platform industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.

    Deployment
    
      On-premises
      Cloud
    
    
    Component
    
      Platform
      Services
    
    
    End-user
    
      BFSI
      Retail and e-commerce
      Manufacturing
      Media and entertainment
      Others
    
    
    Sector
    
      Large enterprises
      SMEs
    
    
    Application
    
      Data Preparation
      Data Visualization
      Machine Learning
      Predictive Analytics
      Data Governance
      Others
    
    
    Geography
    
      North America
    
        US
        Canada
    
    
      Europe
    
        France
        Germany
        UK
    
    
      Middle East and Africa
    
        UAE
    
    
      APAC
    
        China
        India
        Japan
    
    
      South America
    
        Brazil
    
    
      Rest of World (ROW)
    

    By Deployment Insights

    The on-premises segment is estimated to witness significant growth during the forecast period.

    In the dynamic and evolving market, big data processing is a key focus, enabling advanced model accuracy metrics through various data mining methods. Distributed computing and algorithm optimization are integral components, ensuring efficient handling of large datasets. Data governance policies are crucial for managing data security protocols and ensuring data lineage tracking. Software development kits, model versioning, and anomaly detection systems facilitate seamless development, deployment, and monitoring of predictive modeling techniques, including machine learning algorithms, regression analysis, and statistical modeling. Real-time data streaming and parallelized algorithms enable real-time insights, while predictive modeling techniques and machine learning algorithms drive business intelligence and decision-making.

    Cloud computing infrastructure, data visualization tools, high-performance computing, and database management systems support scalable data solutions and efficient data warehousing. ETL processes and data integration pipelines ensure data quality assessment and feature engineering techniques. Clustering techniques and natural language processing are essential for advanced data analysis. The market is witnessing significant growth, with adoption increasing by 18.7% in the past year, and industry experts anticipate a further expansion of 21.6% in the upcoming period. Companies across various sectors are recognizing the potential of data science platforms, leading to a surge in demand for scalable, secure, and efficient solutions.

    API integration services and deep learning frameworks are gaining traction, offering advanced capabilities and seamless integration with existing systems. Data security protocols and model explainability methods are becoming increasingly important, ensuring transparency and trust in data-driven decision-making. The market is expected to continue unfolding, with ongoing advancements in technology and evolving business needs shaping its future trajectory.


    The On-premises segment was valued at USD 38.70 million in 2019 and showed

  17. Medical large language model fine-tuning dataset

    • kaggle.com
    Updated May 28, 2024
    Cite
    Krens (2024). Medical large language model fine-tuning dataset [Dataset]. https://www.kaggle.com/datasets/jickymen/medical-large-language-model-fine-tuning
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    May 28, 2024
    Dataset provided by
    Kaggle
    Authors
    Krens
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Dataset Description

    This dataset is designed for fine-tuning large language models in the medical domain. It consists of a series of conversations between users (patients) and assistants (doctors). Each conversation centers around a specific medical topic, such as gynecology, male dysfunction, erectile dysfunction, endocrinology, internal medicine, hepatology, etc.

    Dataset Background

    • Source and Inspiration: Real doctor-patient communication data collected from the Internet and hospitals matches the doctor's style but is noisy and difficult to clean, while data obtained through large-model distillation is easy to understand but may cause "model collapse". This dataset therefore combines real-world patient-doctor consultations with dialogue generated by an LLM; mixing the two in a certain proportion and cleaning them yields a better fine-tuning result.
    • Data Type: The dataset includes dialogue data where users present health issues and doctors provide advice, covering multiple medical specialties.

    Dataset Examples

    Each conversation typically includes the following components:

    1. System Prompt: Provides the doctor's specialization, e.g., "You are a doctor specializing in gynecology."

    2. User Query: The patient describes symptoms or asks health-related questions.

    3. Doctor's Response: The doctor offers advice and a diagnostic plan based on the user's query.
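    The three components above might be represented as a single fine-tuning record like the following sketch (field names such as messages/role/content follow common chat-format conventions and are assumptions, not this dataset's documented schema; the dialogue text is invented):

```python
import json

# Illustrative shape of one fine-tuning conversation: a system prompt
# setting the doctor's specialization, a patient query, and the doctor's
# response. Field names and content are assumptions for illustration.
conversation = {
    "messages": [
        {"role": "system",
         "content": "You are a doctor specializing in gynecology."},
        {"role": "user",
         "content": "I have had irregular periods for three months. "
                    "What should I do?"},
        {"role": "assistant",
         "content": "Irregular cycles can have several causes; I recommend "
                    "a hormone panel and an ultrasound to narrow down the "
                    "diagnosis."},
    ]
}

# Fine-tuning pipelines typically consume one such record per line (JSONL).
print(json.dumps(conversation)[:60])
```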

    By using such dialogue datasets, language models can better understand and generate medical-related text, providing more accurate and useful advice.

  18. Large-Scale Model Training Machine Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Mar 16, 2025
    Cite
    Data Insights Market (2025). Large-Scale Model Training Machine Report [Dataset]. https://www.datainsightsmarket.com/reports/large-scale-model-training-machine-41601
    Explore at:
    pdf, ppt, doc (available download formats)
    Dataset updated
    Mar 16, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The Large-Scale Model Training Machine market is experiencing explosive growth, fueled by the increasing demand for advanced artificial intelligence (AI) applications across diverse sectors. The market, estimated at $15 billion in 2025, is projected to witness a robust Compound Annual Growth Rate (CAGR) of 25% from 2025 to 2033, reaching an estimated $75 billion by 2033. This surge is driven by several factors, including the proliferation of big data, advancements in deep learning algorithms, and the growing need for efficient model training in applications such as natural language processing (NLP), computer vision, and recommendation systems. Key market segments include the Internet, telecommunications, and government sectors, which are heavily investing in AI infrastructure to enhance their services and operational efficiency. The CPU+GPU segment dominates the market due to its superior performance in handling the complex computations required for large-scale model training. Leading companies like Google, Amazon, Microsoft, and NVIDIA are at the forefront of innovation, constantly developing more powerful hardware and software solutions to address the evolving needs of this rapidly expanding market.

    The market's growth trajectory is shaped by several trends. The increasing adoption of cloud-based solutions for model training is significantly lowering the barrier to entry for smaller companies. Simultaneously, the development of specialized hardware like Tensor Processing Units (TPUs) and Field-Programmable Gate Arrays (FPGAs) is further optimizing performance and reducing costs. Despite this positive outlook, challenges remain: high infrastructure costs, the complexity of managing large datasets, and the shortage of skilled AI professionals are significant restraints on the market's expansion. However, ongoing technological advancements and increased investment in AI research are expected to mitigate these challenges, paving the way for sustained growth. Regional analysis indicates North America and Asia Pacific (particularly China) as the leading markets, with strong growth anticipated in other regions as AI adoption accelerates globally.

  19. Large AI Model Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Nov 4, 2025
    Cite
    Data Insights Market (2025). Large AI Model Report [Dataset]. https://www.datainsightsmarket.com/reports/large-ai-model-1390488
    Explore at:
    ppt, pdf, doc (available download formats)
    Dataset updated
    Nov 4, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global Large AI Model market is poised for significant expansion, projected to reach an estimated market size of approximately $45 billion by 2025. This impressive growth is fueled by a compound annual growth rate (CAGR) of roughly 28%, indicating a robust and sustained upward trajectory. The market is characterized by a dynamic interplay of transformative applications across diverse sectors, with Education, Energy, Automotive, and Medical leading the charge in AI model adoption. Foundation models are at the core of this revolution, with Natural Language Processing (NLP) and Computer Vision (CV) models driving innovation and opening new frontiers for intelligent systems. The increasing demand for sophisticated AI capabilities in these industries, coupled with advancements in computational power and data availability, are the primary growth drivers. Organizations are increasingly leveraging large AI models for tasks ranging from personalized learning experiences and optimized energy management to advanced driver-assistance systems and groundbreaking medical diagnostics.

    Further augmenting this growth are the ongoing advancements in Multimodal Foundation Models, which are capable of processing and understanding information from various data types, thereby offering more comprehensive and context-aware AI solutions. While the market exhibits immense potential, certain restraints, such as the significant computational resources and specialized expertise required for developing and deploying these models, alongside ethical considerations and regulatory hurdles, need to be carefully navigated. However, the proactive efforts of key industry players like OpenAI, Microsoft, Google, NVIDIA, and others in pushing the boundaries of AI research and development are expected to overcome these challenges.

    The competitive landscape is intense, with major technology giants investing heavily in AI research and infrastructure, fostering an environment of rapid innovation and market expansion throughout the forecast period of 2025-2033. The Asia Pacific region, particularly China, is emerging as a significant market, driven by strong government support and a burgeoning tech ecosystem. This comprehensive report delves into the dynamic landscape of Large AI Models, offering insights into market evolution, technological advancements, and strategic imperatives. Spanning a study period from 2019 to 2033, with a base and estimated year of 2025, this analysis provides a robust understanding of the market dynamics, historical performance, and future projections for the forecast period of 2025-2033, building upon the historical period of 2019-2024.

  20. Replication Data for: Large Language Models as a Substitute for Human...

    • search.dataone.org
    Updated Mar 6, 2024
    Cite
    Heseltine, Michael (2024). Replication Data for: Large Language Models as a Substitute for Human Experts in Annotating Political Text [Dataset]. http://doi.org/10.7910/DVN/V2P6YL
    Explore at:
    Dataset updated
    Mar 6, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Heseltine, Michael
    Description

    Large-scale text analysis has grown rapidly as a method in political science and beyond. To date, text-as-data methods rely on large volumes of human-annotated training examples, which places a premium on researcher resources. However, advances in large language models (LLMs) may make automated annotation increasingly viable. This paper tests the performance of GPT-4 across a range of scenarios relevant for analysis of political text. We compare GPT-4 coding with human expert coding of tweets and news articles across four variables (whether text is political, its negativity, its sentiment, and its ideology) and across four countries (the United States, Chile, Germany, and Italy). GPT-4 coding is highly accurate, especially for shorter texts such as tweets, correctly classifying texts up to 95% of the time. Performance drops for longer news articles, and very slightly for non-English text. We introduce a "hybrid" coding approach, in which disagreements of multiple GPT-4 runs are adjudicated by a human expert, which boosts accuracy. Finally, we explore downstream effects, finding that transformer models trained on hand-coded or GPT-4-coded data yield almost identical outcomes. Our results suggest that LLM-assisted coding is a viable and cost-efficient approach, although consideration should be given to task complexity.
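    The "hybrid" approach described in the abstract can be sketched as a simple rule: accept a label only when all GPT-4 runs agree, and route any disagreement to a human expert (function and label names here are illustrative, not from the replication code):

```python
from collections import Counter

# Sketch of hybrid LLM-assisted coding: several model runs label the same
# text; unanimous labels are accepted, disagreements go to a human expert.
def hybrid_label(runs: list) -> tuple:
    """Return (label, needs_human) for one text's set of run labels."""
    counts = Counter(runs)
    label, n = counts.most_common(1)[0]
    if n == len(runs):       # all runs agree: accept the label
        return label, False
    return None, True        # disagreement: human adjudication required

print(hybrid_label(["political"] * 3))                         # ('political', False)
print(hybrid_label(["political", "not political", "political"]))  # (None, True)
```

    The routing threshold is a design choice: the paper's description implies any disagreement is adjudicated, but a majority-vote variant would trade some accuracy for less human effort.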
