85 datasets found
  1. clinical-synthetic-text-llm

    • huggingface.co
    Updated Jul 5, 2024
    Cite
    Ran Xu (2024). clinical-synthetic-text-llm [Dataset]. https://huggingface.co/datasets/ritaranx/clinical-synthetic-text-llm
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 5, 2024
    Authors
    Ran Xu
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Data Description

    We release the synthetic data generated using the method described in the paper Knowledge-Infused Prompting: Assessing and Advancing Clinical Text Data Generation with Large Language Models (ACL 2024 Findings). The external knowledge we use is based on LLM-generated topics and writing styles.

      Generated Datasets
    

    The original train/validation/test data, and the generated synthetic training data are listed as follows. For each dataset, we generate 5000… See the full description on the dataset page: https://huggingface.co/datasets/ritaranx/clinical-synthetic-text-llm.

  2. Data Sheet 2_Large language models generating synthetic clinical datasets: a feasibility and comparative analysis with real-world perioperative data

    • frontiersin.figshare.com
    • figshare.com
    xlsx
    Updated Feb 5, 2025
    Cite
    Austin A. Barr; Joshua Quan; Eddie Guo; Emre Sezgin (2025). Data Sheet 2_Large language models generating synthetic clinical datasets: a feasibility and comparative analysis with real-world perioperative data.xlsx [Dataset]. http://doi.org/10.3389/frai.2025.1533508.s002
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Feb 5, 2025
    Dataset provided by
    Frontiers
    Authors
    Austin A. Barr; Joshua Quan; Eddie Guo; Emre Sezgin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: Clinical data is instrumental to medical research, machine learning (ML) model development, and advancing surgical care, but access is often constrained by privacy regulations and missing data. Synthetic data offers a promising solution to preserve privacy while enabling broader data access. Recent advances in large language models (LLMs) provide an opportunity to generate synthetic data with reduced reliance on domain expertise, computational resources, and pre-training.

    Objective: This study aims to assess the feasibility of generating realistic tabular clinical data with OpenAI's GPT-4o using zero-shot prompting, and to evaluate the fidelity of LLM-generated data by comparing its statistical properties to the Vital Signs DataBase (VitalDB), a real-world open-source perioperative dataset.

    Methods: In Phase 1, GPT-4o was prompted to generate a dataset with qualitative descriptions of 13 clinical parameters. The resultant data was assessed for general errors, plausibility of outputs, and cross-verification of related parameters. In Phase 2, GPT-4o was prompted to generate a dataset using descriptive statistics of the VitalDB dataset. Fidelity was assessed using two-sample t-tests, two-sample proportion tests, and 95% confidence interval (CI) overlap.

    Results: In Phase 1, GPT-4o generated a complete and structured dataset comprising 6,166 case files. The dataset was plausible in range and correctly calculated body mass index for all case files based on the respective heights and weights. Statistical comparison between the LLM-generated datasets and VitalDB revealed that Phase 2 data achieved significant fidelity. Phase 2 data demonstrated statistical similarity in 12/13 (92.31%) parameters: no statistically significant differences were observed in 6/6 (100.0%) categorical/binary and 6/7 (85.71%) continuous parameters. Overlap of 95% CIs was observed in 6/7 (85.71%) continuous parameters.

    Conclusion: Zero-shot prompting with GPT-4o can generate realistic tabular synthetic datasets that replicate key statistical properties of real-world perioperative data. This study highlights the potential of LLMs as a novel and accessible modality for synthetic data generation, which may address critical barriers in clinical data access and eliminate the need for technical expertise, extensive computational resources, and pre-training. Further research is warranted to enhance fidelity and to investigate the use of LLMs to amplify and augment datasets, preserve multivariate relationships, and train robust ML models.
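
    To make the fidelity checks above concrete, here is a minimal sketch of that kind of comparison: a Welch two-sample t-test for a continuous parameter, a two-sample proportion test for a binary one, and a 95% CI overlap check. File names and column names are hypothetical placeholders, not the authors' code.

    # Hedged sketch of the fidelity tests described above. File and column names
    # are hypothetical placeholders.
    import numpy as np
    import pandas as pd
    from scipy import stats
    from statsmodels.stats.proportion import proportions_ztest

    real = pd.read_csv("vitaldb_cases.csv")           # hypothetical VitalDB export
    synth = pd.read_csv("gpt4o_synthetic_cases.csv")  # hypothetical LLM-generated data

    # Continuous parameter (e.g., age): Welch two-sample t-test
    t_stat, p_val = stats.ttest_ind(real["age"], synth["age"], equal_var=False)
    print(f"age: t={t_stat:.3f}, p={p_val:.4f}")

    # 95% confidence interval overlap for the same parameter
    def ci95(x):
        x = np.asarray(x, dtype=float)
        half = 1.96 * x.std(ddof=1) / np.sqrt(len(x))
        return x.mean() - half, x.mean() + half

    (lo_r, hi_r), (lo_s, hi_s) = ci95(real["age"]), ci95(synth["age"])
    print("95% CIs overlap:", max(lo_r, lo_s) <= min(hi_r, hi_s))

    # Binary parameter (e.g., an emergency-admission flag): two-sample proportion test
    counts = np.array([real["emergency"].sum(), synth["emergency"].sum()])
    nobs = np.array([len(real), len(synth)])
    z_stat, p_prop = proportions_ztest(counts, nobs)
    print(f"emergency: z={z_stat:.3f}, p={p_prop:.4f}")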

  3. LLM Prompt Recovery - Synthetic Datastore

    • kaggle.com
    zip
    Updated Feb 29, 2024
    Cite
    Darien Schettler (2024). LLM Prompt Recovery - Synthetic Datastore [Dataset]. https://www.kaggle.com/datasets/dschettler8845/llm-prompt-recovery-synthetic-datastore
    Explore at:
    Available download formats: zip (988448 bytes)
    Dataset updated
    Feb 29, 2024
    Authors
    Darien Schettler
    License

    https://www.licenses.ai/ai-licenses

    Description

    High Level Description

    This dataset uses Gemma 7B-IT to generate a synthetic dataset for the LLM Prompt Recovery competition.

    Contributors

    Please go upvote these other datasets as my work is not possible without them

    First Dataset - 1000 Examples From @thedrcat

    Update 1 - February 29, 2024

    The only file presently found in this dataset is gemma1000_7b.csv which uses the dataset created by @thedrcat found here: https://www.kaggle.com/datasets/thedrcat/llm-prompt-recovery-data?select=gemma1000.csv

    The file below is the file Darek created, with two additional columns appended. The first is the raw output of Gemma 7B-IT (vs. the 2B-IT model Darek used), generated following the instructions below, and the second is that output with the leading 'Sure... blah blah' sentence removed.

    I generated things using the following setup:

    # I used a vLLM server to host Gemma 7B on paperspace (A100)
    
    # Step 1 - Install vLLM
    >>> pip install vllm
    
    # Step 2 - Authenticate HuggingFace CLI (for model weights)
    >>> huggingface-cli login --token
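
    For reference, here is a hedged sketch of what generation with Gemma 7B-IT through vLLM could look like using vLLM's offline Python API (the setup above hosts a server instead; the prompt format and sampling settings are illustrative assumptions, not the author's exact configuration):

    # Hedged sketch: offline generation with vLLM's Python API. The author hosted a
    # vLLM server on an A100; the prompt format and sampling values below are
    # illustrative assumptions.
    from vllm import LLM, SamplingParams

    llm = LLM(model="google/gemma-7b-it")  # requires the huggingface-cli login above
    sampling = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=512)

    # Hypothetical inputs: each rewrite instruction applied to its original text,
    # mirroring how the appended column holds Gemma 7B-IT's rewritten output.
    rewrite_prompts = ["Rewrite this as a pirate shanty."]
    original_texts = ["The committee met on Tuesday to review the budget."]
    prompts = [f"{p}\n\n{t}" for p, t in zip(rewrite_prompts, original_texts)]

    outputs = llm.generate(prompts, sampling)
    for out in outputs:
        print(out.outputs[0].text)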
    
  4. Verbalized-Sampling-Synthetic-Data-Generation

    • huggingface.co
    Updated Oct 31, 2025
    Cite
    CHATS-Lab (2025). Verbalized-Sampling-Synthetic-Data-Generation [Dataset]. https://huggingface.co/datasets/CHATS-Lab/Verbalized-Sampling-Synthetic-Data-Generation
    Dataset updated
    Oct 31, 2025
    Dataset authored and provided by
    CHATS-Lab
    Description

    Verbalized-Sampling-Synthetic-Data-Generation

    This dataset showcases how Verbalized Sampling (VS) can be used to generate high-quality, diverse synthetic training data for mathematical reasoning tasks. From the paper Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity.

      Dataset Description
    

    The Synthetic Data Generation dataset contains mathematical problem-solution pairs generated by different methods using state-of-the-art LLMs. This dataset… See the full description on the dataset page: https://huggingface.co/datasets/CHATS-Lab/Verbalized-Sampling-Synthetic-Data-Generation.

  5. Synthetic Pretraining Data for LLMs Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 4, 2025
    Cite
    Growth Market Reports (2025). Synthetic Pretraining Data for LLMs Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/synthetic-pretraining-data-for-llms-market
    Explore at:
    Available download formats: csv, pdf, pptx
    Dataset updated
    Oct 4, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Synthetic Pretraining Data for LLMs Market Outlook



    According to our latest research, the synthetic pretraining data for LLMs market size reached USD 1.42 billion globally in 2024, with a robust compound annual growth rate (CAGR) of 32.8% projected through the forecast period. By 2033, the market is anticipated to expand to approximately USD 17.95 billion, driven primarily by the exponential demand for large language models (LLMs) in diverse sectors such as technology, healthcare, and finance. This rapid growth is underpinned by the increasing sophistication of generative AI models and the escalating need for high-quality, scalable, and ethically sourced pretraining datasets.




    One of the primary growth factors for the synthetic pretraining data for LLMs market is the surge in adoption of artificial intelligence across industries. As organizations strive to develop more accurate, context-aware, and robust language models, the limitations of traditional data sources—such as privacy concerns, data scarcity, and bias—have become more pronounced. Synthetic data offers a compelling solution by enabling the generation of large-scale, diverse, and customizable datasets that can be tailored to specific training requirements. This not only accelerates model development cycles but also mitigates the risks associated with using real-world data, fostering innovation and compliance in AI-driven enterprises.




    Another significant driver is the technological advancements in data generation tools and algorithms. With the advent of sophisticated generative models, such as GANs (Generative Adversarial Networks) and transformer-based architectures, the fidelity and realism of synthetic pretraining data have improved dramatically. These advancements have made it feasible to generate multi-modal, domain-specific, and highly representative datasets that closely mimic real-world scenarios, thereby enhancing the performance and generalizability of LLMs. Furthermore, the integration of synthetic data pipelines into existing AI workflows is becoming increasingly streamlined, reducing operational complexity and enabling seamless scalability for organizations of all sizes.




    The evolving regulatory landscape also plays a pivotal role in shaping the synthetic pretraining data for LLMs market. Stringent data privacy regulations, such as GDPR in Europe and CCPA in California, have heightened the importance of data anonymization and ethical AI practices. Synthetic data generation addresses these regulatory challenges by providing a privacy-preserving alternative to real user data, thus ensuring compliance while maintaining model performance. This regulatory push is compelling organizations, especially in highly regulated sectors like healthcare and finance, to adopt synthetic data solutions as a core component of their AI strategy, further fueling market growth.




    From a regional perspective, North America currently leads the global synthetic pretraining data for LLMs market, accounting for the largest share in 2024. This dominance is attributed to the presence of major technology players, a vibrant AI research ecosystem, and robust investments in AI infrastructure. Europe follows closely, propelled by its strong regulatory framework and growing focus on ethical AI. Meanwhile, the Asia Pacific region is witnessing the fastest growth, driven by rapid digital transformation, increasing AI adoption in emerging economies, and significant government initiatives to foster AI innovation. Collectively, these regional trends underscore the global momentum behind synthetic pretraining data solutions and their critical role in the next generation of language models.





    Data Type Analysis



    The synthetic pretraining data for LLMs market is segmented by data type into text, code, multimodal, domain-specific, and others. The text data segment currently dominates the market, reflecting the foundational role of textual data in training most LLMs. Textual synthetic data is extensive

  6. Public Domain Synthetic Datasets

    • kaggle.com
    zip
    Updated Aug 5, 2024
    Cite
    Thomas Anderson (2024). Public Domain Synthetic Datasets [Dataset]. https://www.kaggle.com/datasets/thomasanderson1962/public-domain-synthetic-datasets
    Explore at:
    Available download formats: zip (748302 bytes)
    Dataset updated
    Aug 5, 2024
    Authors
    Thomas Anderson
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This dataset was produced by running the Synthetic Dataset Creation w/ InternVL2 script. It was made with compatibility in mind for the LLM Finetuning Script, which finetunes Large Language Models (LLMs) using datasets such as this one. This is simply an example of how a dataset should be structured for the LLM Finetuning Script. Feel free to make your own datasets with the help of the Synthetic Dataset Creation w/ InternVL2 script.

  7. uk_retail_store_synthetic_dataset

    • huggingface.co
    Cite
    Syncora.ai - Agentic Synthetic Data Platform, uk_retail_store_synthetic_dataset [Dataset]. https://huggingface.co/datasets/syncora/uk_retail_store_synthetic_dataset
    Authors
    Syncora.ai - Agentic Synthetic Data Platform
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Area covered
    United Kingdom
    Description

    Synthetic Data Generation Demo — UK Retail Dataset

    Welcome to this synthetic data generation demo repository by Syncora.ai. This project showcases how to generate synthetic data using real-world tabular structures, demonstrated on a UK retail dataset with columns such as:

    Country
    CustomerID
    UnitPrice
    InvoiceDate
    Quantity
    StockCode

    This dataset is designed as a dataset for LLM training and AI development, enabling developers to work with privacy-safe, high-quality… See the full description on the dataset page: https://huggingface.co/datasets/syncora/uk_retail_store_synthetic_dataset.
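
    As a quick orientation, here is a hedged sketch of loading and inspecting the listed columns with the Hugging Face datasets library, assuming the repository's files load directly; adjust the split or data files if they do not.

    # Hedged sketch: load the synthetic UK retail table and inspect the listed columns.
    # Assumes the repository's data files load directly with the `datasets` library
    # and that a "train" split exists.
    from datasets import load_dataset

    ds = load_dataset("syncora/uk_retail_store_synthetic_dataset", split="train")
    df = ds.to_pandas()

    cols = ["Country", "CustomerID", "UnitPrice", "InvoiceDate", "Quantity", "StockCode"]
    print(df[cols].head())
    print(df["Country"].value_counts().head())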

  8. Synthetic Data Generation Market Size, Share & Growth Forecast 2035

    • researchnester.com
    Updated Sep 16, 2025
    Cite
    Research Nester (2025). Synthetic Data Generation Market Size, Share & Growth Forecast 2035 [Dataset]. https://www.researchnester.com/reports/synthetic-data-generation-market/5711
    Dataset updated
    Sep 16, 2025
    Dataset authored and provided by
    Research Nester
    License

    https://www.researchnester.com

    Description

    The global synthetic data generation market was worth over USD 447.16 million in 2025 and is poised to witness a CAGR of over 34.7%, crossing USD 8.79 billion in revenue by 2035, fueled by the increased use of large language models (LLMs).

  9. Synthetic datasets generated by Large Language Models

    • resodate.org
    Updated May 27, 2025
    Cite
    Yanco Amor Torterolo Orta; Sofía Micaela Roseti; Antonio Moreno-Sandoval (2025). Synthetic datasets generated by Large Language Models [Dataset]. http://doi.org/10.21950/YXP8Q8
    Dataset updated
    May 27, 2025
    Dataset provided by
    Universidad Autónoma de Madrid
    GRESEL-UAM: Narrativas Financieras y Literatura
    Eciencia Data
    Authors
    Yanco Amor Torterolo Orta; Sofía Micaela Roseti; Antonio Moreno-Sandoval
    Description

    This dataset is the result of the work done in the GRESEL-UAM project: AI Generation Results Enriched with Simplified Explanations Based on Linguistic Features (Resultados de Generación de IA Enriquecidos con Explicaciones Simplificadas Basadas en Características Lingüísticas). It is part of the publication titled "Assessing a Literary RAG System with a Human-Evaluated Synthetic QA Dataset Generated by an LLM: Experiments with Knowledge Graphs," which will be presented in September 2025 in Zaragoza at the conference of the Sociedad Española para el Procesamiento del Lenguaje Natural (SEPLN). The work has already been accepted for publication in SEPLN's official journal, Procesamiento del Lenguaje Natural. The dataset consists of three synthetically generated datasets, a process known as Synthetic Data Generation (SDG). We used three different LLMs: deepseek-r1:14b, llama3.1:8b-instruct-q8_0, and mistral:7b-instruct. Each was given a prompt instructing it to generate a question answering (QA) dataset based on context fragments from the novel Trafalgar by Benito Pérez Galdós. These datasets were later used to evaluate a Retrieval-Augmented Generation (RAG) system. Three CSV files are provided, each corresponding to the synthetic dataset generated by one of the models. In total, the dataset contains 359 items. The header includes the following fields: id, context, question, answer, and success. Fields are separated by tabs. The id column is simply an identifier number. The context column contains the text fragment from which the model generated the questions and answers. The question and answer fields contain the generated questions and answers, respectively. The success column indicates whether the model successfully generated the question and answer in the corresponding fields ("yes" or "no").
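
    Here is a hedged sketch of reading one of the three tab-separated files into pandas using the fields described above; the file name is a placeholder, while the separator and column names come from the description.

    # Hedged sketch: load one of the three tab-separated QA files into pandas.
    # The file name is a hypothetical placeholder; the separator and column names
    # (id, context, question, answer, success) come from the description above.
    import pandas as pd

    qa = pd.read_csv("trafalgar_qa_mistral_7b_instruct.csv", sep="\t")
    print(len(qa), "items")

    # Keep only rows where the model successfully produced a question/answer pair
    usable = qa[qa["success"] == "yes"]
    print(usable[["question", "answer"]].head())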

  10. Synthetic Data Generation for NLP Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 4, 2025
    Cite
    Growth Market Reports (2025). Synthetic Data Generation for NLP Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/synthetic-data-generation-for-nlp-market
    Explore at:
    Available download formats: csv, pdf, pptx
    Dataset updated
    Oct 4, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Synthetic Data Generation for NLP Market Outlook



    According to our latest research, the synthetic data generation for NLP market size reached USD 420 million globally in 2024, reflecting strong momentum driven by the rapid adoption of artificial intelligence across industries. The market is projected to expand at a robust CAGR of 32.4% from 2025 to 2033, reaching a forecasted value of USD 4.7 billion by 2033. This remarkable growth is primarily fueled by the increasing demand for high-quality, privacy-compliant data to train advanced natural language processing models, as well as the rising need to overcome data scarcity and bias in AI applications.



    One of the most significant growth factors for the synthetic data generation for NLP market is the escalating requirement for large, diverse, and unbiased datasets to power next-generation NLP models. As organizations across sectors such as BFSI, healthcare, retail, and IT accelerate AI adoption, the limitations of real-world datasets—such as privacy risks, regulatory constraints, and inherent biases—become more pronounced. Synthetic data offers a compelling solution by generating realistic, high-utility language data without exposing sensitive information. This capability is particularly valuable in highly regulated industries, where compliance with data protection laws like GDPR and HIPAA is mandatory. As a result, enterprises are increasingly integrating synthetic data generation solutions into their NLP pipelines to enhance model accuracy, mitigate bias, and ensure robust data privacy.



    Another key driver is the rapid technological advancements in generative AI and deep learning, which have significantly improved the quality and realism of synthetic language data. Recent breakthroughs in large language models (LLMs) and generative adversarial networks (GANs) have enabled the creation of synthetic text that closely mimics human language, making it suitable for a wide range of NLP applications including text classification, sentiment analysis, and machine translation. The growing availability of scalable, cloud-based synthetic data generation platforms further accelerates adoption, enabling organizations of all sizes to access cutting-edge tools without substantial upfront investment. This democratization of synthetic data technology is expected to propel market growth over the forecast period.



    The proliferation of AI-driven automation and digital transformation initiatives across enterprises is also catalyzing the demand for synthetic data generation for NLP. As businesses seek to automate customer service, enhance content moderation, and personalize user experiences, the need for large-scale, high-quality NLP training data is surging. Synthetic data not only enables faster model development and deployment but also supports continuous learning and adaptation in dynamic environments. Moreover, the ability to generate rare or edge-case language data allows organizations to build more robust and resilient NLP systems, further driving market expansion.



    From a regional perspective, North America currently dominates the synthetic data generation for NLP market, accounting for over 37% of global revenue in 2024. This leadership is attributed to the strong presence of leading AI technology vendors, early adoption of NLP solutions, and a favorable regulatory landscape that encourages innovation. Europe follows closely, driven by stringent data privacy regulations and significant investment in AI research. The Asia Pacific region is poised for the fastest growth, with a projected CAGR of 36% through 2033, fueled by rapid digitalization, expanding AI ecosystems, and increasing government support for AI initiatives. Other regions such as Latin America and the Middle East & Africa are also witnessing growing interest, albeit from a smaller base, as enterprises in these markets begin to recognize the value of synthetic data for NLP applications.





    Component Analysis



    The synthetic data generation for NLP market is s

  11. LLM - Detect AI Datamix

    • kaggle.com
    zip
    Updated Jan 19, 2024
    Cite
    Raja Biswas (2024). LLM - Detect AI Datamix [Dataset]. https://www.kaggle.com/datasets/conjuring92/ai-mix-v26
    Explore at:
    Available download formats: zip (172818297 bytes)
    Dataset updated
    Jan 19, 2024
    Authors
    Raja Biswas
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    This is the datamix created by Team 🔍 📝 🕵️‍♂️ 🤖 during the LLM - Detect AI Generated Text competition. This dataset helped us win the competition. It supports a text-classification task that separates LLM-generated essays from student-written ones.

    It was developed in an incremental way focusing on size, diversity and complexity. For each datamix iteration, we attempted to plug blindspots of the previous generation models while maintaining robustness.

    To maximally leverage in-domain human texts, we used the entire PERSUADE corpus comprising all 15 prompts. We also included diverse human texts from sources such as the OpenAI GPT-2 output dataset, the ELLIPSE corpus, NarrativeQA, Wikipedia, the NLTK Brown corpus, and IMDB movie reviews.

    Sources for our generated essays can be grouped under four categories:
    - Proprietary LLMs (gpt-3.5, gpt-4, claude, cohere, gemini, palm)
    - Open source LLMs (llama, falcon, mistral, mixtral)
    - Existing LLM generated text datasets:
      - Synthetic dataset made by T5
      - DAIGT V2 subset
      - OUTFOX
      - Ghostbuster
      - gpt-2-output-dataset

    • Fine-tuned open-source LLMs (mistral, llama, falcon, deci-lm, t5, pythia, OPT, BLOOM, GPT2). For LLM fine-tuning, we leveraged the PERSUADE corpus in different ways:
      • Instruction tuning: Instructions were composed of different metadata e.g. prompt name, holistic essay score, ELL status and grade level. Responses were the corresponding student essays.
      • One topic held out: LLMs fine-tuned on PERSUADE essays with one prompt held out. When generating, only the held out prompt essays were generated. This was done to encourage new writing styles.
      • Span wise generation: Generate one span (discourse) at a time conditioned on the remaining essay.

    We used a wide variety of generation configs and prompting strategies to promote diversity & complexity in the data. Generated essays leveraged a combination of the following (a hedged decoding sketch follows this list):
    - Contrastive search
    - Use of guidance scale, typical_p, suppress_tokens
    - High temperature & large values of top-k
    - Prompting to fill-in-the-blank: randomly mask words in an essay and ask the LLM to reconstruct the original essay (similar to MLM)
    - Prompting without source texts
    - Prompting with source texts
    - Prompting to rewrite existing essays
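
    As an illustration of those decoding settings (not the team's actual code), here is a hedged sketch with Hugging Face transformers: one contrastive-search call and one high-temperature sampling call using typical_p and suppress_tokens. The model id and parameter values are placeholders.

    # Hedged sketch of the decoding strategies listed above, using Hugging Face
    # transformers. Model id and parameter values are illustrative placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder open-source LLM
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    inputs = tok(
        "Write an essay arguing that schools should adopt year-round calendars.",
        return_tensors="pt",
    ).to(model.device)

    # Contrastive search (deterministic; penalty_alpha + top_k)
    out_cs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=400)

    # High-temperature sampling with typical_p, a large top-k, and suppressed tokens
    out_sample = model.generate(
        **inputs,
        do_sample=True,
        temperature=1.4,
        top_k=500,
        typical_p=0.9,
        suppress_tokens=[tok.eos_token_id],  # e.g., prevent the model from emitting EOS early
        max_new_tokens=400,
    )
    print(tok.decode(out_sample[0], skip_special_tokens=True))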

    Finally, we incorporated augmented essays to make our models aware of typical attacks on LLM content detection systems and obfuscations present in the provided training data. We mainly used a combination of the following augmentations on a random subset of essays:
    - Spelling correction
    - Deletion/insertion/swapping of characters
    - Replacement with synonym
    - Introduce obfuscations
    - Back translation
    - Random capitalization
    - Swap sentence

  12. SVG Code Generation Sample Training Data

    • kaggle.com
    zip
    Updated May 3, 2025
    Cite
    Vinothkumar Sekar (2025). SVG Code Generation Sample Training Data [Dataset]. https://www.kaggle.com/datasets/vinothkumarsekar89/svg-generation-sample-training-data
    Explore at:
    Available download formats: zip (193477 bytes)
    Dataset updated
    May 3, 2025
    Authors
    Vinothkumar Sekar
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    This training data was generated using GPT-4o as part of the 'Drawing with LLM' competition (https://www.kaggle.com/competitions/drawing-with-llms). It can be used to fine-tune small language models for the competition or serve as an augmentation dataset alongside other data sources.

    The dataset is generated in two steps using the GPT-4o model.

    • In the first step, topic descriptions relevant to the competition are generated using a specific prompt. By running this prompt multiple times, over 3,000 descriptions were collected.

     
    prompt=f""" I am participating in an SVG code generation competition.
      
       The competition involves generating SVG images based on short textual descriptions of everyday objects and scenes, spanning a wide range of categories. The key guidelines are as follows:
      
       - Descriptions are generic and do not contain brand names, trademarks, or personal names.
       - No descriptions include people, even in generic terms.
       - Descriptions are concise—each is no more than 200 characters, with an average length of about 50 characters.
       - Categories cover various domains, with some overlap between public and private test sets.
      
       To train a small LLM model, I am preparing a synthetic dataset. Could you generate 100 unique topics aligned with the competition style?
      
       Requirements:
       - Each topic should range between **20 and 200 characters**, with an **average around 60 characters**.
       - Ensure **diversity and creativity** across topics.
       - **50% of the topics** should come from the categories of **landscapes**, **abstract art**, and **fashion**.
       - Avoid duplication or overly similar phrasing.
      
       Example topics:
                     a purple forest at dusk, gray wool coat with a faux fur collar, a lighthouse overlooking the ocean, burgundy corduroy, pants with patch pockets and silver buttons, orange corduroy overalls, a purple silk scarf with tassel trim, a green lagoon under a cloudy sky, crimson rectangles forming a chaotic grid,  purple pyramids spiraling around a bronze cone, magenta trapezoids layered on a translucent silver sheet,  a snowy plain, black and white checkered pants,  a starlit night over snow-covered peaks, khaki triangles and azure crescents,  a maroon dodecahedron interwoven with teal threads.
      
       Please return the 100 topics in csv format.
       """
     
    • In the second step, SVG code is generated by prompting the GPT-4o model. The following prompt is used to query the model to generate SVG.
     
      prompt = f"""
          Generate SVG code to visually represent the following text description, while respecting the given constraints.
          
          Allowed Elements: `svg`, `path`, `circle`, `rect`, `ellipse`, `line`, `polyline`, `polygon`, `g`, `linearGradient`, `radialGradient`, `stop`, `defs`
          Allowed Attributes: `viewBox`, `width`, `height`, `fill`, `stroke`, `stroke-width`, `d`, `cx`, `cy`, `r`, `x`, `y`, `rx`, `ry`, `x1`, `y1`, `x2`, `y2`, `points`, `transform`, `opacity`
          
    
          Please ensure that the generated SVG code is well-formed, valid, and strictly adheres to these constraints. 
          Focus on a clear and concise representation of the input description within the given limitations. 
          Always give the complete SVG code with nothing omitted. Never use an ellipsis.
    
          The code is scored based on similarity to the description, Visual question anwering and aesthetic components.
          Please generate a detailed svg code accordingly.
    
          input description: {text}
          """
     

    The raw SVG output is then cleaned and sanitized using a competition-specific sanitization class. After that, the cleaned SVG is scored using the SigLIP model to evaluate text-to-SVG similarity. Only SVGs with a score above 0.5 are included in the dataset. On average, out of three SVG generations, only one meets the quality threshold after the cleaning, sanitization, and scoring process.
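
    For context, here is a hedged sketch of that filtering step: rasterize a generated SVG and score it against its description with a SigLIP checkpoint, keeping only scores above 0.5. The checkpoint name and the cairosvg rasterization are assumptions; the competition's sanitization class is not reproduced here.

    # Hedged sketch of the scoring step: rasterize a generated SVG and score it
    # against its description with SigLIP. Checkpoint name and the cairosvg
    # rasterization step are assumptions.
    import io

    import cairosvg
    import torch
    from PIL import Image
    from transformers import AutoModel, AutoProcessor

    ckpt = "google/siglip-base-patch16-224"  # placeholder SigLIP checkpoint
    processor = AutoProcessor.from_pretrained(ckpt)
    model = AutoModel.from_pretrained(ckpt)

    def siglip_score(svg_code: str, description: str) -> float:
        png_bytes = cairosvg.svg2png(bytestring=svg_code.encode("utf-8"),
                                     output_width=384, output_height=384)
        image = Image.open(io.BytesIO(png_bytes)).convert("RGB")
        inputs = processor(text=[description], images=image,
                           padding="max_length", return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits_per_image  # shape (1, 1)
        return torch.sigmoid(logits)[0, 0].item()

    svg = "<svg viewBox='0 0 100 100'><rect width='100' height='100' fill='purple'/></svg>"
    if siglip_score(svg, "a purple forest at dusk") > 0.5:  # 0.5 threshold per the description
        print("keep this sample")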

    A dataset with ~50,000 samples for SVG code generation is publicly available at: https://huggingface.co/datasets/vinoku89/svg-code-generation

  13. synthetic-medical-records-dataset

    • kaggle.com
    zip
    Updated Sep 11, 2025
    Cite
    Syncora_ai (2025). synthetic-medical-records-dataset [Dataset]. https://www.kaggle.com/datasets/syncoraai/synthetic-medical-records-dataset
    Explore at:
    Available download formats: zip (1582643 bytes)
    Dataset updated
    Sep 11, 2025
    Authors
    Syncora_ai
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Synthetic Healthcare Dataset — Powered by Syncora

    High-Fidelity Synthetic Medical Records for AI, ML Modeling, LLM Training & HealthTech Research

    About This Dataset

    This is a synthetic dataset of healthcare records generated using Syncora.ai, a next-generation synthetic data generation platform designed for privacy-safe AI development.

    It simulates patient demographics, medical conditions, treatments, billing, and admission data, preserving statistical realism while ensuring 0% privacy risk.

    This free dataset is designed for:

    • Healthcare AI research
    • Predictive analytics (disease risk, treatment outcomes)
    • LLM training on structured tabular healthcare data
    • Medical data science education & experimentation

    Think of this as fake data that mimics real-world healthcare patterns — statistically accurate, but without any sensitive patient information.

    Dataset Context & Features

    The dataset captures patient-level hospital information, including:

    • Demographics: Age, Gender, Blood Type
    • Medical Details: Diagnosed medical condition, prescribed medication, test results
    • Hospital Records: Admission type (emergency, planned, outpatient), billing amount
    • Target Applications: Predictive modeling, anomaly detection, cost optimization

    All records are 100% synthetic, maintaining the statistical properties of real-world healthcare data while remaining safe to share and use for ML & LLM tasks.

    LLM Training & Generative AI Applications 🧠

    Unlike most healthcare datasets, this one is tailored for LLM training:

    • Fine-tune LLMs on tabular + medical data for reasoning tasks
    • Create medical report generators from structured fields (e.g., convert demographics + condition + test results into natural language summaries; a minimal sketch follows this list)
    • Use as fake data for prompt engineering, synthetic QA pairs, or generative simulations
    • Safely train LLMs to understand healthcare schemas without exposing private patient data
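
    Here is a minimal sketch of the field-to-text idea mentioned in the list above: converting one structured record into a short natural-language summary that could serve as an LLM training example. The column names and values are assumptions based on the features described.

    # Hedged sketch: turn one structured synthetic record into a natural-language
    # summary suitable as an LLM training example. Field names and values are
    # illustrative assumptions based on the features listed above.
    record = {
        "age": 63,
        "gender": "Female",
        "blood_type": "A+",
        "medical_condition": "Hypertension",
        "medication": "Lisinopril",
        "test_results": "Normal",
        "admission_type": "Emergency",
        "billing_amount": 8425.50,
    }

    def record_to_summary(r: dict) -> str:
        return (
            f"A {r['age']}-year-old {r['gender'].lower()} patient (blood type {r['blood_type']}) "
            f"was admitted ({r['admission_type'].lower()} admission) with {r['medical_condition'].lower()}, "
            f"was prescribed {r['medication']}, and had {r['test_results'].lower()} test results. "
            f"The billed amount was ${r['billing_amount']:.2f}."
        )

    print(record_to_summary(record))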

    Machine Learning & AI Use Cases

    • Predictive Modeling: Forecast patient outcomes or readmission likelihood
    • Classification: Disease diagnosis prediction using demographic and medical variables
    • Clustering: Patient segmentation by condition, treatment, or billing pattern
    • Healthcare Cost Prediction: Estimate and optimize billing amounts
    • Bias & Fairness Testing: Study algorithmic bias without exposing sensitive patient data

    Why Syncora?

    Syncora.ai is a synthetic data generation platform designed for healthcare, finance, and enterprise AI.

    Key benefits:

    • Privacy-first: 100% synthetic, zero risk of re-identification
    • Statistical accuracy: Feature relationships preserved for ML & LLM training
    • Regulatory compliance: HIPAA, GDPR, DPDP safe
    • Scalability: Generate millions of synthetic patient records with agentic AI

    Ideas for Exploration

    • Which medical conditions correlate with higher billing amounts?
    • Can test results predict hospitalization type?
    • How do demographics influence treatment or billing trends?
    • Can synthetic datasets reduce bias in healthcare AI & LLMs?

    🔗 Generate Your Own Synthetic Data

    Take your AI projects to the next level with Syncora.ai:
    → Generate your own synthetic datasets now

    Licensing & Compliance

    This is a free dataset, 100% synthetic, and contains no real patient information.
    It is safe for public use in education, research, open-source contributions, LLM training, and AI development.

  14. customer support conversations

    • kaggle.com
    zip
    Updated Oct 9, 2025
    Cite
    Syncora_ai (2025). customer support conversations [Dataset]. https://www.kaggle.com/datasets/syncoraai/customer-support-conversations/code
    Explore at:
    Available download formats: zip (303724713 bytes)
    Dataset updated
    Oct 9, 2025
    Authors
    Syncora_ai
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Customer Support Conversation Dataset — Powered by Syncora.ai

    High-quality synthetic dataset for chatbot training, LLM fine-tuning, and AI research in conversational systems.

    About This Dataset

    This dataset provides a fully synthetic collection of customer support interactions, generated using Syncora.ai’s synthetic data generation engine.
    It mirrors realistic support conversations across e-commerce, banking, SaaS, and telecom domains, ensuring diversity, context depth, and privacy-safe realism.

    Each conversation simulates multi-turn dialogues between a customer and a support agent, making it ideal for training chatbots, LLMs, and retrieval-augmented generation (RAG) systems.

    This is a free dataset, designed for LLM training, chatbot model fine-tuning, and dialogue understanding research.

    Dataset Context & Features

    • conversation_id: Unique identifier for each dialogue session
    • domain: Industry domain (e.g., banking, telecom, retail)
    • role: Speaker role (customer or support agent)
    • message: Message text (synthetic conversation content)
    • intent_label: Labeled customer intent (e.g., refund_request, password_reset)
    • resolution_status: Whether the query was resolved or escalated
    • sentiment_score: Sentiment polarity of the conversation
    • language: Language of interaction (supports multilingual synthetic data)
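
    Here is a hedged sketch of working with this schema: reassemble multi-turn dialogues and filter by intent, assuming the data ships as a flat table with exactly these columns; the file name and label values are placeholders.

    # Hedged sketch: work with the schema above, assuming the data ships as a flat
    # table (one row per message). The file name and the "escalated" label value
    # are hypothetical placeholders.
    import pandas as pd

    df = pd.read_csv("customer_support_conversations.csv")

    # Reassemble each multi-turn dialogue in row order
    dialogues = df.groupby("conversation_id").apply(
        lambda g: [(row.role, row.message) for row in g.itertuples()]
    )
    print(dialogues.iloc[0][:4])

    # Example slice: refund-related turns that ended up escalated
    escalated_refunds = df[(df["intent_label"] == "refund_request")
                           & (df["resolution_status"] == "escalated")]
    print(len(escalated_refunds), "escalated refund messages")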

    Use Cases

    • Chatbot Training & Evaluation – Build and fine-tune conversational agents with realistic dialogue data.
    • LLM Training & Alignment – Use as a dataset for LLM training on dialogue tasks.
    • Customer Support Automation – Prototype or benchmark AI-driven support systems.
    • Dialogue Analytics – Study sentiment, escalation patterns, and domain-specific behavior.
    • Synthetic Data Research – Validate synthetic data generation pipelines for conversational systems.

    Why Synthetic?

    • Privacy-Safe – No real user data; fully synthetic and compliant.
    • Scalable – Generate millions of conversations for LLM and chatbot training.
    • Balanced & Bias-Controlled – Ensures diversity and fairness in training data.
    • Instantly Usable – Pre-structured and cleanly labeled for NLP tasks.

    Generate Your Own Synthetic Data

    Use Syncora.ai to generate synthetic conversational datasets for your AI or chatbot projects:
    Try Synthetic Data Generation tool

    License

    This dataset is released under the MIT License.
    It is fully synthetic, free, and safe for LLM training, chatbot model fine-tuning, and AI research.

  15. Training_Data_FineTuning_LLM_PEFT_LORA

    • kaggle.com
    zip
    Updated Aug 8, 2024
    Cite
    Rupak Roy/ Bob (2024). Training_Data_FineTuning_LLM_PEFT_LORA [Dataset]. https://www.kaggle.com/datasets/rupakroy/training-dataset-peft-lora
    Explore at:
    Available download formats: zip (29562174 bytes)
    Dataset updated
    Aug 8, 2024
    Authors
    Rupak Roy/ Bob
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    The dataset contains conversation summaries, topics, and dialogues used to build a pipeline for fine-tuning an LLM with PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation of Large Language Models), a popular and lightweight training technique that significantly reduces the number of trainable parameters.

    The dataset is also available on Hugging Face: https://huggingface.co/datasets/knkarthick/dialogsum
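
    For orientation, here is a hedged sketch of the PEFT + LoRA setup the description refers to, using the Hugging Face peft library; the base model, target modules, and hyperparameters are illustrative placeholders rather than the author's exact pipeline.

    # Hedged sketch: wrap a base model with a LoRA adapter via the `peft` library,
    # so only the low-rank adapter weights are trainable. Model id, target modules,
    # and hyperparameters are illustrative placeholders.
    from peft import LoraConfig, TaskType, get_peft_model
    from transformers import AutoModelForSeq2SeqLM

    base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")  # placeholder base model

    lora_cfg = LoraConfig(
        task_type=TaskType.SEQ_2_SEQ_LM,
        r=16,                # rank of the low-rank update matrices
        lora_alpha=32,       # scaling factor
        lora_dropout=0.05,
        target_modules=["q", "v"],  # T5 attention projections (model-specific)
    )

    model = get_peft_model(base, lora_cfg)
    model.print_trainable_parameters()  # shows the small fraction of trainable params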

  16. customer_support_conversations_dataset

    • huggingface.co
    Updated Oct 10, 2025
    Cite
    Syncora.ai - Agentic Synthetic Data Platform (2025). customer_support_conversations_dataset [Dataset]. https://huggingface.co/datasets/syncora/customer_support_conversations_dataset
    Dataset updated
    Oct 10, 2025
    Authors
    Syncora.ai - Agentic Synthetic Data Platform
    Description

    💬 Customer Support Conversation Dataset — Powered by Syncora.ai

    A free synthetic dataset for chatbot training, LLM fine-tuning, and synthetic data generation research. Created using Syncora.ai’s privacy-safe synthetic data engine, this dataset is ideal for developing, testing, and benchmarking AI customer support systems. It serves as a dataset for chatbot training and a dataset for LLM training, offering rich, structured conversation data for real-world simulation.

      🌟… See the full description on the dataset page: https://huggingface.co/datasets/syncora/customer_support_conversations_dataset.
    
  17. Synthetic Data for NLP Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 7, 2025
    Cite
    Growth Market Reports (2025). Synthetic Data for NLP Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/synthetic-data-for-nlp-market
    Explore at:
    Available download formats: pdf, csv, pptx
    Dataset updated
    Oct 7, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Synthetic Data for NLP Market Outlook




    According to our latest research, the global Synthetic Data for NLP market size reached USD 635 million in 2024, with a robust growth trajectory underpinned by rising adoption across industries. The market is projected to expand at a CAGR of 34.7% during the forecast period, reaching an estimated USD 7.6 billion by 2033. This exceptional growth is primarily driven by the increasing need for high-quality, diverse, and privacy-compliant datasets for natural language processing (NLP) model training and testing, as organizations face mounting data privacy regulations and seek to accelerate AI innovation.




    One of the most significant growth factors in the Synthetic Data for NLP market is the escalating demand for large-scale annotated datasets required to train advanced NLP models, such as those used in generative AI, conversational interfaces, and automated sentiment analysis. Traditional data collection methods are often hampered by privacy concerns, data scarcity, and the high costs of manual annotation. Synthetic data generation addresses these challenges by enabling the creation of vast, customizable datasets that mirror real-world linguistic complexity without exposing sensitive information. As organizations increasingly deploy NLP solutions in customer service, healthcare, finance, and beyond, the ability to generate synthetic text, audio, and multimodal data at scale is transforming the AI development lifecycle and reducing time-to-market for new applications.




    Another key driver is the evolving regulatory landscape surrounding data privacy and security, particularly in regions such as Europe and North America. The introduction of stringent frameworks like GDPR and CCPA has limited the availability of real-world data for AI training, making synthetic data an attractive alternative for compliance-conscious enterprises. Unlike traditional anonymization techniques, synthetic data preserves statistical properties and semantic relationships, ensuring model performance without risking re-identification. This capability is especially valuable in sectors such as healthcare and banking, where data sensitivity is paramount. The growing recognition of synthetic data as a privacy-enhancing technology is fueling investments in research, platform development, and cross-industry collaborations, further propelling market expansion.




    Technological advancements in generative models, including large language models (LLMs) and diffusion models, have also accelerated the adoption of synthetic data for NLP. These innovations enable the automated generation of highly realistic and contextually rich text, audio, and multimodal datasets, supporting complex NLP tasks such as machine translation, named entity recognition, and intent classification. The integration of synthetic data solutions with cloud-based AI development platforms and MLOps workflows is streamlining dataset creation, curation, and validation, making it easier for organizations of all sizes to leverage synthetic data. As a result, both established enterprises and startups are embracing synthetic data to overcome data bottlenecks, enhance AI model robustness, and unlock new use cases across languages, dialects, and domains.




    Regionally, North America leads the Synthetic Data for NLP market in both market share and innovation, driven by the presence of major technology firms, research institutions, and a mature AI ecosystem. Europe follows closely, supported by strong regulatory frameworks and a growing focus on ethical AI. The Asia Pacific region is emerging as a high-growth market, fueled by rapid digital transformation, increasing AI investments, and a burgeoning startup landscape. Latin America and the Middle East & Africa are also experiencing steady adoption, particularly in sectors such as banking, telecommunications, and e-commerce. Overall, the global market is characterized by dynamic regional trends, with each geography exhibiting unique drivers, challenges, and opportunities for synthetic data adoption in NLP.





    Data Type

  18. AI Training Data | Annotated Checkout Flows for Retail, Restaurant, and Marketplace Websites

    • datarade.ai
    Updated Dec 18, 2024
    Cite
    MealMe (2024). AI Training Data | Annotated Checkout Flows for Retail, Restaurant, and Marketplace Websites [Dataset]. https://datarade.ai/data-products/ai-training-data-annotated-checkout-flows-for-retail-resta-mealme
    Dataset updated
    Dec 18, 2024
    Dataset authored and provided by
    MealMe
    Area covered
    United States of America
    Description

    AI Training Data | Annotated Checkout Flows for Retail, Restaurant, and Marketplace Websites: Overview

    Unlock the next generation of agentic commerce and automated shopping experiences with this comprehensive dataset of meticulously annotated checkout flows, sourced directly from leading retail, restaurant, and marketplace websites. Designed for developers, researchers, and AI labs building large language models (LLMs) and agentic systems capable of online purchasing, this dataset captures the real-world complexity of digital transactions—from cart initiation to final payment.

    Key Features

    Breadth of Coverage: Over 10,000 unique checkout journeys across hundreds of top e-commerce, food delivery, and service platforms, including but not limited to Walmart, Target, Kroger, Whole Foods, Uber Eats, Instacart, Shopify-powered sites, and more.

    Actionable Annotation: Every flow is broken down into granular, step-by-step actions, complete with timestamped events, UI context, form field details, validation logic, and response feedback. Each step includes:

    Page state (URL, DOM snapshot, and metadata)

    User actions (clicks, taps, text input, dropdown selection, checkbox/radio interactions)

    System responses (AJAX calls, error/success messages, cart/price updates)

    Authentication and account linking steps where applicable

    Payment entry (card, wallet, alternative methods)

    Order review and confirmation

    Multi-Vertical, Real-World Data: Flows sourced from a wide variety of verticals and real consumer environments, not just demo stores or test accounts. Includes complex cases such as multi-item carts, promo codes, loyalty integration, and split payments.

    Structured for Machine Learning: Delivered in standard formats (JSONL, CSV, or your preferred schema), with every event mapped to action types, page features, and expected outcomes. Optional HAR files and raw network request logs provide an extra layer of technical fidelity for action modeling and RLHF pipelines. (A hedged parsing sketch follows this feature list.)

    Rich Context for LLMs and Agents: Every annotation includes both human-readable and model-consumable descriptions:

    “What the user did” (natural language)

    “What the system did in response”

    “What a successful action should look like”

    Error/edge case coverage (invalid forms, OOS, address/payment errors)

    Privacy-Safe & Compliant: All flows are depersonalized and scrubbed of PII. Sensitive fields (like credit card numbers, user addresses, and login credentials) are replaced with realistic but synthetic data, ensuring compliance with privacy regulations.
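
    To make that structure concrete, here is a heavily hedged sketch of iterating over a JSONL export of annotated flows. Every field name in it (flow_id, vertical, steps, action, page, system_response) is a hypothetical illustration of the kind of schema described, not the provider's actual format.

    # Heavily hedged sketch: walk a JSONL export of annotated checkout flows.
    # All field names below are hypothetical illustrations of the schema described
    # (step-by-step actions, page state, system responses), not the provider's keys.
    import json

    with open("checkout_flows.jsonl", "r", encoding="utf-8") as f:  # placeholder path
        for line in f:
            flow = json.loads(line)
            print(f"Flow {flow['flow_id']} ({flow.get('vertical', 'unknown')}):")
            for step in flow["steps"]:
                action = step["action"]            # e.g., click, text_input, select
                page_url = step["page"]["url"]     # page state metadata
                outcome = step.get("system_response", {}).get("status")
                print(f"  {action} on {page_url} -> {outcome}")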

    Each flow tracks the user journey from cart to payment to confirmation, including:

    Adding/removing items

    Applying coupons or promo codes

    Selecting shipping/delivery options

    Account creation, login, or guest checkout

    Inputting payment details (card, wallet, Buy Now Pay Later)

    Handling validation errors or OOS scenarios

    Order review and final placement

    Confirmation page capture (including order summary details)

    Why This Dataset?

    Building LLMs, agentic shopping bots, or e-commerce automation tools demands more than just page screenshots or API logs. You need deeply contextualized, action-oriented data that reflects how real users interact with the complex, ever-changing UIs of digital commerce. Our dataset uniquely captures:

    The full intent-action-outcome loop

    Dynamic UI changes, modals, validation, and error handling

    Nuances of cart modification, bundle pricing, delivery constraints, and multi-vendor checkouts

    Mobile vs. desktop variations

    Diverse merchant tech stacks (custom, Shopify, Magento, BigCommerce, native apps, etc.)

    Use Cases

    LLM Fine-Tuning: Teach models to reason through step-by-step transaction flows, infer next-best-actions, and generate robust, context-sensitive prompts for real-world ordering.

    Agentic Shopping Bots: Train agents to navigate web/mobile checkouts autonomously, handle edge cases, and complete real purchases on behalf of users.

    Action Model & RLHF Training: Provide reinforcement learning pipelines with ground truth “what happens if I do X?” data across hundreds of real merchants.

    UI/UX Research & Synthetic User Studies: Identify friction points, bottlenecks, and drop-offs in modern checkout design by replaying flows and testing interventions.

    Automated QA & Regression Testing: Use realistic flows as test cases for new features or third-party integrations.

    What’s Included

    10,000+ annotated checkout flows (retail, restaurant, marketplace)

    Step-by-step event logs with metadata, DOM, and network context

    Natural language explanations for each step and transition

    All flows are depersonalized and privacy-compliant

    Example scripts for ingesting, parsing, and analyzing the dataset

    Flexible licensing for research or commercial use

    Sample Categories Covered

    Grocery delivery (Instacart, Walmart, Kroger, Target, etc.)

    Restaurant takeout/delivery (Ub...

  19. Evaluation Dataset Curation for LLMs Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 4, 2025
    Cite
    Growth Market Reports (2025). Evaluation Dataset Curation for LLMs Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/evaluation-dataset-curation-for-llms-market
    Explore at:
    Available download formats: pptx, csv, pdf
    Dataset updated
    Oct 4, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Evaluation Dataset Curation for LLMs Market Outlook



    According to our latest research, the global Evaluation Dataset Curation for LLMs market size reached USD 1.18 billion in 2024, reflecting robust momentum driven by the proliferation of large language models (LLMs) across industries. The market is projected to expand at a CAGR of 24.7% from 2025 to 2033, reaching a forecasted value of USD 9.01 billion by 2033. This impressive growth is primarily fueled by the surging demand for high-quality, unbiased, and diverse datasets essential for evaluating, benchmarking, and fine-tuning LLMs, as well as for ensuring their safety and fairness in real-world applications.




    The exponential growth of the Evaluation Dataset Curation for LLMs market is underpinned by the rapid advancements in artificial intelligence and natural language processing technologies. As organizations increasingly deploy LLMs for a variety of applications, the need for meticulously curated datasets has become paramount. High-quality datasets are the cornerstone for testing model robustness, identifying biases, and ensuring compliance with ethical standards. The proliferation of domain-specific use cases—from healthcare diagnostics to legal document analysis—has further intensified the demand for specialized datasets tailored to unique linguistic and contextual requirements. Moreover, the growing recognition of dataset quality as a critical determinant of model performance is prompting enterprises and research institutions to invest heavily in advanced curation platforms and services.




    Another significant growth driver for the Evaluation Dataset Curation for LLMs market is the heightened regulatory scrutiny and societal emphasis on AI transparency, fairness, and accountability. Governments and standard-setting bodies worldwide are introducing stringent guidelines to mitigate the risks associated with biased or unsafe AI systems. This regulatory landscape is compelling organizations to adopt rigorous dataset curation practices, encompassing bias detection, fairness assessment, and safety evaluations. As LLMs become integral to decision-making processes in sensitive domains such as finance, healthcare, and public policy, the imperative for trustworthy and explainable AI models is fueling the adoption of comprehensive evaluation datasets. This trend is expected to accelerate as new regulations come into force, further expanding the market’s scope.




    The market is also benefiting from the collaborative efforts between academia, industry, and open-source communities to establish standardized benchmarks and best practices for LLM evaluation. These collaborations are fostering innovation in dataset curation methodologies, including the use of synthetic data generation, crowdsourcing, and automated annotation tools. The integration of multimodal data—combining text, images, and code—is enabling more holistic assessments of LLM capabilities, thereby expanding the market’s addressable segments. Additionally, the emergence of specialized startups focused on dataset curation services is introducing competitive dynamics and driving technological advancements. These factors collectively contribute to the market’s sustained growth trajectory.




    Regionally, North America continues to dominate the Evaluation Dataset Curation for LLMs market, accounting for the largest share in 2024, followed by Europe and Asia Pacific. The United States, in particular, is home to leading AI research institutions, technology giants, and a vibrant ecosystem of startups dedicated to LLM development and evaluation. Europe is witnessing increased investments in AI ethics and regulatory compliance, while Asia Pacific is rapidly emerging as a key growth market due to its expanding AI research capabilities and government-led digital transformation initiatives. Latin America and the Middle East & Africa are also showing promise, albeit from a smaller base, as local enterprises and public sector organizations begin to recognize the strategic importance of robust LLM evaluation frameworks.





    Da

  20. llm-detection-generation-contribution2-train

    • huggingface.co
    Updated Apr 21, 2024
    Cite
    Jiacheng Zhu (2024). llm-detection-generation-contribution2-train [Dataset]. https://huggingface.co/datasets/jjz5463/llm-detection-generation-contribution2-train
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 21, 2024
    Authors
    Jiacheng Zhu
    Description

    Dataset Card

    Add more information here

    This dataset was produced with DataDreamer 🤖💤. The synthetic dataset card can be found here.
