100+ datasets found
  1. Data_Sheet_1_Data and model bias in artificial intelligence for healthcare...

    • frontiersin.figshare.com
    zip
    Updated Jun 3, 2023
    Cite
    Vithya Yogarajan; Gillian Dobbie; Sharon Leitch; Te Taka Keegan; Joshua Bensemann; Michael Witbrock; Varsha Asrani; David Reith (2023). Data_Sheet_1_Data and model bias in artificial intelligence for healthcare applications in New Zealand.zip [Dataset]. http://doi.org/10.3389/fcomp.2022.1070493.s001
    Available download formats: zip
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    Frontiers
    Authors
    Vithya Yogarajan; Gillian Dobbie; Sharon Leitch; Te Taka Keegan; Joshua Bensemann; Michael Witbrock; Varsha Asrani; David Reith
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    New Zealand
    Description

    Introduction: Developments in Artificial Intelligence (AI) are adopted widely in healthcare. However, the introduction and use of AI may come with biases and disparities, resulting in concerns about healthcare access and outcomes for underrepresented indigenous populations. In New Zealand, Māori experience significant inequities in health compared to the non-Indigenous population. This research explores equity concepts and fairness measures concerning AI for healthcare in New Zealand.

    Methods: This research considers data and model bias in NZ-based electronic health records (EHRs). Two very distinct NZ datasets are used in this research, one obtained from one hospital and another from multiple GP practices, where clinicians obtain both datasets. To ensure research equality and fair inclusion of Māori, we combine expertise in Artificial Intelligence (AI), New Zealand clinical context, and te ao Māori. The mitigation of inequity needs to be addressed in data collection, model development, and model deployment. In this paper, we analyze data and algorithmic bias concerning data collection and model development, training and testing using health data collected by experts. We use fairness measures such as disparate impact scores, equal opportunities and equalized odds to analyze tabular data. Furthermore, token frequencies, statistical significance testing and fairness measures for word embeddings, such as WEAT and WEFE frameworks, are used to analyze bias in free-form medical text. The AI model predictions are also explained using SHAP and LIME.

    Results: This research analyzed fairness metrics for NZ EHRs while considering data and algorithmic bias. We show evidence of bias due to the changes made in algorithmic design. Furthermore, we observe unintentional bias due to the underlying pre-trained models used to represent text data. This research addresses some vital issues while opening up the need and opportunity for future research.

    Discussions: This research takes early steps toward developing a model of socially responsible and fair AI for New Zealand's population. We provided an overview of reproducible concepts that can be adopted toward any NZ population data. Furthermore, we discuss the gaps and future research avenues that will enable more focused development of fairness measures suitable for the New Zealand population's needs and social structure. One of the primary focuses of this research was ensuring fair inclusions. As such, we combine expertise in AI, clinical knowledge, and the representation of indigenous populations. This inclusion of experts will be vital moving forward, proving a stepping stone toward the integration of AI for better outcomes in healthcare.
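    To make the fairness measures named above concrete, here is a minimal sketch (not part of the dataset record) of how disparate impact and equalized-odds gaps can be computed for a binary classifier on tabular data; the column names and toy values are assumptions for illustration only.

    python
    import pandas as pd

    # Toy predictions for two groups; "group", "y_true" and "y_pred" are
    # illustrative column names, not fields of the dataset above.
    df = pd.DataFrame({
        "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
        "y_true": [1, 0, 1, 0, 1, 0, 0, 1],
        "y_pred": [1, 0, 1, 0, 0, 0, 1, 1],
    })

    def disparate_impact(df, protected="B", reference="A"):
        """Ratio of positive-prediction rates (protected / reference)."""
        rate = df.groupby("group")["y_pred"].mean()
        return rate[protected] / rate[reference]

    def equalized_odds_gaps(df, protected="B", reference="A"):
        """Absolute TPR and FPR differences between the two groups."""
        def rates(g):
            sub = df[df["group"] == g]
            tpr = sub.loc[sub["y_true"] == 1, "y_pred"].mean()
            fpr = sub.loc[sub["y_true"] == 0, "y_pred"].mean()
            return tpr, fpr
        tpr_p, fpr_p = rates(protected)
        tpr_r, fpr_r = rates(reference)
        return abs(tpr_p - tpr_r), abs(fpr_p - fpr_r)

    print("Disparate impact:", disparate_impact(df))
    print("Equalized-odds gaps (TPR, FPR):", equalized_odds_gaps(df))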

  2. Opinion on mitigating AI data bias in healthcare worldwide 2024

    • statista.com
    Updated Jul 18, 2025
    Cite
    Statista (2025). Opinion on mitigating AI data bias in healthcare worldwide 2024 [Dataset]. https://www.statista.com/statistics/1559311/ways-to-mitigate-ai-bias-in-healthcare-worldwide/
    Dataset updated
    Jul 18, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Dec 2023 - Mar 2024
    Area covered
    Worldwide
    Description

    According to a survey of healthcare leaders carried out globally in 2024, almost half of respondents believed that making AI more transparent and interpretable would mitigate the risk of data bias in AI applications for healthcare. Furthermore, ** percent of healthcare leaders thought there should be continuous training and education in AI.

  3. Data from: Qbias – A Dataset on Media Bias in Search Queries and Query...

    • data.niaid.nih.gov
    Updated Mar 1, 2023
    Cite
    Haak, Fabian (2023). Qbias – A Dataset on Media Bias in Search Queries and Query Suggestions [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7682914
    Dataset updated
    Mar 1, 2023
    Dataset provided by
    Haak, Fabian
    Schaer, Philipp
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We present Qbias, two novel datasets that promote the investigation of bias in online news search as described in

    Fabian Haak and Philipp Schaer. 2023. Qbias - A Dataset on Media Bias in Search Queries and Query Suggestions. In Proceedings of ACM Web Science Conference (WebSci’23). ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3578503.3583628.

    Dataset 1: AllSides Balanced News Dataset (allsides_balanced_news_headlines-texts.csv)

    The dataset contains 21,747 news articles collected from AllSides balanced news headline roundups in November 2022 as presented in our publication. The AllSides balanced news feature three expert-selected U.S. news articles from sources of different political views (left, right, center), often featuring spin bias, slant, and other forms of non-neutral reporting on political news. All articles are tagged with a bias label by four expert annotators based on the expressed political partisanship, left, right, or neutral. The AllSides balanced news aims to offer multiple political perspectives on important news stories, educate users on biases, and provide multiple viewpoints. Collected data further includes headlines, dates, news texts, topic tags (e.g., "Republican party", "coronavirus", "federal jobs"), and the publishing news outlet. We also include AllSides' neutral description of the topic of the articles. Overall, the dataset contains 10,273 articles tagged as left, 7,222 as right, and 4,252 as center.

    To provide easier access to the most recent and complete version of the dataset for future research, we provide a scraping tool and a regularly updated version of the dataset at https://github.com/irgroup/Qbias. The repository also contains regularly updated more recent versions of the dataset with additional tags (such as the URL to the article). We chose to publish the version used for fine-tuning the models on Zenodo to enable the reproduction of the results of our study.

    Dataset 2: Search Query Suggestions (suggestions.csv)

    The second dataset we provide consists of 671,669 search query suggestions for root queries based on tags of the AllSides biased news dataset. We collected search query suggestions from Google and Bing for the 1,431 topic tags that have been used for tagging AllSides news at least five times, approximately half of the total number of topics. The topic tags include names, a wide range of political terms, agendas, and topics (e.g., "communism", "libertarian party", "same-sex marriage"), cultural and religious terms (e.g., "Ramadan", "pope Francis"), locations and other news-relevant terms. On average, the dataset contains 469 search queries for each topic. In total, 318,185 suggestions have been retrieved from Google and 353,484 from Bing.

    The file contains a "root_term" column based on the AllSides topic tags. The "query_input" column contains the search term submitted to the search engine ("search_engine"). "query_suggestion" and "rank" represent the search query suggestions at the respective positions returned by the search engines at the given time of search ("datetime"). We scraped our data from a US server; the location is saved in "location".

    We retrieved ten search query suggestions provided by the Google and Bing search autocomplete systems for the input of each of these root queries, without performing a search. Furthermore, we extended the root queries by the letters a to z (e.g., "democrats" (root term) >> "democrats a" (query input) >> "democrats and recession" (query suggestion)) to simulate a user's input during information search and generate a total of up to 270 query suggestions per topic and search engine. The dataset we provide contains columns for root term, query input, and query suggestion for each suggested query. The location from which the search is performed is the location of the Google servers running Colab, in our case Iowa in the United States of America, which is added to the dataset.
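    As a rough illustration of the collection procedure described above (not the authors' actual code), the sketch below extends a root term with the letters a to z and tallies the provided suggestions file per search engine; the root term and local file path are assumptions.

    python
    import string
    import pandas as pd

    # Extend a root term with the letters a-z to simulate user input,
    # as described in the dataset description (root term is illustrative).
    root = "democrats"
    query_inputs = [root] + [f"{root} {letter}" for letter in string.ascii_lowercase]
    print(query_inputs[:3])  # ['democrats', 'democrats a', 'democrats b']

    # Count suggestions per search engine from suggestions.csv
    # (assumes the file has been downloaded locally).
    suggestions = pd.read_csv("suggestions.csv")
    print(suggestions.groupby("search_engine")["query_suggestion"].count())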

    AllSides Scraper

    At https://github.com/irgroup/Qbias, we provide a scraping tool that allows for the automatic retrieval of all available articles from the AllSides balanced news headlines.

    We want to provide an easy means of retrieving the news and all corresponding information. For many tasks it is relevant to have the most recent documents available. Thus, we provide this Python-based scraper, which scrapes all available AllSides news articles and gathers the available information. By providing the scraper we facilitate access to a recent version of the dataset for other researchers.

  4. Navigating News Narratives: A Media Bias Analysis Dataset

    • figshare.com
    txt
    Updated Dec 8, 2023
    + more versions
    Cite
    Shaina Raza (2023). Navigating News Narratives: A Media Bias Analysis Dataset [Dataset]. http://doi.org/10.6084/m9.figshare.24422122.v4
    Available download formats: txt
    Dataset updated
    Dec 8, 2023
    Dataset provided by
    figshare
    Authors
    Shaina Raza
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The prevalence of bias in the news media has become a critical issue, affecting public perception on a range of important topics such as political views, health, insurance, resource distributions, religion, race, age, gender, occupation, and climate change. The media has a moral responsibility to ensure accurate information dissemination and to increase awareness about important issues and the potential risks associated with them. This highlights the need for a solution that can help mitigate the spread of false or misleading information and restore public trust in the media.

    Data description: This is a dataset for news media bias covering different dimensions of bias: political, hate speech, toxicity, sexism, ageism, gender identity, gender discrimination, race/ethnicity, climate change, occupation, and spirituality, which makes it a unique contribution. The dataset used for this project does not contain any personally identifiable information (PII).

    The data structure is tabulated as follows:

    • Text: The main content.
    • Dimension: Descriptive category of the text.
    • Biased_Words: A compilation of words regarded as biased.
    • Aspect: Specific sub-topic within the main content.
    • Label: Indicates the presence or absence of bias. The label is ternary: highly biased, slightly biased, or neutral.
    • Toxicity: Indicates the presence (True) or absence (False) of toxicity.
    • Identity_mention: Mention of any identity based on word match.

    Annotation scheme: The labels and annotations in the dataset are generated through a system of active learning, cycling through manual labeling, semi-supervised learning, and human verification. The scheme comprises:

    • Bias Label: Specifies the degree of bias (e.g., no bias, mild, or strong).
    • Words/Phrases Level Biases: Pinpoints specific biased terms or phrases.
    • Subjective Bias (Aspect): Highlights biases pertinent to content dimensions.

    Due to the nuances of semantic match algorithms, certain labels such as 'identity' and 'aspect' may appear distinctively different.

    List of datasets used: We curated different news categories (climate crisis news summaries, occupational, spiritual/faith, general) using RSS feeds to capture different dimensions of news media bias. The annotation is performed using active learning to label each sentence (neutral, slightly biased, or highly biased) and to pick biased words from the news. We also utilize publicly available data from the following sources (our attribution to others):

    • MBIC (media bias): Spinde, Timo, Lada Rudnitckaia, Kanishka Sinha, Felix Hamborg, Bela Gipp, and Karsten Donnay. "MBIC--A Media Bias Annotation Dataset Including Annotator Characteristics." arXiv preprint arXiv:2105.11910 (2021). https://zenodo.org/records/4474336
    • Hyperpartisan news: Kiesel, Johannes, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. "SemEval-2019 Task 4: Hyperpartisan News Detection." In Proceedings of the 13th International Workshop on Semantic Evaluation, pp. 829-839. 2019. https://huggingface.co/datasets/hyperpartisan_news_detection
    • Toxic comment classification: Adams, C.J., Jeffrey Sorensen, Julia Elliott, Lucas Dixon, Mark McDonald, Nithum, and Will Cukierski. 2017. "Toxic Comment Classification Challenge." Kaggle. https://kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge
    • Jigsaw Unintended Bias: Adams, C.J., Daniel Borkan, Inversion, Jeffrey Sorensen, Lucas Dixon, Lucy Vasserman, and Nithum. 2019. "Jigsaw Unintended Bias in Toxicity Classification." Kaggle. https://kaggle.com/competitions/jigsaw-unintended-bias-in-toxicity-classification
    • Age bias: Díaz, Mark, Isaac Johnson, Amanda Lazar, Anne Marie Piper, and Darren Gergle. "Addressing Age-Related Bias in Sentiment Analysis." In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-14. 2018. Age Bias Training and Testing Data - Age Bias and Sentiment Analysis Dataverse (harvard.edu)
    • Multi-dimensional news (Ukraine): Färber, Michael, Victoria Burkard, Adam Jatowt, and Sora Lim. "A Multidimensional Dataset Based on Crowdsourcing for Analyzing and Detecting News Bias." In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 3007-3014. 2020. https://zenodo.org/records/3885351#.ZF0KoxHMLtV
    • Social biases: Sap, Maarten, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. "Social Bias Frames: Reasoning about Social and Power Implications of Language." arXiv preprint arXiv:1911.03891 (2019). https://maartensap.com/social-bias-frames/

    Goal of this dataset: We want to offer open and free access to the dataset, ensuring a wide reach to researchers and AI practitioners across the world. The dataset should be user-friendly, and uploading and accessing the data should be straightforward, to facilitate usage. If you use this dataset, please cite us. Navigating News Narratives: A Media Bias Analysis Dataset © 2023 by Shaina Raza, Vector Institute is licensed under CC BY-NC 4.0.
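    As a brief, illustrative sketch (not part of the dataset documentation), the columns listed above can be explored as follows; the local file name is an assumption.

    python
    import pandas as pd

    # Load the tabular data described above; the file name is hypothetical.
    df = pd.read_csv("media_bias_analysis.csv")

    # Distribution of the ternary bias label.
    print(df["Label"].value_counts())

    # Rows tagged with a particular dimension, e.g. climate change.
    climate = df[df["Dimension"].str.contains("climate", case=False, na=False)]
    print(len(climate), "climate-related rows")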

  5. Data and Code for: Confidence, Self-Selection and Bias in the Aggregate

    • openicpsr.org
    delimited
    Updated Mar 2, 2023
    Cite
    Benjamin Enke; Thomas Graeber; Ryan Oprea (2023). Data and Code for: Confidence, Self-Selection and Bias in the Aggregate [Dataset]. http://doi.org/10.3886/E185741V1
    Available download formats: delimited
    Dataset updated
    Mar 2, 2023
    Dataset provided by
    American Economic Association (http://www.aeaweb.org/)
    Authors
    Benjamin Enke; Thomas Graeber; Ryan Oprea
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The influence of behavioral biases on aggregate outcomes depends in part on self-selection: whether rational people opt more strongly into aggregate interactions than biased individuals. In betting market, auction and committee experiments, we document that some errors are strongly reduced through self-selection, while others are not affected at all or even amplified. A large part of this variation is explained by differences in the relationship between confidence and performance. In some tasks, they are positively correlated, such that self-selection attenuates errors. In other tasks, rational and biased people are equally confident, such that self-selection has no effects on aggregate quantities.

  6. Data from: Deconstructing Bias in Social Preferences Reveals Groupy and Not...

    • openicpsr.org
    stata
    Updated Aug 5, 2020
    Cite
    Rachel Kranton; Matthew Pease; Seth Sanders; Scott Heutell (2020). Deconstructing Bias in Social Preferences Reveals Groupy and Not Groupy Behavior [Dataset]. http://doi.org/10.3886/E120555V1
    Available download formats: stata
    Dataset updated
    Aug 5, 2020
    Dataset provided by
    Cornell University
    Duke University
    UPMC
    Authors
    Rachel Kranton; Matthew Pease; Seth Sanders; Scott Heutell
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    2010 - 2020
    Area covered
    NC, Durham
    Description

    Group divisions are a continual feature of human history, with biases toward people’s own groups shown in both experimental and natural settings. Using a novel within-subject design, this work deconstructs group biases to find significant and robust individual differences; some individuals consistently respond to group divisions, while others do not. We examined individual behavior in two treatments in which subjects make pairwise decisions that determine own and others’ income. In a political treatment, which divided subjects into groups based on their political leanings, political party members showed more ingroup bias than Independents who professed the same political opinions. But this greater bias was also present in a minimal group treatment, showing that stronger group identification was not the driver of higher favoritism in the political setting. Analyzing individual choices across the experiment, we categorize participants as “groupy” or “not groupy,” such that groupy participants have social preferences that change for ingroup and outgroup recipients, while not-groupy participants’ preferences do not change across group context. Demonstrating further that the group identity of the recipient mattered less to their choices, strongly not-groupy subjects made allocation decisions faster. We conclude that observed ingroup biases build on a foundation of heterogeneity in individual groupiness.

  7. Data from: Diversity matters: Robustness of bias measurements in Wikidata

    • zenodo.org
    • data.niaid.nih.gov
    tsv
    Updated May 1, 2023
    Cite
    Paramita das; Sai Keerthana Karnam; Anirban Panda; Bhanu Prakash Reddy Guda; Soumya Sarkar; Animesh Mukherjee (2023). Diversity matters: Robustness of bias measurements in Wikidata [Dataset]. http://doi.org/10.48550/arxiv.2302.14027
    Available download formats: tsv
    Dataset updated
    May 1, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Paramita das; Sai Keerthana Karnam; Anirban Panda; Bhanu Prakash Reddy Guda; Soumya Sarkar; Animesh Mukherjee
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    With the widespread use of knowledge graphs (KG) in various automated AI systems and applications, it is very important to ensure that information retrieval algorithms leveraging them are free from societal biases. Previous works have depicted biases that persist in KGs, as well as employed several metrics for measuring the biases. However, such studies lack the systematic exploration of the sensitivity of the bias measurements, through varying sources of data, or the embedding algorithms used. To address this research gap, in this work, we present a holistic analysis of bias measurement on the knowledge graph. First, we attempt to reveal data biases that surface in Wikidata for thirteen different demographics selected from seven continents. Next, we attempt to unfold the variance in the detection of biases by two different knowledge graph embedding algorithms - TransE and ComplEx. We conduct our extensive experiments on a large number of occupations sampled from the thirteen demographics with respect to the sensitive attribute, i.e., gender. Our results show that the inherent data bias that persists in KG can be altered by specific algorithm bias as incorporated by KG embedding learning algorithms. Further, we show that the choice of the state-of-the-art KG embedding algorithm has a strong impact on the ranking of biased occupations irrespective of gender. We observe that the similarity of the biased occupations across demographics is minimal which reflects the socio-cultural differences around the globe. We believe that this full-scale audit of the bias measurement pipeline will raise awareness among the community while deriving insights related to design choices of data and algorithms both and refrain from the popular dogma of ``one-size-fits-all''.
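    The following toy sketch (not the authors' pipeline) shows one simple way such a bias probe can work once entity embeddings have been learned with TransE or ComplEx: occupations are ranked by their projection onto a "gender direction". The vectors below are random placeholders standing in for learned embeddings.

    python
    import numpy as np

    # Random placeholder embeddings; in practice these would come from a KG
    # embedding model such as TransE or ComplEx.
    rng = np.random.default_rng(0)
    emb = {name: rng.normal(size=64) for name in
           ["male", "female", "engineer", "nurse", "teacher", "pilot"]}

    # A simple "gender direction" in the embedding space.
    direction = emb["male"] - emb["female"]
    direction /= np.linalg.norm(direction)

    # Rank occupations by projection onto the direction (toy bias score).
    occupations = ["engineer", "nurse", "teacher", "pilot"]
    scores = {occ: float(np.dot(emb[occ], direction)) for occ in occupations}
    for occ, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{occ:10s} {s:+.3f}")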

  8. Replication Data for: Cognitive Bias Heterogeneity

    • dataverse.tdl.org
    Updated Aug 15, 2025
    Cite
    Molly McNamara; Molly McNamara (2025). Replication Data for: Cognitive Bias Heterogeneity [Dataset]. http://doi.org/10.18738/T8/754FZT
    Available download formats: text/x-r-notebook (12370), text/x-r-notebook (15773), application/x-rlang-transport (20685), text/x-r-notebook (20656)
    Dataset updated
    Aug 15, 2025
    Dataset provided by
    Texas Data Repository
    Authors
    Molly McNamara; Molly McNamara
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This data and code can be used to replicate the main analysis for "Who Exhibits Cognitive Biases? Mapping Heterogeneity in Attention, Interpretation, and Rumination in Depression." Of note: to protect this dataset against re-identification, consistent with best practices, we have removed the zip code variable and binned age. The analysis code may need to be adjusted slightly to account for this, and the results may vary slightly from the ones in the manuscript as a result.

  9. Dutch-Government-Data-for-Bias-detection

    • huggingface.co
    Cite
    Milena, Dutch-Government-Data-for-Bias-detection [Dataset]. https://huggingface.co/datasets/milenamileentje/Dutch-Government-Data-for-Bias-detection
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Authors
    Milena
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Area covered
    Netherlands, Politics of the Netherlands
    Description

    milenamileentje/Dutch-Government-Data-for-Bias-detection dataset hosted on Hugging Face and contributed by the HF Datasets community

  10. fdata-02-00013-g0001_Social Data: Biases, Methodological Pitfalls, and...

    • frontiersin.figshare.com
    tiff
    Updated Jun 1, 2023
    + more versions
    Cite
    Alexandra Olteanu; Carlos Castillo; Fernando Diaz; Emre Kıcıman (2023). fdata-02-00013-g0001_Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries.tif [Dataset]. http://doi.org/10.3389/fdata.2019.00013.s003
    Available download formats: tiff
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Frontiers
    Authors
    Alexandra Olteanu; Carlos Castillo; Fernando Diaz; Emre Kıcıman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Social data in digital form—including user-generated content, expressed or implicit relations between people, and behavioral traces—are at the core of popular applications and platforms, driving the research agenda of many researchers. The promises of social data are many, including understanding "what the world thinks" about a social issue, brand, celebrity, or other entity, as well as enabling better decision-making in a variety of fields including public policy, healthcare, and economics. Many academics and practitioners have warned against the naïve usage of social data. There are biases and inaccuracies occurring at the source of the data, but also introduced during processing. There are methodological limitations and pitfalls, as well as ethical boundaries and unexpected consequences that are often overlooked. This paper recognizes that the rigor with which different researchers address these issues varies across a wide range. We identify a variety of menaces in the practices around social data use, and organize them in a framework that helps to identify them.

    "For your own sanity, you have to remember that not all problems can be solved. Not all problems can be solved, but all problems can be illuminated." –Ursula Franklin

  11. Data from: Code and data from: Perceived and observed biases within...

    • zenodo.org
    • researchdata.edu.au
    bin, zip
    Updated May 21, 2025
    Cite
    Allison K Shaw; Leila Fouda; Stefano Mezzini; Dongmin Kim; Nilanjan Chatterjee; David Wolfson; Briana Abrahms; Nina Attias; Christine Beardsworth; Roxanne Beltran; Sandra Binning; Kayla Blincow; Ying-Chi Chan; Emanuel A. Fronhofer; Arne Hegemann; Edward Hurme; Fabiola Iannarilli; Julie Kellner; Karen D McCoy; Kasim Rafiq; Marjo Saastamoinen; Ana Sequeira; Mitchell Serota; Petra Sumasgutner; Yun Tao; Martha Torstenson; Scott Yanco; Kristina Beck; Michael Bertram; Larissa Teresa Beumer; Maja Bradarić; Jeanne Clermont; Diego Ellis Soto; Monika Faltusová; John Fieberg; Richard Hall; Andrea Kölzsch; Sandra Lai; Larisa Lee-Cruz; Matthias-Claudio Loretto; Alexandra Loveridge; Marcus Michelangeli; Thomas Mueller; Louise Riotte-Lambert; Nir Sapir; Martina Scacco; Claire S. Teitelbaum; Francesca Cagnacci (2025). Code and data from: Perceived and observed biases within scientific communities: a case study in movement ecology [Dataset]. http://doi.org/10.5281/zenodo.15481349
    Available download formats: zip, bin
    Dataset updated
    May 21, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Allison K Shaw; Leila Fouda; Stefano Mezzini; Dongmin Kim; Nilanjan Chatterjee; David Wolfson; Briana Abrahms; Nina Attias; Christine Beardsworth; Roxanne Beltran; Sandra Binning; Kayla Blincow; Ying-Chi Chan; Emanuel A. Fronhofer; Arne Hegemann; Edward Hurme; Fabiola Iannarilli; Julie Kellner; Karen D McCoy; Kasim Rafiq; Marjo Saastamoinen; Ana Sequeira; Mitchell Serota; Petra Sumasgutner; Yun Tao; Martha Torstenson; Scott Yanco; Kristina Beck; Michael Bertram; Larissa Teresa Beumer; Maja Bradarić; Jeanne Clermont; Diego Ellis Soto; Monika Faltusová; John Fieberg; Richard Hall; Andrea Kölzsch; Sandra Lai; Larisa Lee-Cruz; Matthias-Claudio Loretto; Alexandra Loveridge; Marcus Michelangeli; Thomas Mueller; Louise Riotte-Lambert; Nir Sapir; Martina Scacco; Claire S. Teitelbaum; Francesca Cagnacci
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This contains model code and data from the paper titled
    "Perceived and observed biases within scientific communities: a case study in movement ecology"

    By: Shaw AK, Fouda L, Mezzini S, Kim D, Chatterjee N, Wolfson D, Abrahms B, Attias N, Beardsworth CE, Beltran R, Binning SA, Blincow KM, Chan Y-C, Fronhofer EA, Hegemann A, Hurme ER, Iannarilli F, Kellner JB, McCoy KD, Rafiq K, Saastamoinen M, Sequeira AMM, Serota MW, Sumasgutner P, Tao Y, Torstenson M, Yanco SW, Beck KB, Bertram MG, Beumer LT, Bradarić M, Clermont J, Ellis-Soto D, Faltusová M, Fieberg J, Hall RJ, Kölzsch A, Lai S, Lee-Cruz L, Loretto M-C, Loveridge A, Michelangeli M, Mueller T, Riotte-Lambert L, Sapir N, Scacco M, Teitelbaum CS, Cagnacci F

    Published in: Proceedings of the Royal Society B
    Abstract:

    Who conducts biological research, where, and how results are disseminated varies among geographies and identities. Identifying and documenting these forms of bias by research communities is a critical step towards addressing them. We documented perceived and observed biases in movement ecology, a rapidly expanding sub-discipline of biology, which is strongly underpinned by fieldwork and technology use. We surveyed attendees before an international conference to assess a baseline within-discipline perceived bias (uninformed perceived bias). We analysed geographic patterns in Movement Ecology articles, finding discrepancies between the country of the authors’ affiliation and study site location, related to national economics. We analysed race-gender identities of USA biology researchers (the closest-to-our-sub-discipline with data available), finding that they differed from national demographics. Finally, we discussed the quantitatively-observed bias at the conference, to assess within-discipline perceived bias informed with observational data (informed perceived bias). Although the survey indicated most conference participants as bias-aware, conversations only covered a subset of biases. We discuss potential causes of bias (parachute-science, fieldwork accessibility), solutions, and the need to evaluate mitigatory action effectiveness. Undertaking data-driven analysis of bias within sub-disciplines can help identify specific barriers and move towards the inclusion of a greater diversity of participants in the scientific process.

  12. Data from: Bias, Randomness, and Blind-Faith: Large Language Model Code...

    • zenodo.org
    Updated Jan 21, 2025
    Cite
    Anonymous; Anonymous (2025). Bias, Randomness, and Blind-Faith: Large Language Model Code Generation and Security Analysis [Dataset]. http://doi.org/10.5281/zenodo.14714787
    Dataset updated
    Jan 21, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Anonymous; Anonymous
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Testing bias in code generation and security in the large language models (LLMs) ChatGPT, Claude, and Gemini. This data accompanies a paper submitted to USENIX '25. In experimentation, three trials were completed, each testing five different categories.

    The file Trial Charts.zip includes the trial model versions, final results from each category and test, as well as results from manual analysis. Each bias has its own .csv (Sex, Age, Race & Ethnicity, Experience, Special Circumstances) for every trial's results and another file with the first letter of the bias + Overall (i.e. A Overall for Age, SC Overall for Special Circumstances) for overall results of the category. The files labeled Manual Analysis (or Manual A.) highlight some of the differences between the biased code and the control code. It also includes notable outputs from each bias.

    The Trial Results.zip includes all of the raw data from testing the biases. The data was collected in OneNote. The files "Trial [1-3]" include all outputs from all categories. The files "Trial [1-3] Retests" include any non-control retests. The files "[1-3] Control Retests" give the results of retesting the control for all three trials. The files "[1-3] A and B Tests" show the results from retesting with the new labels "a" and "b".

  13. Data_Sheet_1_Gender Bias in Artificial Intelligence: Severity Prediction at...

    • frontiersin.figshare.com
    docx
    Updated May 30, 2023
    Cite
    Heewon Chung; Chul Park; Wu Seong Kang; Jinseok Lee (2023). Data_Sheet_1_Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19.docx [Dataset]. http://doi.org/10.3389/fphys.2021.778720.s001
    Available download formats: docx
    Dataset updated
    May 30, 2023
    Dataset provided by
    Frontiers
    Authors
    Heewon Chung; Chul Park; Wu Seong Kang; Jinseok Lee
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Artificial intelligence (AI) technologies have been applied in various medical domains to predict patient outcomes with high accuracy. As AI becomes more widely adopted, the problem of model bias is increasingly apparent. In this study, we investigate the model bias that can occur when training a model using datasets for only one particular gender and aim to present new insights into the bias issue. For the investigation, we considered an AI model that predicts severity at an early stage based on the medical records of coronavirus disease (COVID-19) patients. For 5,601 confirmed COVID-19 patients, we used 37 medical records, namely, basic patient information, physical index, initial examination findings, clinical findings, comorbidity diseases, and general blood test results at an early stage. To investigate the gender-based AI model bias, we trained and evaluated two separate models—one that was trained using only the male group, and the other using only the female group. When the model trained by the male-group data was applied to the female testing data, the overall accuracy decreased—sensitivity from 0.93 to 0.86, specificity from 0.92 to 0.86, accuracy from 0.92 to 0.86, balanced accuracy from 0.93 to 0.86, and area under the curve (AUC) from 0.97 to 0.94. Similarly, when the model trained by the female-group data was applied to the male testing data, once again, the overall accuracy decreased—sensitivity from 0.97 to 0.90, specificity from 0.96 to 0.91, accuracy from 0.96 to 0.91, balanced accuracy from 0.96 to 0.90, and AUC from 0.97 to 0.95. Furthermore, when we evaluated each gender-dependent model with the test data from the same gender used for training, the resultant accuracy was also lower than that from the unbiased model.
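    A minimal sketch of the cross-gender evaluation protocol described above is given below; it is illustrative only, the data here is synthetic (whereas the study used 37 clinical features from COVID-19 EHRs), and the classifier is a stand-in.

    python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score, roc_auc_score

    # Synthetic stand-in data: two groups with slightly shifted feature
    # distributions, 37 features each (matching the count mentioned above).
    rng = np.random.default_rng(42)

    def make_group(n, shift):
        X = rng.normal(loc=shift, size=(n, 37))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
        return X, y

    X_male, y_male = make_group(600, shift=0.0)
    X_female, y_female = make_group(600, shift=0.3)

    # Train on one group only, then evaluate in-group and cross-group.
    model = LogisticRegression(max_iter=1000).fit(X_male, y_male)
    for name, X, y in [("male (in-group)", X_male, y_male),
                       ("female (cross-group)", X_female, y_female)]:
        pred = model.predict(X)
        proba = model.predict_proba(X)[:, 1]
        print(f"{name}: sensitivity={recall_score(y, pred):.2f}, "
              f"AUC={roc_auc_score(y, proba):.2f}")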

  14. NewsMediaBias-Plus Dataset

    • zenodo.org
    • huggingface.co
    bin, zip
    Updated Nov 29, 2024
    Cite
    Shaina Raza; Shaina Raza (2024). NewsMediaBias-Plus Dataset [Dataset]. http://doi.org/10.5281/zenodo.13961155
    Available download formats: bin, zip
    Dataset updated
    Nov 29, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Shaina Raza; Shaina Raza
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    NewsMediaBias-Plus Dataset

    Overview

    The NewsMediaBias-Plus dataset is designed for the analysis of media bias and disinformation by combining textual and visual data from news articles. It aims to support research in detecting, categorizing, and understanding biased reporting in media outlets.

    Dataset Description

    NewsMediaBias-Plus pairs news articles with relevant images and annotations indicating perceived biases and the reliability of the content. It adds a multimodal dimension for bias detection in news media.

    Contents

    • unique_id: Unique identifier for each news item. Each unique_id matches an image for the same article.
    • outlet: The publisher of the article.
    • headline: The headline of the article.
    • article_text: The full content of the news article.
    • image_description: Description of the paired image.
    • image: The file path of the associated image.
    • date_published: The date the article was published.
    • source_url: The original URL of the article.
    • canonical_link: The canonical URL of the article.
    • new_categories: Categories assigned to the article.
    • news_categories_confidence_scores: Confidence scores for each category.

    Annotation Labels

    • text_label: Indicates the likelihood of the article being disinformation:

      • Likely: Likely to be disinformation.
      • Unlikely: Unlikely to be disinformation.
    • multimodal_label: Indicates the likelihood of disinformation from the combination of the text snippet and image content:

      • Likely: Likely to be disinformation.
      • Unlikely: Unlikely to be disinformation.

    Getting Started

    Prerequisites

    • Python 3.6+
    • Pandas
    • Hugging Face Datasets
    • Hugging Face Hub

    Installation

    Load the dataset into Python:

    python
    from datasets import load_dataset

    ds = load_dataset("vector-institute/newsmediabias-plus")
    print(ds)               # View structure and splits
    print(ds['train'][0])   # Access the first record of the train split
    print(ds['train'][:5])  # Access the first five records

    Load a Few Records

    python
    from datasets import load_dataset

    # Load the dataset in streaming mode
    streamed_dataset = load_dataset("vector-institute/newsmediabias-plus", streaming=True)

    # Get an iterable dataset
    dataset_iterable = streamed_dataset['train'].take(5)

    # Print the records
    for record in dataset_iterable:
        print(record)

    Contributions

    Contributions are welcome! You can:

    • Add Data: Contribute more data points.
    • Refine Annotations: Improve annotation accuracy.
    • Share Usage Examples: Help others use the dataset effectively.

    To contribute, fork the repository and create a pull request with your changes.

    License

    This dataset is released under a non-commercial license. See the LICENSE file for more details.

    Citation

    Please cite the dataset using this BibTeX entry:

    bibtex
    @misc{vector_institute_2024_newsmediabias_plus,
      title  = {NewsMediaBias-Plus: A Multimodal Dataset for Analyzing Media Bias},
      author = {Vector Institute Research Team},
      year   = {2024},
      url    = {https://huggingface.co/datasets/vector-institute/newsmediabias-plus}
    }

    Contact

    For questions or support, contact Shaina Raza at: shaina.raza@vectorinstitute.ai

    Disclaimer and User Guidance

    Disclaimer: The labels Likely and Unlikely are based on LLM annotations and expert assessments, intended for informational use only. They should not be considered final judgments.

    Guidance: This dataset is for research purposes. Cross-reference findings with other reliable sources before drawing conclusions. The dataset aims to encourage critical thinking, not provide definitive classifications.

  15. Data from: Decisions reduce sensitivity to subsequent information

    • search.dataone.org
    • dataone.org
    • +1more
    Updated Apr 2, 2025
    Cite
    Zohar Z. Bronfman; Noam Brezis; Rani Moran; Konstantinos Tsetsos; Tobias Donner; Marius Usher (2025). Decisions reduce sensitivity to subsequent information [Dataset]. http://doi.org/10.5061/dryad.40f6v
    Dataset updated
    Apr 2, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Zohar Z. Bronfman; Noam Brezis; Rani Moran; Konstantinos Tsetsos; Tobias Donner; Marius Usher
    Time period covered
    Jun 20, 2020
    Description

    Behavioural studies over half a century indicate that making categorical choices alters beliefs about the state of the world. People seem biased to confirm previous choices, and to suppress contradicting information. These choice-dependent biases imply a fundamental bound of human rationality. However, it remains unclear whether these effects extend to lower level decisions, and only little is known about the computational mechanisms underlying them. Building on the framework of sequential-sampling models of decision-making, we developed novel psychophysical protocols that enable us to dissect quantitatively how choices affect the way decision-makers accumulate additional noisy evidence. We find robust choice-induced biases in the accumulation of abstract numerical (experiment 1) and low-level perceptual (experiment 2) evidence. These biases deteriorate estimations of the mean value of the numerical sequence (experiment 1) and reduce the likelihood to revise decisions (experiment 2). Co...

  16. Age Bias Training and Testing Data

    • dataverse.harvard.edu
    Updated Jul 10, 2020
    Cite
    Mark Diaz (2020). Age Bias Training and Testing Data [Dataset]. http://doi.org/10.7910/DVN/F6EMTS
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 10, 2020
    Dataset provided by
    Harvard Dataverse
    Authors
    Mark Diaz
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Training and testing data annotated by a panel survey sample of older adults (aged 50+) and used to create a basic, maximum entropy bag-of-words sentiment classifier. Testing data is scraped from blog posts discussing aging authored by older adults (see published work: https://doi.org/10.1145/3173574.3173986). Training data is a re-annotated subset of the Sentiment140 training data set containing the strings "old" and "young" (see Sentiment140: http://help.sentiment140.com/for-students). For model building, the subset was re-introduced into the full data set, replacing the original annotations.
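    For orientation only (not the authors' code): a maximum entropy classifier over bag-of-words features is equivalent to multinomial logistic regression on word counts, so a comparable baseline can be sketched as follows; the texts and labels are placeholders, not the annotated data.

    python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Placeholder texts and sentiment labels standing in for the annotated data.
    texts = ["getting old is wonderful",
             "feeling old and tired today",
             "young at heart and loving it",
             "aging gracefully and happily"]
    labels = ["positive", "negative", "positive", "positive"]

    # Bag-of-words counts + logistic regression (a maximum entropy classifier).
    clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(texts, labels)
    print(clf.predict(["old but still wonderful"]))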

  17. Data - Biases in the metabarcoding of plant pathogens - Dataset - DataStore

    • datastore.landcareresearch.co.nz
    Updated Dec 13, 2018
    + more versions
    Cite
    (2018). Data - Biases in the metabarcoding of plant pathogens - Dataset - DataStore [Dataset]. https://datastore.landcareresearch.co.nz/dataset/biases-in-the-metabarcoding-of-plant-pathogens
    Dataset updated
    Dec 13, 2018
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    We investigated and analysed the causes of differences between next-generation sequencing metabarcoding approaches and traditional DNA cloning in the detection and quantification of recognized species of rust fungi from environmental samples. The data support this article: Makiola A, Dickie IA, Holdaway RJ, Wood JR, Orwin KH, Lee CK, Glare TR. 2018. Biases in the metabarcoding of plant pathogens using rust fungi as a model system. MicrobiologyOpen. The resources (data files) represent the raw sequence data for analysis supporting this manuscript. Leaf samples from 30 sites were collected and analysed using Illumina MiSeq (folder ‘Illumina’), Ion Torrent PGM (file ‘IonTorrent.fastq’), and cloning followed by Sanger sequencing (file ‘CloningSanger.fna’). The ‘barcodes.csv’ file contains the barcode names and the corresponding sites.

  18. political-bias

    • huggingface.co
    Updated May 20, 2024
    Cite
    Christopher Jones (2024). political-bias [Dataset]. https://huggingface.co/datasets/cajcodes/political-bias
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    May 20, 2024
    Authors
    Christopher Jones
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Political Bias Dataset

      Overview
    

    The Political Bias dataset contains 658 synthetic statements, each annotated with a bias rating ranging from 0 to 4. These ratings represent a spectrum from highly conservative (0) to highly liberal (4). The dataset was generated using GPT-4, aiming to facilitate research and development in bias detection and reduction in textual data. Special emphasis was placed on distinguishing between moderate biases on both sides, as this has proven to… See the full description on the dataset page: https://huggingface.co/datasets/cajcodes/political-bias.
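    Since the dataset is hosted on the Hugging Face Hub, it can presumably be loaded with the datasets library as sketched below; the split and column layout accessed here are assumptions and should be checked against the dataset page.

    python
    from datasets import load_dataset

    # Repository id taken from the listing above.
    ds = load_dataset("cajcodes/political-bias")
    print(ds)  # inspect available splits and columns before relying on names

    example = ds["train"][0]  # "train" split assumed
    print(example)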

  19. data bias corr

    • kaggle.com
    Updated Mar 11, 2022
    Cite
    tyur muthia (2022). data bias corr [Dataset]. https://www.kaggle.com/tyurmuthia/data-bias-corr/discussion
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Mar 11, 2022
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    tyur muthia
    Description

    Dataset

    This dataset was created by tyur muthia


  20. Data-self-contruals and cognitive biases.sav

    • figshare.com
    tar
    Updated Mar 12, 2024
    + more versions
    Cite
    Jing Li (2024). Data-self-contruals and cognitive biases.sav [Dataset]. http://doi.org/10.6084/m9.figshare.25391464.v1
    Available download formats: tar
    Dataset updated
    Mar 12, 2024
    Dataset provided by
    figshare (http://figshare.com/)
    Authors
    Jing Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This study explores the role of self-construal in cognitive biases among Chinese college students, using structural equation modeling on data from 748 undergraduates in China. It finds that independent self-construal is linked to positive cognitive bias, while interdependent self-construal shows a balanced response to positive and negative stimuli. Key mediators like attentional control, self-esteem, cognitive reappraisal, and need to belong were identified, highlighting the importance of self-construal in information processing and mental health. The research supports culturally tailored mental health interventions and suggests expanding this inquiry to other cultures with longitudinal studies.
