100+ datasets found
  1. Replication Data for: Cognitive Bias Heterogeneity

    • dataverse.tdl.org
    Updated Aug 15, 2025
    Cite
    Molly McNamara (2025). Replication Data for: Cognitive Bias Heterogeneity [Dataset]. http://doi.org/10.18738/T8/754FZT
    Available download formats: text/x-r-notebook (12370), text/x-r-notebook (15773), application/x-rlang-transport (20685), text/x-r-notebook (20656)
    Dataset updated
    Aug 15, 2025
    Dataset provided by
    Texas Data Repository
    Authors
    Molly McNamara
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    This data and code can be used to replicate the main analysis for "Who Exhibits Cognitive Biases? Mapping Heterogeneity in Attention, Interpretation, and Rumination in Depression." Note: to protect participants from re-identification, consistent with best practices, we have removed the zip code variable and binned age. The analysis code may need to be adjusted slightly to account for this, and the results may vary slightly from those in the manuscript as a result.

  2. Data bias

    • kaggle.com
    zip
    Updated Mar 11, 2022
    Cite
    tyur muthia (2022). Data bias [Dataset]. https://www.kaggle.com/datasets/tyurmuthia/data-bias
    Available download formats: zip (654062 bytes)
    Dataset updated
    Mar 11, 2022
    Authors
    tyur muthia
    Description

    Dataset

    This dataset was created by tyur muthia


  3. What is Algorithmic Bias and why is it bad? - AI Ethics in a Nutshell

    • meta4ds.fokus.fraunhofer.de
    html
    Cite
    NFDI for Data Science and AI, What is Algorithmic Bias and why is it bad? - AI Ethics in a Nutshell [Dataset]. https://meta4ds.fokus.fraunhofer.de/datasets/nredoam6ctc?locale=en
    Available download formats: html
    Dataset authored and provided by
    NFDI for Data Science and AI
    Description

    We often hear that AI systems tend to be "unfair" or "biased". But when do we speak of bias? Why is it there? And, are we doing enough to prevent it?

    This video is a short excerpt from our series Conversations on AI Ethics, in which bias, trustworthiness, and other related issues are discussed in more detail (https://youtube.com/playlist?list=PLiv4TocTZt7NIu58pguXJK4hjeesIkS-N&feature=shared).

    Liked the video? Give us a thumbs-up, then share and subscribe for more content like this!

  4. News Bias Data

    • kaggle.com
    zip
    Updated Apr 8, 2025
    Cite
    Nitish Kumar Thakur (2025). News Bias Data [Dataset]. https://www.kaggle.com/datasets/nitishxthakur/news-bias-data/data
    Available download formats: zip (367303570 bytes)
    Dataset updated
    Apr 8, 2025
    Authors
    Nitish Kumar Thakur
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) (https://creativecommons.org/licenses/by-nc-sa/4.0/)
    License information was derived automatically

    Description

    The prevalence of bias in the news media has become a critical issue, affecting public perception on a range of important topics such as political views, health, insurance, resource distributions, religion, race, age, gender, occupation, and climate change. The media has a moral responsibility to ensure accurate information dissemination and to increase awareness about important issues and the potential risks associated with them. This highlights the need for a solution that can help mitigate the spread of false or misleading information and restore public trust in the media.

    Data description: This is a dataset for news media bias covering multiple dimensions of bias: political, hate speech, toxicity, sexism, ageism, gender identity, gender discrimination, race/ethnicity, climate change, occupation, and spirituality, which makes it a unique contribution. The dataset used for this project does not contain any personally identifiable information (PII).

    Data Format: The fields are:

    ID: Numeric unique identifier
    Text: Main content
    Dimension: Categorical descriptor of the text
    Biased_Words: List of words considered biased
    Aspect: Specific topic within the text
    Label: Neutral, Slightly Biased, or Highly Biased

    Annotation Scheme: The annotation scheme is based on Active learning, which is Manual Labeling --> Semi-Supervised Learning --> Human Verifications (iterative process)

    Bias Label: Indicates the presence/absence of bias (e.g., no bias, mild, strong)
    Words/Phrases Level Biases: Identifies specific biased words/phrases
    Subjective Bias (Aspect): Captures biases related to content aspects
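
    The schema above maps directly onto a simple record structure. A minimal sketch in Python (the field names and label values come from the dataset description; the record content is an invented placeholder, not real data from the Kaggle dataset):

```python
# One record under the stated schema. Field names and the three label values
# are taken from the dataset description; the content is an invented placeholder.
record = {
    "ID": 1,
    "Text": "Example sentence about a policy debate.",
    "Dimension": "political",
    "Biased_Words": ["radical"],
    "Aspect": "elections",
    "Label": "Slightly Biased",
}

LABELS = ("Neutral", "Slightly Biased", "Highly Biased")

def is_biased(rec):
    """True for any record not annotated as Neutral."""
    assert rec["Label"] in LABELS
    return rec["Label"] != "Neutral"

print(is_biased(record))  # True
```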

  5. Replication Data for: Reducing Political Bias in Political Science Estimates...

    • dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    Cite
    Zigerell, Lawrence (2023). Replication Data for: Reducing Political Bias in Political Science Estimates [Dataset]. http://doi.org/10.7910/DVN/PZLCJM
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Zigerell, Lawrence
    Description

    Political science researchers have flexibility in how to analyze data, how to report data, and whether to report on data. Review of examples of reporting flexibility from the race and sex discrimination literature illustrates how research design choices can influence estimates and inferences. This reporting flexibility—coupled with the political imbalance among political scientists—creates the potential for political bias in reported political science estimates, but this potential for political bias can be reduced or eliminated through preregistration and preacceptance, in which researchers commit to a research design before completing data collection. Removing the potential for reporting flexibility can raise the credibility of political science research.

  6. AI Bias Audit Services Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated Feb 11, 2025
    Cite
    Archive Market Research (2025). AI Bias Audit Services Report [Dataset]. https://www.archivemarketresearch.com/reports/ai-bias-audit-services-18701
    Available download formats: doc, pdf, ppt
    Dataset updated
    Feb 11, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    Market Size and Growth: The global market for AI Bias Audit Services is valued at USD 923 million in 2023 and is expected to grow at a CAGR of 6.5% from 2023 to 2033, reaching USD 1,618 million by 2033. This growth is attributed to the increasing adoption of artificial intelligence (AI) and machine learning (ML) algorithms across various industries. AI algorithms can introduce biases that lead to unfair or discriminatory outcomes, necessitating bias audits.

    Market Dynamics: Key drivers of the AI Bias Audit Services market include the growing awareness of AI bias, increasing regulatory pressure to address AI fairness, and the need for organizations to ensure ethical AI development. The market is segmented by type (cloud-based and on-premise) and application (large enterprises and SMEs). North America is the dominant region, followed by Europe and Asia Pacific. Prominent market players include Resolution Economics, Mosaic Data Science, Kanarys, APTMetrics, and BABL AI. The market is expected to witness continued growth as organizations prioritize mitigating AI biases and ensuring fair and responsible use of AI technology.
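
    Market projections like these follow the standard compound-annual-growth-rate formula, future = base × (1 + CAGR)^years. A sketch of that formula applied to the quoted figures (the exact base year and compounding window the report assumes are not stated, so the computed endpoint is indicative only and need not match the report's own number):

```python
def project_cagr(base_value, cagr, years):
    """Project a value forward under a constant compound annual growth rate."""
    return base_value * (1.0 + cagr) ** years

# Figures quoted in the report (USD millions): 923 base, 6.5% CAGR.
# Treating 2023 -> 2033 as a ten-year window is our assumption.
projected = project_cagr(923, 0.065, 10)
print(round(projected))
```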

  7. Data from: Overlooked biases from misidentifications of causal structures

    • dataverse.harvard.edu
    • search-demo.dataone.org
    Updated Mar 5, 2024
    Cite
    Simone Cenci (2024). Overlooked biases from misidentifications of causal structures [Dataset]. http://doi.org/10.7910/DVN/JMQHJN
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Mar 5, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Simone Cenci
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Supporting code for: Cenci, S., "Overlooked biases from misidentifications of causal structures," The Journal of Finance and Data Science (2024).

  8. fdata-02-00029_Reflections on Gender Analyses of Bibliographic Corpora.pdf

    • figshare.com
    • frontiersin.figshare.com
    pdf
    Updated May 31, 2023
    Cite
    Helena Mihaljević; Marco Tullney; Lucía Santamaría; Christian Steinfeldt (2023). fdata-02-00029_Reflections on Gender Analyses of Bibliographic Corpora.pdf [Dataset]. http://doi.org/10.3389/fdata.2019.00029.s001
    Available download formats: pdf
    Dataset updated
    May 31, 2023
    Dataset provided by
    Frontiers
    Authors
    Helena Mihaljević; Marco Tullney; Lucía Santamaría; Christian Steinfeldt
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    The interplay between an academic's gender and their scholarly output is a riveting topic at the intersection of scientometrics, data science, gender studies, and sociology. Its effects can be studied to analyze the role of gender in research productivity, tenure and promotion standards, collaboration and networks, or scientific impact, among others. The typical methodology in this field of research is based on a number of assumptions that are customarily not discussed in detail in the relevant literature, but undoubtedly merit a critical examination. Presumably the most confronting aspect is the categorization of gender. An author's gender is typically inferred from their name, further reduced to a binary feature by an algorithmic procedure. This and subsequent data processing steps introduce biases whose effects are hard to estimate. In this report we describe said problems and discuss the reception and interplay of this line of research within the field. We also outline the effect of obstacles, such as non-availability of data and code for transparent communication. Building on our research on gender effects on scientific publications, we challenge the prevailing methodology in the field and offer a critical reflection on some of its flaws and pitfalls. Our observations are meant to open up the discussion around the need and feasibility of more elaborated approaches to tackle gender in conjunction with analyses of bibliographic sources.

  9. Replication Data for: Publication Biases in Replication Studies

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 22, 2023
    Cite
    Berinsky, Adam J.; Druckman, James N.; Yamamoto, Teppei (2023). Replication Data for: Publication Biases in Replication Studies [Dataset]. http://doi.org/10.7910/DVN/BJMZNR
    Dataset updated
    Nov 22, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Berinsky, Adam J.; Druckman, James N.; Yamamoto, Teppei
    Description

    One of the strongest findings across the sciences is that publication bias occurs. Of particular note is a “file drawer bias” where statistically significant results are privileged over non-significant results. Recognition of this bias, along with increased calls for “open science,” has led to an emphasis on replication studies. Yet, few have explored publication bias and its consequences in replication studies. We offer a model of the publication process involving an initial study and a replication. We use the model to describe three types of publication biases: 1) file drawer bias, 2) a “repeat study” bias against the publication of replication studies, and 3) a “gotcha bias” where replication results that run contrary to a prior study are more likely to be published. We estimate the model’s parameters with a vignette experiment conducted with political science professors teaching at Ph.D.-granting institutions in the United States. We find evidence of all three types of bias, although those explicitly involving replication studies are notably smaller. This bodes well for the replication movement. That said, the aggregation of all of the biases increases the number of false positives in a literature. We conclude by discussing a path for future work on publication biases.

  10. EartH2Observe, WFDEI and ERA-Interim data Merged and Bias-corrected for...

    • dataservices.gfz-potsdam.de
    Updated 2016
    Cite
    Stefan Lange (2016). EartH2Observe, WFDEI and ERA-Interim data Merged and Bias-corrected for ISIMIP (EWEMBI) [Dataset]. http://doi.org/10.5880/pik.2016.004
    Dataset updated
    2016
    Dataset provided by
    datacite
    GFZ Data Services
    Authors
    Stefan Lange
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    The EWEMBI dataset was compiled to support the bias correction of climate input data for the impact assessments carried out in phase 2b of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP2b; Frieler et al., 2017), which will contribute to the 2018 IPCC special report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways. The EWEMBI data cover the entire globe at 0.5° horizontal and daily temporal resolution from 1979 to 2013. Data sources of EWEMBI are ERA-Interim reanalysis data (ERAI; Dee et al., 2011), WATCH forcing data methodology applied to ERA-Interim reanalysis data (WFDEI; Weedon et al., 2014), eartH2Observe forcing data (E2OBS; Calton et al., 2016) and NASA/GEWEX Surface Radiation Budget data (SRB; Stackhouse Jr. et al., 2011). The SRB data were used to bias-correct E2OBS shortwave and longwave radiation (Lange, 2018). Variables included in the EWEMBI dataset are Near Surface Relative Humidity, Near Surface Specific Humidity, Precipitation, Snowfall Flux, Surface Air Pressure, Surface Downwelling Longwave Radiation, Surface Downwelling Shortwave Radiation, Near Surface Wind Speed, Near-Surface Air Temperature, Daily Maximum Near Surface Air Temperature, Daily Minimum Near Surface Air Temperature, Eastward Near-Surface Wind and Northward Near-Surface Wind. For data sources, units and short names of all variables see Frieler et al. (2017, Table 1).
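
    The stated resolution pins down the size of each daily field. A small sketch of the dimensions implied by a regular 0.5° global lat-lon grid over the stated 1979-2013 period (the regular-grid layout is an assumption; the dataset's own coordinate variables are authoritative):

```python
from datetime import date

RESOLUTION = 0.5  # degrees, as stated in the description

n_lat = int(180 / RESOLUTION)   # latitudes spanning 90S..90N
n_lon = int(360 / RESOLUTION)   # longitudes spanning 180W..180E
cells_per_day = n_lat * n_lon   # values per variable per daily time step

# Daily coverage for 1979-2013 inclusive, as stated in the description.
n_days = (date(2014, 1, 1) - date(1979, 1, 1)).days

print(n_lat, n_lon, cells_per_day, n_days)  # 360 720 259200 12784
```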

  11. Data from: Citizen science can complement professional invasive plant...

    • datadryad.org
    • search.dataone.org
    • +1 more
    zip
    Updated Sep 11, 2024
    Cite
    Monica Dimson (2024). Citizen science can complement professional invasive plant surveys and improve estimates of suitable habitat [Dataset]. http://doi.org/10.5068/D1769Q
    Available download formats: zip
    Dataset updated
    Sep 11, 2024
    Dataset provided by
    Dryad
    Authors
    Monica Dimson
    Time period covered
    Jun 6, 2023
    Description

    Data from: Citizen science can complement professional invasive plant surveys and improve estimates of suitable habitat

    Diversity and Distributions, 00, 1–16. https://doi.org/10.1111/ddi.13749

    Access this dataset on Dryad: https://doi.org/10.5068/D1769Q

    R Scripts and Data Tables

    Scripts

    File: bias_index.R

    Description: R script for calculating bias in: 1) all iNaturalist plant observations and 2) iNaturalist and professional observations of the 4 study species, Hedychium gardnerianum, Lantana camara, Leucaena leucocephala, and Psidium cattleianum.

    File: hsm.R

    Description: R script for: 1) producing Hedychium gardnerianum, Lantana camara, Leucaena leucocephala, and Psidium cattleianum habitat suitability models and 2) calculating overlap among model series with Schoener's D.
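
    The overlap statistic named above, Schoener's D, compares two normalized suitability surfaces. A minimal sketch of the statistic (the toy values are invented, and the script's own R implementation may differ):

```python
def schoeners_d(s1, s2):
    """Schoener's D niche-overlap statistic between two suitability surfaces.

    Each surface is normalized to sum to 1, then D = 1 - 0.5 * sum(|p1 - p2|),
    ranging from 0 (no overlap) to 1 (identical distributions).
    """
    t1, t2 = sum(s1), sum(s2)
    p1 = [v / t1 for v in s1]
    p2 = [v / t2 for v in s2]
    return 1.0 - 0.5 * sum(abs(a - b) for a, b in zip(p1, p2))

# Toy 4-cell suitability grids (invented values for illustration).
print(schoeners_d([0.2, 0.3, 0.4, 0.1], [0.2, 0.3, 0.4, 0.1]))  # 1.0 for identical surfaces
```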

    Tables [biasclasses folder]

    File: area_disturb_iv.txt

    Description: Comma-delimited file containing the ...

  12. Replication Data for: Bias Amplification and Bias Unmasking

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Jun 4, 2016
    Cite
    Joel Middleton (2016). Replication Data for: Bias Amplification and Bias Unmasking [Dataset]. http://doi.org/10.7910/DVN/UO5WQ4
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Jun 4, 2016
    Dataset provided by
    Harvard Dataverse
    Authors
    Joel Middleton
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    In the analysis of causal effects in non-experimental studies, conditioning on observable covariates is one way to try to reduce unobserved confounder bias. However, a developing literature has shown that conditioning on certain covariates may increase bias, and the mechanisms underlying this phenomenon have not been fully explored. We add to the literature on bias-increasing covariates by first introducing a way to decompose omitted variable bias into three constituent parts: bias due to an unobserved confounder, bias due to excluding observed covariates, and bias due to amplification. This leads to two important findings. While instruments have been the primary focus of the bias amplification literature to date, we show that the popular approach of adding group fixed effects can lead to bias amplification as well. This is an important finding because many practitioners think that fixed effects are a convenient way to account for any and all group-level confounding and are at worst harmless. The second finding introduces the concept of bias unmasking and shows how it can be even more insidious than bias amplification in some cases. After introducing these new results analytically, we use constructed observational placebo studies to illustrate bias amplification and bias unmasking with real data. Finally, we propose a way to add bias decomposition information to graphical displays for sensitivity analysis to help practitioners think through the potential for bias amplification and bias unmasking in actual applications.

  13. Data from: EartH2Observe, WFDEI and ERA-Interim data Merged and...

    • dataservices.gfz-potsdam.de
    Updated 2019
    Cite
    Stefan Lange (2019). EartH2Observe, WFDEI and ERA-Interim data Merged and Bias-corrected for ISIMIP (EWEMBI) [Dataset]. http://doi.org/10.5880/pik.2019.004
    Dataset updated
    2019
    Dataset provided by
    datacite
    GFZ Data Services
    Authors
    Stefan Lange
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    VERSION HISTORY: On June 26, 2018 all files were republished due to the incorporation of additional observational data covering years 2014 to 2016. Prior to that date, the dataset only covered years 1979 to 2013. Data for all years prior to 2014 are identical in this and the original version of the dataset.

    DATA DESCRIPTION: The EWEMBI dataset was compiled to support the bias correction of climate input data for the impact assessments carried out in phase 2b of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP2b; Frieler et al., 2017), which will contribute to the 2018 IPCC special report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways. The EWEMBI data cover the entire globe at 0.5° horizontal and daily temporal resolution from 1979 to 2013. Data sources of EWEMBI are ERA-Interim reanalysis data (ERAI; Dee et al., 2011), WATCH forcing data methodology applied to ERA-Interim reanalysis data (WFDEI; Weedon et al., 2014), eartH2Observe forcing data (E2OBS; Calton et al., 2016) and NASA/GEWEX Surface Radiation Budget data (SRB; Stackhouse Jr. et al., 2011). The SRB data were used to bias-correct E2OBS shortwave and longwave radiation (Lange, 2018). Variables included in the EWEMBI dataset are Near Surface Relative Humidity, Near Surface Specific Humidity, Precipitation, Snowfall Flux, Surface Air Pressure, Surface Downwelling Longwave Radiation, Surface Downwelling Shortwave Radiation, Near Surface Wind Speed, Near-Surface Air Temperature, Daily Maximum Near Surface Air Temperature, Daily Minimum Near Surface Air Temperature, Eastward Near-Surface Wind and Northward Near-Surface Wind. For data sources, units and short names of all variables see Frieler et al. (2017, Table 1).

  14. Data from: Racial Bias in AI-Generated Images

    • ssh.datastations.nl
    • openicpsr.org
    Updated Aug 1, 2024
    Cite
    Y. Yang (2024). Racial Bias in AI-Generated Images [Dataset]. http://doi.org/10.17026/SS/7MQV4M
    Available download formats: text/x-fixed-field (28980), application/x-spss-syntax (1998), tsv (45171)
    Dataset updated
    Aug 1, 2024
    Dataset provided by
    DANS Data Station Social Sciences and Humanities
    Authors
    Y. Yang
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This file is supplementary material for the manuscript Racial Bias in AI-Generated Images, which has been submitted to a peer-reviewed journal. This dataset/paper examined the image-to-image generation accuracy (i.e., whether the original race and gender of a person’s image were replicated in the new AI-generated image) of a Chinese AI-powered image generator. We examined whether the image-to-image generation models preserved the racial and gender categories of the original photos of White, Black and East Asian people (N = 1260) in three different racial photo contexts: a single person, two people of the same race, and two people of different races.

  15. Bias Detection Platform Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 6, 2025
    Cite
    Growth Market Reports (2025). Bias Detection Platform Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/bias-detection-platform-market
    Available download formats: pptx, pdf, csv
    Dataset updated
    Oct 6, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Bias Detection Platform Market Outlook

    According to our latest research, the global Bias Detection Platform market size reached USD 1.42 billion in 2024, reflecting a surge in demand for advanced, ethical, and transparent decision-making tools across industries. The market is expected to grow at a CAGR of 17.8% during the forecast period, reaching a projected value of USD 6.13 billion by 2033. This robust growth is primarily driven by the increasing adoption of artificial intelligence (AI) and machine learning (ML) technologies, which has highlighted the urgent need for solutions that can identify and mitigate bias in automated systems and data-driven processes. As organizations worldwide strive for fairness, compliance, and inclusivity, bias detection platforms are becoming a cornerstone of responsible digital transformation.

    One of the key growth factors for the Bias Detection Platform market is the rapid integration of AI and ML algorithms into critical business operations. As enterprises leverage these technologies to automate decision-making in areas such as recruitment, financial services, and healthcare, the risk of unintentional bias in algorithms has become a significant concern. Regulatory bodies and industry watchdogs are increasingly mandating transparency and accountability in automated systems, prompting organizations to invest in bias detection platforms to ensure compliance and mitigate reputational risks. Furthermore, the proliferation of big data analytics has amplified the need for robust tools that can scrutinize massive datasets for hidden biases, ensuring that business insights and actions are both accurate and equitable.

    Another major driver fueling market growth is the heightened focus on diversity, equity, and inclusion (DEI) initiatives across both public and private sectors. Organizations are under mounting pressure from stakeholders, including customers, investors, and employees, to demonstrate their commitment to fair and unbiased practices. Bias detection platforms are being deployed to audit hiring processes, marketing campaigns, lending decisions, and other critical workflows, helping organizations identify and rectify discriminatory patterns. The increasing availability of advanced software and services that can seamlessly integrate with existing IT infrastructure is further accelerating adoption, making bias detection accessible to enterprises of all sizes.

    The evolution of regulatory frameworks and ethical standards around AI and data usage is also acting as a catalyst for market expansion. Governments and international bodies are introducing stringent guidelines to govern the ethical use of AI, with a particular emphasis on eliminating bias and ensuring fairness. This regulatory momentum is compelling organizations to adopt proactive measures, including the implementation of bias detection platforms, to avoid legal liabilities and maintain public trust. Additionally, the growing awareness of the social and economic consequences of biased systems is encouraging a broader range of industries to prioritize bias detection as a core component of their risk management and governance strategies.

    From a regional perspective, North America continues to dominate the Bias Detection Platform market, accounting for the largest share of global revenue in 2024. This leadership is attributed to the region’s early adoption of AI technologies, strong regulatory oversight, and a high concentration of technology-driven enterprises. Europe follows closely, benefiting from progressive data protection laws and a robust emphasis on ethical AI. Meanwhile, the Asia Pacific region is emerging as a high-growth market, driven by rapid digitalization, expanding IT infrastructure, and increasing awareness of bias-related challenges in diverse sectors. Latin America and the Middle East & Africa are also witnessing steady growth, supported by rising investments in digital transformation and regulatory advancements.

    Component Analysis

    The Bias Detection Platform market is

  16. Data from: AI and Bias

    • dataverse-training.tdl.org
    jpeg
    Updated Aug 18, 2025
    Cite
    Jane Scott (2025). AI and Bias [Dataset]. http://doi.org/10.33536/FK2/4LGVFR
    Available download formats: jpeg (354152)
    Dataset updated
    Aug 18, 2025
    Dataset provided by
    Texas Data Repository ***TRAINING*** Dataverse
    Authors
    Jane Scott
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    This is a test entry and I am making an edit to change the version.

  17. Data from: Prolific observer bias in the life sciences: why we need blind...

    • figshare.mq.edu.au
    • datasetcatalog.nlm.nih.gov
    • +4 more
    bin
    Updated Jun 14, 2023
    + more versions
    Cite
    Luke Holman; Megan L. Head; Robert Lanfear; Michael D. Jennions (2023). Data from: Prolific observer bias in the life sciences: why we need blind data recording [Dataset]. http://doi.org/10.5061/dryad.hn40n
    Available download formats: bin
    Dataset updated
    Jun 14, 2023
    Dataset provided by
    Macquarie University
    Authors
    Luke Holman; Megan L. Head; Robert Lanfear; Michael D. Jennions
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Observer bias and other “experimenter effects” occur when researchers’ expectations influence study outcome. These biases are strongest when researchers expect a particular result, are measuring subjective variables, and have an incentive to produce data that confirm predictions. To minimize bias, it is good practice to work “blind,” meaning that experimenters are unaware of the identity or treatment group of their subjects while conducting research. Here, using text mining and a literature review, we find evidence that blind protocols are uncommon in the life sciences and that nonblind studies tend to report higher effect sizes and more significant p-values. We discuss methods to minimize bias and urge researchers, editors, and peer reviewers to keep blind protocols in mind.

    Usage Notes

    Evolution literature review data
    Exact p value dataset
    journal_categories
    p values data 24 Sept
    Proportion of significant p values per paper
    R script to filter and classify the p value data
    Quiz answers - guessing effect size from abstracts: The answers provided by the 9 evolutionary biologists to a quiz we designed, which aimed to test whether trained specialists are able to infer the relative size/direction of effect size from a paper's title and abstract.
    readme: Description of the contents of all the other files in this Dryad submission.
    R script to statistically analyse the p value data: R script detailing the statistical analyses we performed on the p value datasets.
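
    The text-mining step described above depends on extracting reported p-values from article text. A minimal illustrative extractor (the regular expression and the sample sentence are our assumptions for illustration, not the study's own R code):

```python
import re

# Matches patterns like "p = 0.03", "P < .001", "p>0.05" (an illustrative
# pattern; published extraction pipelines handle many more reporting styles).
P_VALUE_RE = re.compile(r"\bp\s*([<>=])\s*(0?\.\d+)", re.IGNORECASE)

def extract_p_values(text):
    """Return (comparator, value) pairs for p-values reported in the text."""
    return [(op, float(val)) for op, val in P_VALUE_RE.findall(text)]

sample = "The treatment effect was significant (p = 0.031), unlike the control (P < .001 vs. p > 0.05)."
print(extract_p_values(sample))  # [('=', 0.031), ('<', 0.001), ('>', 0.05)]
```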

  18. Replication data for: Testing for Publication Bias in Political Science

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Mar 4, 2010
    Alan Gerber; Donald Green; David Nickerson (2010). Replication data for: Testing for Publication Bias in Political Science [Dataset]. http://doi.org/10.7910/DVN/DQC9KV
    Explore at:
    Croissant
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Mar 4, 2010
    Dataset provided by
    Harvard Dataverse
    Authors
    Alan Gerber; Donald Green; David Nickerson
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    If the publication decisions of journals are a function of the statistical significance of research findings, the published literature may suffer from “publication bias.” This paper describes a method for detecting publication bias. We point out that to achieve statistical significance, the effect size must be larger in small samples. If publications tend to be biased against statistically insignificant results, we should observe that the effect size diminishes as sample sizes increase. This proposition is tested and confirmed using the experimental literature on voter mobilization.
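    The statistical point in this description, that the smallest effect capable of reaching significance shrinks as samples grow, can be sketched numerically. This is an illustrative large-sample approximation for a two-sample comparison at two-sided p < .05, not the paper's actual analysis:

```python
import math

def min_significant_effect(n_per_group: int, z: float = 1.96) -> float:
    """Smallest standardized mean difference (Cohen's d) reaching
    two-sided p < .05 for two groups of size n_per_group, using the
    large-sample approximation se(d) ~ sqrt(2 / n_per_group)."""
    return z * math.sqrt(2.0 / n_per_group)

for n in (20, 50, 200, 1000):
    print(f"n = {n:4d} per group -> smallest significant |d| ~ "
          f"{min_significant_effect(n):.3f}")
```

    If journals publish mainly significant results, small studies can enter the literature only when their effects are large, so a downward slope of reported effect size against sample size is the signature the dataset is built to test for.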

  19. cleaned tox bias

    • kaggle.com
    zip
    Updated Apr 9, 2019
    Ilya Evenbach (2019). cleaned tox bias [Dataset]. https://www.kaggle.com/iluxave/cleaned-tox-bias
    Explore at:
    zip (225747122 bytes)
    Available download formats
    Dataset updated
    Apr 9, 2019
    Authors
    Ilya Evenbach
    Description

    Dataset

    This dataset was created by Ilya Evenbach

    Contents

  20. G

    Appraisal Bias Detection Analytics Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 4, 2025
    Growth Market Reports (2025). Appraisal Bias Detection Analytics Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/appraisal-bias-detection-analytics-market
    Explore at:
    csv, pdf, pptx
    Available download formats
    Dataset updated
    Oct 4, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Appraisal Bias Detection Analytics Market Outlook



    According to our latest research, the global Appraisal Bias Detection Analytics market size in 2024 stands at USD 1.12 billion, with the market expected to grow at a robust CAGR of 15.7% from 2025 to 2033. This significant growth trajectory will see the market reach approximately USD 4.17 billion by 2033. The rising demand for transparency and fairness in property valuation, combined with increased regulatory scrutiny and the adoption of advanced analytics technologies, are the primary growth drivers for this market.




    One of the most substantial growth factors for the Appraisal Bias Detection Analytics market is the mounting pressure from regulatory authorities to address systemic biases in property and mortgage appraisals. Governments and regulatory bodies, particularly in North America and Europe, have enacted stringent guidelines to ensure equitable lending and valuation practices. These mandates have compelled banks, financial institutions, and real estate agencies to invest in robust analytics solutions capable of identifying and mitigating appraisal bias. Additionally, the proliferation of high-profile lawsuits and investigations into discriminatory valuation practices has further accelerated the adoption of appraisal bias detection analytics, as organizations seek to protect their reputations and avoid costly penalties.




    Another key factor propelling market expansion is the rapid advancement in artificial intelligence (AI), machine learning (ML), and data analytics technologies. Modern appraisal bias detection analytics platforms leverage these technologies to analyze vast datasets, uncover patterns of bias, and provide actionable insights. These platforms are increasingly being integrated with existing property valuation and mortgage lending systems, streamlining workflow and enabling real-time bias detection. The growing availability of cloud-based solutions has also democratized access to advanced analytics, making it feasible for small and medium-sized enterprises (SMEs) to implement bias detection tools without significant upfront investment in IT infrastructure. This technological evolution is expected to continue fueling market growth throughout the forecast period.




    Moreover, the heightened focus on diversity, equity, and inclusion (DEI) initiatives across industries has played a pivotal role in driving the adoption of appraisal bias detection analytics. Organizations are under increasing pressure from stakeholders, including investors, customers, and advocacy groups, to demonstrate their commitment to fair and unbiased practices. By implementing sophisticated analytics solutions, companies can proactively identify and address potential sources of bias, thereby fostering trust and credibility among their stakeholders. This trend is particularly pronounced in the real estate and banking sectors, where transparency and fairness are critical to maintaining customer confidence and regulatory compliance.




    From a regional perspective, North America remains the dominant market for appraisal bias detection analytics, accounting for the largest share of global revenues in 2024. The region's leadership is underpinned by a combination of stringent regulatory frameworks, high adoption rates of advanced technologies, and a mature real estate and mortgage lending ecosystem. Europe follows closely, driven by similar regulatory imperatives and a strong focus on social equity. Meanwhile, the Asia Pacific region is emerging as a high-growth market, fueled by rapid urbanization, expanding financial services, and increasing awareness of the need for unbiased appraisal practices. Latin America and the Middle East & Africa are also witnessing growing adoption, albeit from a smaller base, as regulatory standards evolve and digital transformation accelerates across the real estate and financial sectors.





    Component Analysis



    The Appraisal Bias Detection Analytics market is segmented by component into software and services.
