100+ datasets found
  1. Replication data for: Selection Bias in Comparative Research: The Case of...

    • dataverse.harvard.edu
    Updated Mar 8, 2010
    Cite
    Simon Hug (2010). Replication data for: Selection Bias in Comparative Research: The Case of Incomplete Data Sets [Dataset]. http://doi.org/10.7910/DVN/QO28VG
    Available in Croissant, a format for machine-learning datasets (see mlcommons.org/croissant).
    Dataset updated
    Mar 8, 2010
    Dataset provided by
    Harvard Dataverse
    Authors
    Simon Hug
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Selection bias is an important but often neglected problem in comparative research. While comparative case studies pay some attention to this problem, this is less the case in broader cross-national studies, where the problem may arise through the way the data used are generated. The article discusses three examples: studies of the success of newly formed political parties, research on protest events, and recent work on ethnic conflict. In all cases the data at hand are likely to be afflicted by selection bias. Failing to take this problem into consideration leads to serious biases in the estimation of even simple relationships. Empirical examples illustrate a possible solution (a variation of a Tobit model) to the problems in these cases. The article also discusses results of Monte Carlo simulations, illustrating under what conditions the proposed estimation procedures lead to improved results.
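    The article's specific "variation of a Tobit model" is not reproduced in this listing, but the basic idea of a censored-outcome likelihood can be sketched as follows (a minimal illustration only, assuming left-censoring at zero and simulated data; all names are hypothetical):

```python
import numpy as np
from scipy import optimize, stats

def tobit_negloglik(params, X, y, cens=0.0):
    """Negative log-likelihood of a standard Tobit model, left-censored at `cens`."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)  # parameterize sigma on the log scale to keep it positive
    xb = X @ beta
    censored = y <= cens
    ll = np.where(
        censored,
        stats.norm.logcdf((cens - xb) / sigma),           # P(latent outcome below cutoff)
        stats.norm.logpdf((y - xb) / sigma) - log_sigma,  # density of observed outcomes
    )
    return -ll.sum()

# Simulated data: latent outcome y* = 0.5 + 1.0*x + eps, observed only as max(y*, 0)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = np.maximum(X @ np.array([0.5, 1.0]) + rng.normal(size=500), 0.0)

res = optimize.minimize(tobit_negloglik, x0=np.zeros(3), args=(X, y))
beta_hat = res.x[:-1]  # should land near the true coefficients (0.5, 1.0)
```

    Running OLS directly on the censored outcome would bias the slope toward zero; maximizing the censored likelihood instead recovers the latent relationship, which is the kind of correction the article evaluates in its Monte Carlo simulations.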

  2. Data from: Qbias – A Dataset on Media Bias in Search Queries and Query...

    • data.niaid.nih.gov
    Updated Mar 1, 2023
    Cite
    Haak, Fabian; Schaer, Philipp (2023). Qbias – A Dataset on Media Bias in Search Queries and Query Suggestions [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7682914
    Dataset updated
    Mar 1, 2023
    Dataset provided by
    Technische Hochschule Köln
    Authors
    Haak, Fabian; Schaer, Philipp
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We present Qbias, two novel datasets that promote the investigation of bias in online news search as described in

    Fabian Haak and Philipp Schaer. 2023. Qbias – A Dataset on Media Bias in Search Queries and Query Suggestions. In Proceedings of the ACM Web Science Conference (WebSci’23). ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3578503.3583628.

    Dataset 1: AllSides Balanced News Dataset (allsides_balanced_news_headlines-texts.csv)

    The dataset contains 21,747 news articles collected from AllSides balanced news headline roundups in November 2022, as presented in our publication. Each AllSides balanced news roundup features three expert-selected U.S. news articles from sources of different political views (left, right, center), often exhibiting spin, slant, and other forms of non-neutral reporting on political news. All articles are tagged with a bias label (left, right, or neutral) by four expert annotators based on the expressed political partisanship. The AllSides balanced news feature aims to offer multiple political perspectives on important news stories, educate users about bias, and provide multiple viewpoints. The collected data further includes headlines, dates, news texts, topic tags (e.g., "Republican party", "coronavirus", "federal jobs"), and the publishing news outlet. We also include AllSides' neutral description of the topic of the articles. Overall, the dataset contains 10,273 articles tagged as left, 7,222 as right, and 4,252 as center.

    To provide easier access to the most recent and complete version of the dataset for future research, we provide a scraping tool and a regularly updated version of the dataset at https://github.com/irgroup/Qbias. The repository also contains regularly updated more recent versions of the dataset with additional tags (such as the URL to the article). We chose to publish the version used for fine-tuning the models on Zenodo to enable the reproduction of the results of our study.

    Dataset 2: Search Query Suggestions (suggestions.csv)

    The second dataset we provide consists of 671,669 search query suggestions for root queries based on tags of the AllSides balanced news dataset. We collected search query suggestions from Google and Bing for the 1,431 topic tags that have been used for tagging AllSides news at least five times, approximately half of the total number of topics. The topic tags include names, a wide range of political terms, agendas, and topics (e.g., "communism", "libertarian party", "same-sex marriage"), cultural and religious terms (e.g., "Ramadan", "pope Francis"), locations, and other news-relevant terms. On average, the dataset contains 469 search queries for each topic. In total, 318,185 suggestions were retrieved from Google and 353,484 from Bing.

    The file contains a "root_term" column based on the AllSides topic tags. The "query_input" column contains the search term submitted to the search engine ("search_engine"). "query_suggestion" and "rank" represent the search query suggestions and their respective positions as returned by the search engines at the given time of search ("datetime"). The location of the US server from which we scraped the data is saved in "location".

    We retrieved ten search query suggestions provided by the Google and Bing search autocomplete systems for the input of each of these root queries, without performing a search. Furthermore, we extended the root queries by the letters a to z (e.g., "democrats" (root term) >> "democrats a" (query input) >> "democrats and recession" (query suggestion)) to simulate a user's input during information search and generate a total of up to 270 query suggestions per topic and search engine. The dataset we provide contains columns for root term, query input, and query suggestion for each suggested query. The location from which the search is performed is the location of the Google servers running Colab, in our case Iowa in the United States of America, which is added to the dataset.
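    The root-query extension described above can be sketched as follows (the function name is hypothetical; per root term this yields 27 query inputs, which at up to 10 suggestions each gives the stated maximum of 270 suggestions per topic and search engine):

```python
import string

def expand_root_query(root_term: str) -> list[str]:
    """Return the root query plus the root extended by each letter a-z,
    simulating a user's keystrokes during information search."""
    return [root_term] + [f"{root_term} {letter}" for letter in string.ascii_lowercase]

inputs = expand_root_query("democrats")
# 27 inputs: "democrats", "democrats a", ..., "democrats z"
```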

    AllSides Scraper

    At https://github.com/irgroup/Qbias, we provide a scraping tool that allows for the automatic retrieval of all available articles from the AllSides balanced news headlines.

    We want to provide an easy means of retrieving the news and all corresponding information. For many tasks it is relevant to have the most recent documents available. We therefore provide this Python-based scraper, which scrapes all available AllSides news articles and gathers the available information. By providing the scraper we facilitate access to a recent version of the dataset for other researchers.

  3. Data from: Data: Heuristics and Biases in Home Care Package Resource...

    • researchdata.edu.au
    Updated Apr 1, 2025
    Cite
    Professor Tracy Comans (2025). Data: Heuristics and Biases in Home Care Package Resource Allocation [Dataset]. http://doi.org/10.48610/B2B5DE7
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    The University of Queensland
    Authors
    Professor Tracy Comans
    License

    http://guides.library.uq.edu.au/deposit_your_data/terms_and_conditions

    Description

    This dataset contains anonymised experiment data downloaded from a survey instrument. The experiment was designed to assess framing bias; data were collected via an online survey. The survey was designed in three parts: information and consent, demographic questions, and case study vignettes. Demographic questions were identical in both forms of the survey. For the vignettes, respondents were randomised to one of two frames, with frame one presented from a medical assessment perspective (ACAT) and frame two presented from a service provider perspective. There are four vignettes detailing real-world choices in home-care packages, changing only the services and equipment suggested by either ACAT assessors or service providers (treatments).

  4. Bias Detection As A Service Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). Bias Detection As A Service Market Research Report 2033 [Dataset]. https://dataintelo.com/report/bias-detection-as-a-service-market
    Available download formats: pptx, pdf, csv
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Bias Detection as a Service Market Outlook



    According to our latest research, the global Bias Detection as a Service market size reached USD 1.17 billion in 2024, propelled by the increasing demand for fairness and transparency in automated decision-making systems. The market is expected to grow at a robust CAGR of 25.6% from 2025 to 2033, reaching a forecasted size of USD 9.04 billion by 2033. This accelerated expansion is driven by growing regulatory scrutiny, heightened awareness of ethical AI, and the proliferation of AI and machine learning technologies across industries.




    One of the primary growth factors for the Bias Detection as a Service market is the exponential rise in AI adoption across critical sectors such as finance, healthcare, and government. As organizations increasingly rely on automated systems for decision-making, the risk of embedded bias in algorithms has become a pressing concern. Regulatory bodies worldwide are enacting stricter guidelines to ensure algorithmic fairness, pushing enterprises to seek advanced bias detection solutions. The need for proactive bias identification not only helps organizations comply with regulations but also safeguards their reputation and fosters consumer trust, further fueling market expansion.




    Technological advancements in machine learning and natural language processing are significantly enhancing the capabilities of bias detection platforms. Modern solutions leverage sophisticated analytics to identify, quantify, and mitigate both explicit and implicit biases in data and models. The integration of explainable AI (XAI) features is enabling stakeholders to understand and address the root causes of bias, which is especially critical in high-stakes applications like healthcare diagnostics and financial underwriting. Additionally, the growing ecosystem of cloud-based AI services is making bias detection tools more accessible to small and medium enterprises, democratizing their adoption and driving overall market growth.




    Another vital driver is the increasing public and stakeholder demand for ethical AI. High-profile incidents involving biased AI systems have drawn attention to the societal impact of algorithmic decisions, prompting organizations to prioritize fairness as a core value. This shift is evident in sectors such as recruitment, lending, and law enforcement, where biased outcomes can have severe consequences. As a result, organizations are investing in Bias Detection as a Service solutions not only to mitigate risks but also to demonstrate their commitment to responsible AI practices. This trend is expected to intensify as AI systems become more pervasive in everyday life.




    From a regional perspective, North America currently dominates the Bias Detection as a Service market, accounting for over 40% of the global revenue in 2024. This leadership is attributed to the region’s early adoption of AI technologies and a strong regulatory environment emphasizing fairness and accountability. Europe follows closely, with significant investments in ethical AI frameworks and compliance with GDPR-related mandates. Meanwhile, the Asia Pacific region is emerging as a high-growth market, fueled by rapid digital transformation, expanding AI research capabilities, and increasing government initiatives to address algorithmic bias. Latin America and the Middle East & Africa are also witnessing steady adoption, albeit at a slower pace due to infrastructural and regulatory challenges.



    Component Analysis



    The Bias Detection as a Service market is segmented by component into software and services, each playing a pivotal role in enabling organizations to address algorithmic bias. Software solutions form the backbone of the market, offering automated tools that integrate seamlessly with existing data pipelines and AI workflows. These platforms leverage advanced algorithms to scan datasets and models for signs of bias, providing actionable insights and recommendations for remediation. The software segment is characterized by continuous innovation, with vendors introducing features such as real-time bias monitoring, customizable fairness metrics, and integration with explainable AI modules. The demand for scalable, user-friendly, and interoperable software solutions is particularly strong among large enterprises and regulated industries.




    On the services side, consulting, implementation, and managed service

  5. Sampling biases of population groups at the county level from 2018 to 2022:...

    • plos.figshare.com
    • figshare.com
    xls
    Updated Jan 19, 2024
    Cite
    Zhenlong Li; Huan Ning; Fengrui Jing; M. Naser Lessani (2024). Sampling biases of population groups at the county level from 2018 to 2022: Median [minimum, maximum] of all counties in the US. [Dataset]. http://doi.org/10.1371/journal.pone.0294430.t003
    Available download formats: xls
    Dataset updated
    Jan 19, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Zhenlong Li; Huan Ning; Fengrui Jing; M. Naser Lessani
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    United States
    Description

    Sampling biases of population groups at the county level from 2018 to 2022: Median [minimum, maximum] of all counties in the US.

  6. Bias Detection Platform Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 6, 2025
    Cite
    Growth Market Reports (2025). Bias Detection Platform Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/bias-detection-platform-market
    Available download formats: pptx, pdf, csv
    Dataset updated
    Oct 6, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Bias Detection Platform Market Outlook



    According to our latest research, the global Bias Detection Platform market size reached USD 1.42 billion in 2024, reflecting a surge in demand for advanced, ethical, and transparent decision-making tools across industries. The market is expected to grow at a CAGR of 17.8% during the forecast period, reaching a projected value of USD 6.13 billion by 2033. This robust growth is primarily driven by the increasing adoption of artificial intelligence (AI) and machine learning (ML) technologies, which has highlighted the urgent need for solutions that can identify and mitigate bias in automated systems and data-driven processes. As organizations worldwide strive for fairness, compliance, and inclusivity, bias detection platforms are becoming a cornerstone of responsible digital transformation.




    One of the key growth factors for the Bias Detection Platform market is the rapid integration of AI and ML algorithms into critical business operations. As enterprises leverage these technologies to automate decision-making in areas such as recruitment, financial services, and healthcare, the risk of unintentional bias in algorithms has become a significant concern. Regulatory bodies and industry watchdogs are increasingly mandating transparency and accountability in automated systems, prompting organizations to invest in bias detection platforms to ensure compliance and mitigate reputational risks. Furthermore, the proliferation of big data analytics has amplified the need for robust tools that can scrutinize massive datasets for hidden biases, ensuring that business insights and actions are both accurate and equitable.




    Another major driver fueling market growth is the heightened focus on diversity, equity, and inclusion (DEI) initiatives across both public and private sectors. Organizations are under mounting pressure from stakeholders, including customers, investors, and employees, to demonstrate their commitment to fair and unbiased practices. Bias detection platforms are being deployed to audit hiring processes, marketing campaigns, lending decisions, and other critical workflows, helping organizations identify and rectify discriminatory patterns. The increasing availability of advanced software and services that can seamlessly integrate with existing IT infrastructure is further accelerating adoption, making bias detection accessible to enterprises of all sizes.




    The evolution of regulatory frameworks and ethical standards around AI and data usage is also acting as a catalyst for market expansion. Governments and international bodies are introducing stringent guidelines to govern the ethical use of AI, with a particular emphasis on eliminating bias and ensuring fairness. This regulatory momentum is compelling organizations to adopt proactive measures, including the implementation of bias detection platforms, to avoid legal liabilities and maintain public trust. Additionally, the growing awareness of the social and economic consequences of biased systems is encouraging a broader range of industries to prioritize bias detection as a core component of their risk management and governance strategies.




    From a regional perspective, North America continues to dominate the Bias Detection Platform market, accounting for the largest share of global revenue in 2024. This leadership is attributed to the region’s early adoption of AI technologies, strong regulatory oversight, and a high concentration of technology-driven enterprises. Europe follows closely, benefiting from progressive data protection laws and a robust emphasis on ethical AI. Meanwhile, the Asia Pacific region is emerging as a high-growth market, driven by rapid digitalization, expanding IT infrastructure, and increasing awareness of bias-related challenges in diverse sectors. Latin America and the Middle East & Africa are also witnessing steady growth, supported by rising investments in digital transformation and regulatory advancements.





    Component Analysis



    The Bias Detection Platform market is

  7. Replication Data for: Reducing Political Bias in Political Science Estimates...

    • dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    Cite
    Zigerell, Lawrence (2023). Replication Data for: Reducing Political Bias in Political Science Estimates [Dataset]. http://doi.org/10.7910/DVN/PZLCJM
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Zigerell, Lawrence
    Description

    Political science researchers have flexibility in how to analyze data, how to report data, and whether to report on data. Review of examples of reporting flexibility from the race and sex discrimination literature illustrates how research design choices can influence estimates and inferences. This reporting flexibility—coupled with the political imbalance among political scientists—creates the potential for political bias in reported political science estimates, but this potential for political bias can be reduced or eliminated through preregistration and preacceptance, in which researchers commit to a research design before completing data collection. Removing the potential for reporting flexibility can raise the credibility of political science research.

  8. Data from: The Beauty Survey

    • data.europa.eu
    unknown
    Updated Nov 4, 2024
    Cite
    Zenodo (2024). The Beauty Survey [Dataset]. https://data.europa.eu/data/datasets/oai-zenodo-org-13836855?locale=it
    Available download formats: unknown (3847)
    Dataset updated
    Nov 4, 2024
    Dataset authored and provided by
    Zenodo (http://zenodo.org/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains the data collected during, and the code used to analyze, the Beauty Survey. A pre-print of the associated paper, including our analysis and findings, can be found at this link. Zip files have been uploaded to this repository to stay within the file limit enforced by Zenodo. To use our code and data, simply unzip these folders while maintaining the current directory structure. While the code here is complete and stands on its own, any new features added can be found in the GitHub repository associated with this project (link). AG and NO are supported by a nominal grant received at the ELLIS Unit Alicante Foundation from the Regional Government of Valencia in Spain (Convenio Singular signed with Generalitat Valenciana, Conselleria de Innovacion, Industria, Comercio y Turismo, Direccion General de Innovacion), along with grants from the European Union’s Horizon 2020 research and innovation programme - ELISE (grant agreement 951847) and ELIAS (grant agreement 101120237), and by grants from the Banc Sabadell Foundation and Intel Corporation. BL is partially supported by the European Union’s Horizon Europe research and innovation program under grant agreement No. 101120237 (ELIAS) and by the PNRR project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by NextGenerationEU.

  9. Data from: Approach-induced biases in human information sampling

    • data.niaid.nih.gov
    • zenodo.org
    • +1more
    zip
    Updated Jan 5, 2017
    Cite
    Laurence T. Hunt; Robb B. Rutledge; W. M. Nishantha Malalasekera; Steven W. Kennerley; Raymond J. Dolan (2017). Approach-induced biases in human information sampling [Dataset]. http://doi.org/10.5061/dryad.nb41c
    Available download formats: zip
    Dataset updated
    Jan 5, 2017
    Dataset provided by
    University College London
    Authors
    Laurence T. Hunt; Robb B. Rutledge; W. M. Nishantha Malalasekera; Steven W. Kennerley; Raymond J. Dolan
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Information sampling is often biased towards seeking evidence that confirms one’s prior beliefs. Despite such biases being a pervasive feature of human behavior, their underlying causes remain unclear. Many accounts of these biases appeal to limitations of human hypothesis testing and cognition, de facto evoking notions of bounded rationality, but neglect more basic aspects of behavioral control. Here, we investigated a potential role for Pavlovian approach in biasing which information humans choose to sample. We collected a large novel dataset from 32,445 human subjects who played a gambling task designed to measure the latent causes and extent of information-sampling biases, making over 3 million decisions. We identified three novel approach-related biases, formalized by comparing subject behavior to a dynamic programming model of optimal information gathering. These biases reflected the amount of information sampled ("positive evidence approach"), the selection of which information to sample ("sampling the favorite"), and the interaction between information sampling and subsequent choices ("rejecting unsampled options"). The prevalence of all three biases was related to a Pavlovian approach-avoid parameter quantified within an entirely independent economic decision task. Our large dataset also revealed that individual differences in the amount of information gathered are a stable trait across multiple gameplays and can be related to demographic measures, including age and educational attainment. As well as revealing limitations in cognitive processing, our findings suggest information-sampling biases reflect the expression of primitive, yet potentially ecologically adaptive, behavioral repertoires. One such behavior is sampling from options that will eventually be chosen, even when other sources of information are more pertinent for guiding future action.

  10. NewsUnravel Dataset

    • zenodo.org
    csv
    Updated Sep 14, 2023
    Cite
    anonymous; anonymous (2023). NewsUnravel Dataset [Dataset]. http://doi.org/10.5281/zenodo.8344882
    Available download formats: csv
    Dataset updated
    Sep 14, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    anonymous; anonymous
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    About the Dataset
    Media bias is a multifaceted problem, leading to one-sided views and impacting decision-making. A way to address bias in news articles is to automatically detect and indicate it through machine-learning methods. However, such detection is limited due to the difficulty of obtaining reliable training data. To facilitate the data-gathering process, we introduce NewsUnravel, a news-reading web application leveraging an initially tested feedback mechanism to collect reader feedback on machine-generated bias highlights within news articles. Our approach augments dataset quality by significantly increasing inter-annotator agreement by 26.31% and improving classifier performance by 2.49%. As the first human-in-the-loop application for media bias, NewsUnravel shows that a user-centric approach to media bias data collection can return reliable data while being scalable and evaluated as easy to use. NewsUnravel demonstrates that feedback mechanisms are a promising strategy to reduce data collection expenses, fluidly adapt to changes in language, and enhance evaluators' diversity.

    Description of the data files
    This repository contains the datasets for the anonymous NewsUnravel submission. The tables contain following data:

    NUDAdataset.csv: the NUDA dataset with 310 new sentences with bias labels
    Statistics.png: contains all Umami statistics for NewsUnravel's usage data
    Feedback.csv: holds the participantID of a single feedback entry with the sentence ID (contentId), the bias rating, and the provided reasons
    Content.csv: holds the participant ID of a rating with the sentence ID (contentId) of a rated sentence, the bias rating, and the reason, if given
    Article.csv: holds the article ID, title, source, article metadata, article topic, and bias amount in %
    Participant.csv: holds the participant IDs and data processing consent
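    A minimal sketch of how the feedback and rating tables might be joined on their shared sentence ID (illustrative in-memory rows only; the column names follow the descriptions above and should be checked against the actual CSV headers):

```python
import pandas as pd

# Illustrative stand-ins for Feedback.csv and Content.csv; contentId is the
# shared sentence ID that links a feedback entry to a rated sentence.
feedback = pd.DataFrame({
    "participantID": ["p1", "p2"],
    "contentId": [101, 102],
    "bias_rating": [1, 0],
    "reason": ["loaded wording", None],
})
content = pd.DataFrame({
    "participantID": ["p3", "p4"],
    "contentId": [101, 102],
    "bias_rating": [1, 1],
})

# Join reader feedback to sentence ratings via the shared contentId key;
# suffixes disambiguate the overlapping column names from the two tables.
merged = feedback.merge(content, on="contentId", suffixes=("_fb", "_ct"))
```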

  11. Data from: Publication bias in gastroenterological research – a...

    • catalog.data.gov
    • healthdata.gov
    • +1more
    Updated Sep 7, 2025
    Cite
    National Institutes of Health (2025). Publication bias in gastroenterological research – a retrospective cohort study based on abstracts submitted to a scientific meeting [Dataset]. https://catalog.data.gov/dataset/publication-bias-in-gastroenterological-research-a-retrospective-cohort-study-based-on-abs
    Dataset updated
    Sep 7, 2025
    Dataset provided by
    National Institutes of Health
    Description

    Background: The aim of this study was to examine the determinants of publication and whether publication bias occurred in gastroenterological research. Methods: A random sample of abstracts submitted to DDW, the major GI meeting (1992–1995), was evaluated. Publication status was determined by database searches, complemented by a mailed survey to abstract authors. Determinants of publication were examined by a Cox proportional hazards model and multiple logistic regression. Results: The sample included abstracts on 326 controlled clinical trials (CCT), 336 other clinical research reports (OCR), and 174 basic science studies (BSS). 392 abstracts (47%) were published as full papers. Acceptance for presentation at the meeting was a strong predictor of subsequent publication for all research types (overall, 54% vs. 34%, OR 2.3, 95% CI 1.7 to 3.1). In the multivariate analysis, multi-center status was found to predict publication (OR 2.8, 95% CI 1.6–4.9). There was no significant association between the direction of study results and subsequent publication. Studies were less likely to be published in high-impact journals if the results were not statistically significant (OR 0.5, 95% CI 0.3–0.6). The author survey identified lack of time or interest as the main reason for failure to publish. Conclusions: Abstracts selected for presentation at the DDW are more likely to be followed by full publications. The statistical significance of the study results was not a predictor of publication but influences the chances of high-impact publication.

  12. Replication Data for "Behavioral biases and the decision-making in...

    • dataone.org
    • dataverse.harvard.edu
    Updated Nov 12, 2023
    Cite
    Nobre, Fábio Chaves; Machado, Maria José de Camargo; Nobre, Liana Holanda Nepomuceno (2023). Replication Data for "Behavioral biases and the decision-making in entrepreneurs and managers", published by RAC - Revista de Administração Contemporânea [Dataset]. http://doi.org/10.7910/DVN/CBI0D8
    Dataset updated
    Nov 12, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Nobre, Fábio Chaves; Machado, Maria José de Camargo; Nobre, Liana Holanda Nepomuceno
    Description

    The text data used in the current study were collected through semi-structured interviews. The questions addressed what the participant's role in a decision was and what he or she did or did not do; how the participant regarded the outcomes and the decision-making process; and elements intended to bring out behavioral biases, both cognitive and emotional, in the decision-making process. Data analysis was supported by content analysis. The interviews were fully transcribed, and the text files were imported into the software NVivo® v. 11. The software was used for organizing, managing, and coding the data, as well as for generating maps that group the results for interpretation. The attached file therefore contains the text data, nodes, categorization, and maps used by the authors.

  13.

    Worst Case Resistance Testing: A Nonresponse Bias Solution for Today’s...

    • openicpsr.org
    delimited
    Updated May 19, 2024
    + more versions
    Cite
    Stephen L. France; Frank Adams; Myles Landers (2024). Worst Case Resistance Testing: A Nonresponse Bias Solution for Today’s Survey Research Realities [Dataset]. http://doi.org/10.3886/E203261V1
    Explore at:
    delimitedAvailable download formats
    Dataset updated
    May 19, 2024
    Dataset provided by
    Mississippi State University
    Authors
    Stephen L. France; Frank Adams; Myles Landers
    License

    https://opensource.org/licenses/GPL-3.0

    Description

    The dataset contains data gathered from a Qualtrics panel sample involving a previous retail shopping experience. It was gathered to help test and validate methods for dealing with nonresponse bias in the following paper: France, S. L., Adams, F. G., & Landers, V. M. (2024). Worst Case Resistance Testing: A Nonresponse Bias Solution for Today's Survey Research Realities, forthcoming in Survey Research Methods. The rationale for using this sample was to gather response data from a well-known and validated set of instruments. The questionnaire adapts instruments developed in a previous paper: as noted in the paper, "as Szymanski & Henard (2001) have over 3,000 citations, and pose relatively simple questions, they were judged as liable to provide stable results, and unlikely to represent confounding factors due to their complexity." Szymanski, D. M., & Henard, D. H. (2001). Customer satisfaction: A meta-analysis of the empirical evidence. Journal of the Academy of Marketing Science, 29(1), 16-35.
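The worst-case idea behind this kind of resistance test can be sketched in a few lines (illustrative only, not the authors' actual procedure; the function name and toy ratings are invented here): impute every nonrespondent at the least favorable scale point and check whether the substantive conclusion survives.

```python
# Hedged sketch of worst-case resistance testing for nonresponse bias:
# impute every nonrespondent at the most unfavorable scale point, then
# recompute the statistic of interest. (Illustrative; not the paper's exact method.)
def worst_case_mean(responses, n_nonrespondents, worst_value):
    """Mean if all nonrespondents had given the worst-case answer."""
    total = sum(responses) + n_nonrespondents * worst_value
    return total / (len(responses) + n_nonrespondents)

observed = [5, 4, 5, 3, 4]              # hypothetical 1-5 satisfaction ratings
print(sum(observed) / len(observed))    # 4.2 observed mean
print(worst_case_mean(observed, 5, 1))  # 2.6 if 5 nonrespondents all rated 1
```

If the conclusion (e.g., "mean satisfaction exceeds the midpoint") flips under this bound, it is not resistant to nonresponse.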

  14.

    Bias Mitigation Tools AI Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). Bias Mitigation Tools AI Market Research Report 2033 [Dataset]. https://dataintelo.com/report/bias-mitigation-tools-ai-market
    Explore at:
    csv, pptx, pdfAvailable download formats
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Bias Mitigation Tools AI Market Outlook



    According to our latest research, the global market size for Bias Mitigation Tools AI reached USD 412 million in 2024, reflecting the sector’s rapid expansion as organizations prioritize ethical and transparent artificial intelligence. The market is projected to grow at a robust CAGR of 28.7% from 2025 to 2033, reaching an impressive USD 3.86 billion by 2033. This exponential growth is primarily driven by increasing regulatory pressure, rising demand for ethical AI solutions, and the need for organizations to ensure fairness and transparency in automated decision-making.




    The primary growth driver for the Bias Mitigation Tools AI market is the intensifying global focus on responsible AI. As artificial intelligence becomes deeply integrated into critical systems across sectors like healthcare, finance, and government, the risk of algorithmic bias causing unfair or discriminatory outcomes has come under intense scrutiny. Regulatory bodies worldwide are introducing strict guidelines and compliance frameworks, such as the European Union’s AI Act and the U.S. Algorithmic Accountability Act, compelling organizations to adopt bias mitigation tools. These tools leverage advanced AI and machine learning techniques to detect, measure, and correct biases in data and models, ensuring that AI-driven decisions are equitable and transparent. The increasing frequency of high-profile incidents involving biased AI outcomes has further propelled the demand for robust bias mitigation solutions in both public and private sectors.




    Another significant factor bolstering the market’s growth is the proliferation of AI applications across diverse industries. From personalized healthcare diagnostics to automated loan approvals in finance, and from predictive policing in government to adaptive learning in education, AI systems are now integral to decision-making processes. However, the complexity and opacity of these systems often make them susceptible to unintended biases, which can undermine organizational reputation and lead to costly legal repercussions. As a result, enterprises are investing heavily in bias mitigation tools AI to safeguard against these risks. Furthermore, advancements in explainable AI (XAI) and fairness-aware machine learning are enhancing the accuracy and usability of bias detection and correction solutions, making them more accessible to organizations of all sizes.




    The growing emphasis on corporate social responsibility (CSR) and diversity, equity, and inclusion (DEI) initiatives is also fueling market growth. Stakeholders, including consumers, investors, and employees, are demanding greater accountability and fairness in AI-driven processes. Organizations recognize that deploying bias mitigation tools AI not only aligns with ethical imperatives but also enhances brand value and stakeholder trust. This trend is especially pronounced in sectors like retail and e-commerce, where customer-facing algorithms directly impact user experience and satisfaction. As a result, the integration of bias mitigation tools is becoming a strategic priority for enterprises seeking to maintain competitive advantage and comply with evolving societal expectations.




    Regionally, North America remains the largest market for bias mitigation tools AI, accounting for over 42% of the global market share in 2024. This dominance is attributed to the region’s advanced AI ecosystem, proactive regulatory landscape, and high adoption rates among large enterprises. Europe follows closely, driven by stringent data protection and AI ethics regulations. Meanwhile, Asia Pacific is emerging as the fastest-growing region, propelled by rapid digital transformation, expanding AI investments, and increasing awareness of ethical AI practices. Latin America and the Middle East & Africa are also witnessing steady growth, albeit from a smaller base, as governments and enterprises in these regions begin to prioritize responsible AI deployment.



    Component Analysis



    The Bias Mitigation Tools AI market is segmented by component into software and services, each playing a pivotal role in the adoption and effectiveness of bias mitigation strategies. The software segment dominates the market, accounting for nearly 68% of the total revenue in 2024. This segment comprises standalone bias detection and correction platforms, AI model auditing tools, and integrated modules within broader AI development su

  15.

    Marketing Bias data

    • berd-platform.de
    txt
    Updated Jul 31, 2025
    + more versions
    Cite
    Mengting Wan; Jianmo Ni; Rishabh Misra; Julian McAuley (2025). Marketing Bias data [Dataset]. http://doi.org/10.82939/jp1cd-gne79
    Explore at:
    txtAvailable download formats
    Dataset updated
    Jul 31, 2025
    Dataset provided by
    ACM
    Authors
    Mengting Wan; Jianmo Ni; Rishabh Misra; Julian McAuley
    License

    https://github.com/MengtingWan/marketBias/blob/master/LICENSE

    Description

    These datasets contain attributes of products sold on ModCloth and in the Electronics category on Amazon that may be sources of bias in recommendations (in particular, attributes describing how the products are marketed). The data also include user/item interactions for recommendation: 99,893 reviews for ModCloth and 1,292,954 reviews for the Electronics category of Amazon.

  16.

    Replication Data for: Words That Stick Predicting Decision Making and...

    • search.dataone.org
    • data.mendeley.com
    Updated Nov 8, 2023
    Cite
    Dvir, Nimrod (2023). Replication Data for: Words That Stick Predicting Decision Making and Synonym Engagement Using Cognitive Biases and Computational Linguistics [Dataset]. http://doi.org/10.7910/DVN/J5LTYE
    Explore at:
    Dataset updated
    Nov 8, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Dvir, Nimrod
    Description

    This research utilizes cognitive neuroscience and information systems research to predict user engagement and decision-making in digital platforms. By applying Natural Language Processing (NLP) techniques and cognitive bias theories, we investigate user interactions with synonyms in digital content. Our approach incorporates four cognitive biases - representativeness, ease-of-use (processing fluency), affect-biased attention, and distribution/availability (R.E.A.D) - into a comprehensive model. The model's predictive capacity was evaluated using a large user survey, revealing that synonyms representative of core concepts, easy to process, emotionally resonant, and readily available, fostered increased user engagement. Importantly, our research provides a novel perspective on human-computer interaction, digital habits, and decision-making processes. Findings underscore the potential of cognitive biases as powerful predictors of user engagement, emphasizing their role in effective digital content design across education, marketing, and beyond.

  17. Emails to authors and responses and overall unclear risk of bias (data sets...

    • figshare.com
    xlsx
    Updated Feb 18, 2016
    Cite
    Kieran Shah (2016). Emails to authors and responses and overall unclear risk of bias (data sets 3 and 4) [Dataset]. http://doi.org/10.6084/m9.figshare.2324599.v1
    Explore at:
    xlsxAvailable download formats
    Dataset updated
    Feb 18, 2016
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Kieran Shah
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Emails to authors and responses for data sets 3 and 4. Assigned risk of bias in data sets 3 and 4. Articles initially assigned as unclear risk also determined to be low, high, or further unclear risk based on author theme responses and group consensus for data sets 3 and 4.

  18. Data from: Questioning Bias: Validating a Bias Crime Assessment Tool in...

    • catalog.data.gov
    • icpsr.umich.edu
    Updated Nov 14, 2025
    + more versions
    Cite
    National Institute of Justice (2025). Questioning Bias: Validating a Bias Crime Assessment Tool in California and New Jersey, 2016-2017 [Dataset]. https://catalog.data.gov/dataset/questioning-bias-validating-a-bias-crime-assessment-tool-in-california-and-new-jersey-2016-a062f
    Explore at:
    Dataset updated
    Nov 14, 2025
    Dataset provided by
    National Institute of Justice (http://nij.ojp.gov/)
    Area covered
    California, New Jersey
    Description

    These data are part of NACJD's Fast Track Release and are distributed as they were received from the data depositor. The files have been zipped by NACJD for release, but not checked or processed except for the removal of direct identifiers. Users should refer to the accompanying readme file for a brief description of the files available with this collection and consult the investigator(s) if further information is needed. This study investigates experiences surrounding hate and bias crimes and incidents, and the reasons and factors affecting reporting and under-reporting, among youth and adults in LGBT, immigrant, Hispanic, Black, and Muslim communities in New Jersey and Los Angeles County, California. The collection includes 1 SPSS data file (QB_FinalDataset-Revised.sav (n=1,326; 513 variables)). The collection also contains 24 qualitative data files of transcripts from focus groups and interviews with key informants, which are not included in this release.

  19.

    Bias Detection for AI Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 22, 2025
    Cite
    Growth Market Reports (2025). Bias Detection for AI Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/bias-detection-for-ai-market
    Explore at:
    pdf, csv, pptxAvailable download formats
    Dataset updated
    Aug 22, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Bias Detection for AI Market Outlook



    According to our latest research, the global Bias Detection for AI market size reached USD 1.27 billion in 2024, reflecting a rapidly maturing industry driven by mounting regulatory pressures and the demand for trustworthy AI systems. The market is projected to grow at a robust CAGR of 28.6% from 2025 to 2033, culminating in a forecasted market size of USD 11.16 billion by 2033. Growth in this sector is primarily fueled by the proliferation of AI applications across critical industries, increasing awareness of algorithmic fairness, and the escalating need for compliance with evolving global regulations.




    A significant growth factor for the Bias Detection for AI market is the rising adoption of artificial intelligence and machine learning across diverse industry verticals, including BFSI, healthcare, retail, and government. As enterprises leverage AI to automate decision-making processes, the risk of embedding and amplifying biases inherent in training data or model architectures has become a major concern. This has led to increased investments in bias detection solutions, as organizations strive to ensure ethical AI deployment, protect brand reputation, and avoid costly regulatory penalties. Furthermore, the growing sophistication of AI models, such as deep learning and generative AI, has heightened the complexity of bias identification, necessitating advanced detection tools and services that can operate at scale and in real time.




    Another key driver is the intensifying regulatory landscape surrounding AI ethics and accountability. Governments and regulatory bodies in North America, Europe, and Asia Pacific are introducing stringent guidelines mandating transparency, explainability, and fairness in AI systems. For example, the European Union’s AI Act and the United States’ Algorithmic Accountability Act are compelling organizations to implement robust bias detection frameworks as part of their compliance strategies. The threat of legal liabilities, coupled with the need to maintain consumer trust, is prompting enterprises to prioritize investment in bias detection technologies. This regulatory push is also fostering innovation among solution providers, resulting in a surge of new products and services tailored to specific industry requirements.




    The increasing recognition of the business value of ethical AI is further accelerating market growth. Enterprises are now viewing bias detection not merely as a compliance requirement, but as a critical enabler of competitive differentiation. By proactively addressing bias, organizations can unlock new customer segments, enhance user experience, and drive innovation in product development. The integration of bias detection tools into AI development pipelines is also streamlining model validation and governance, reducing time-to-market for AI solutions while ensuring alignment with ethical standards. As a result, bias detection is becoming an integral component of enterprise AI strategies, driving sustained demand for both software and services in this market.




    Regionally, North America is poised to maintain its dominance in the Bias Detection for AI market, owing to the presence of major technology vendors, proactive regulatory initiatives, and high AI adoption rates across industries. However, Asia Pacific is emerging as a high-growth region, fueled by rapid digital transformation, increasing regulatory scrutiny, and the expansion of AI research ecosystems in countries like China, Japan, and India. Europe, with its strong emphasis on data privacy and ethical AI, is also witnessing significant investments in bias detection solutions. The convergence of these regional dynamics is creating a vibrant global market landscape, characterized by diverse adoption patterns and evolving customer needs.





    Component Analysis



    The Bias Detection for AI market is segmented by component into software and services, each playing a pivotal role in addressing the multifaceted challenges of AI bias. The software segment acco

  20. Data from: Assessing and Correcting Neighborhood Socioeconomic Spatial...

    • zenodo.org
    bin
    Updated Jul 1, 2024
    Cite
    Álvaro Padilla-Pozo; Frederic Bartumeus; Tomás Montalvo; Isis Sanpera-Calbet; Andrea Valsecchi; John R.B. Palmer (2024). Assessing and Correcting Neighborhood Socioeconomic Spatial Sampling Biases in Citizen Science Mosquito Data Collection [Dataset]. http://doi.org/10.5281/zenodo.12605540
    Explore at:
    binAvailable download formats
    Dataset updated
    Jul 1, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Álvaro Padilla-Pozo; Frederic Bartumeus; Tomás Montalvo; Isis Sanpera-Calbet; Andrea Valsecchi; John R.B. Palmer
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Reporting data from the Mosquito Alert citizen science system, active catch basin surveillance, and mosquito trap surveillance used in "Assessing and Correcting Neighborhood Socioeconomic Spatial Sampling Biases in Citizen Science Mosquito Data Collection."

    The file named mosquito_alert_adult_bite_reports_Barcelona_2014_2023.Rds includes all adult mosquito and mosquito bite reports received from Barcelona Municipality from the start of the Mosquito Alert project in 2014 through the end of 2023. The file named mosquito_alert_validated_albopictus_reports_Barcelona_2014_23.Rds includes all expert-validated Ae. albopictus reports received from Barcelona Municipality during the same time period. The data are stored as RDS files and contain the following fields:

    • year - the year in which the report was made. Class = dbl.
    • date - the date on which the report was made. Class = date.
    • type - the report type, either adult mosquito ("adult") or mosquito breeding site ("site"). Class = chr.
    • lon - the longitude of the report location. Class = dbl.
    • lat - the latitude of the report location. Class = dbl.
    • validation_score - Entolab validation score. Either 1 (possible Ae. albopictus) or 2 (probable Ae. albopictus). This field is present only in the validated reports data.

    The file named active_catch_basin_drain_data.Rds includes information about all catch basin drains in Barcelona Municipality in which the Barcelona Public Health Agency (ASPB) detected mosquito activity as part of its continuous monitoring and control of mosquitoes from 2019 through 2023. The data is stored in an RDS file with the following fields:

    • any_reports - dummy variable indicating whether any adult mosquito or mosquito bite reports were sent through Mosquito Alert from within 200 m of the catch basin drain during the year in which the ASPB detected mosquito activity in the catch basin drain. Class = lgl.
    • se_expected - sampling effort for the 0.025 degree lon/lat sampling cell in which the catch basin drain lies during the year in which the ASPB detected mosquito activity in the drain. This value is taken from the SE_expected variable in the sampling_effort_daily_cellres_025.csv.gz file available at https://zenodo.org/records/12602985. Sampling effort is estimated as the expected number of participants sending at least one report from the cell during the day in question, given the number of participants recorded in the cell that day and the amount of time elapsed since each one began participating in the project. Class = dbl.
    • p_singlehh - proportion of single-member households in the population of the census tract in which the catch basin drain is located. Class = dbl.
    • mean_age - mean age of the population of the census tract in which the catch basin drain is located. Class = dbl.
    • mean_rent_consumption_unit - mean income per consumption unit in the census tract in which the catch basin drain is located. Class = dbl.
    • popd - population density of the census tract in which the catch basin drain is located. Class = dbl.
    • id_item - unique identifier given to the catch basin drain. Drain identifiers appear multiple times in the data when the ASPB detected activity in the drain in multiple years. Class = dbl.

    The file named trap_data.Rds includes information on the adult mosquito trap surveillance analyzed in this article. The data is stored in an RDS file with the following fields:

    • females - number of Ae. albopictus females found in the trap. Class = dbl.
    • trap_name - unique identifier for the trap. Class = chr.
    • trapping_effort - number of days from when the trap was set to when it was checked. Class = dbl.
    • date - date on which the trap was checked. Class = date.
    • mean_tm30 - mean temperature for the 30 days leading up to the date on which the trap was checked. Class = dbl.
    • mean_rent_consumption_unit - mean income per consumption unit for the census tract in which the trap was located. Class = dbl.
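Because trapping_effort (days from setting to checking the trap) varies by trap, raw female counts are comparable only after normalizing to a per-trap-night rate. A minimal sketch (field names follow the description above; the trap records are invented toy data):

```python
# Hedged sketch: normalize Ae. albopictus female counts by trapping effort
# so that traps deployed for different numbers of days can be compared.
def catch_rate(females: float, trapping_effort: float) -> float:
    """Ae. albopictus females caught per trap-night."""
    return females / trapping_effort

records = [  # hypothetical records using the fields described above
    {"trap_name": "T01", "females": 14, "trapping_effort": 7},
    {"trap_name": "T02", "females": 6, "trapping_effort": 14},
]
for r in records:
    print(r["trap_name"], catch_rate(r["females"], r["trapping_effort"]))
# T01 catches 2.0 females/trap-night; T02 roughly 0.43, despite the longer deployment
```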