100+ datasets found
  1. Evidence accumulation is biased by motivation: A computational account

    • plos.figshare.com
    docx
    Updated Jun 3, 2023
    Cite
    Filip Gesiarz; Donal Cahill; Tali Sharot (2023). Evidence accumulation is biased by motivation: A computational account [Dataset]. http://doi.org/10.1371/journal.pcbi.1007089
    Explore at:
    docx
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Filip Gesiarz; Donal Cahill; Tali Sharot
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    To make good judgments people gather information. An important problem an agent needs to solve is when to continue sampling data and when to stop gathering evidence. We examine whether and how the desire to hold a certain belief influences the amount of information participants require to form that belief. Participants completed a sequential sampling task in which they were incentivized to accurately judge whether they were in a desirable state, which was associated with greater rewards than losses, or an undesirable state, which was associated with greater losses than rewards. While one state was better than the other, participants had no control over which they were in, and to maximize rewards they had to maximize accuracy. Results show that participants’ judgments were biased towards believing they were in the desirable state. They required a smaller proportion of supporting evidence to reach that conclusion and ceased gathering samples earlier when reaching the desirable conclusion. The findings were replicated in an additional sample of participants. To examine how this behavior was generated we modeled the data using a drift-diffusion model. This enabled us to assess two potential mechanisms which could be underlying the behavior: (i) a valence-dependent response bias and/or (ii) a valence-dependent process bias. We found that a valence-dependent model, with both a response bias and a process bias, fit the data better than a range of other alternatives, including valence-independent models and models with only a response or process bias. Moreover, the valence-dependent model provided better out-of-sample prediction accuracy than the valence-independent model. Our results provide an account for how the motivation to hold a certain belief decreases the need for supporting evidence. The findings also highlight the advantage of incorporating valence into evidence accumulation models to better explain and predict behavior.
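
    The two mechanisms compared above can be illustrated with a small, self-contained simulation. This is a hedged sketch rather than the authors' model code: the parameter values are invented, the response bias is implemented as a starting point shifted toward the desirable boundary, and the process bias as a constant added to the drift rate.

```python
# Sketch of a two-boundary drift-diffusion model with a valence-dependent
# response bias (shifted start point) and process bias (drift offset).
# All parameter values are illustrative, not fitted to the dataset.
import numpy as np

def simulate_ddm(drift, start, threshold=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Return (choice, rt); choice is +1 (desirable boundary) or -1 (undesirable)."""
    rng = rng or np.random.default_rng()
    x, t = start, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= threshold else -1), t   # timeouts counted as the lower boundary

rng = np.random.default_rng(0)
evidence_drift = 0.8    # drift supplied by the true state
response_bias = 0.2     # start point shifted toward the desirable boundary
process_bias = 0.3      # valence-dependent boost to the drift rate

runs = [simulate_ddm(evidence_drift + process_bias, response_bias, rng=rng)
        for _ in range(2000)]
p_desirable = np.mean([c == 1 for c, _ in runs])
mean_rt = np.mean([t for _, t in runs])
print(f"P(desirable conclusion) = {p_desirable:.2f}, mean decision time = {mean_rt:.2f}s")
```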

  2. Bias Detection Platform Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 6, 2025
    Cite
    Growth Market Reports (2025). Bias Detection Platform Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/bias-detection-platform-market
    Explore at:
    pptx, pdf, csv
    Dataset updated
    Oct 6, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Bias Detection Platform Market Outlook



    According to our latest research, the global Bias Detection Platform market size reached USD 1.42 billion in 2024, reflecting a surge in demand for advanced, ethical, and transparent decision-making tools across industries. The market is expected to grow at a CAGR of 17.8% during the forecast period, reaching a projected value of USD 6.13 billion by 2033. This robust growth is primarily driven by the increasing adoption of artificial intelligence (AI) and machine learning (ML) technologies, which has highlighted the urgent need for solutions that can identify and mitigate bias in automated systems and data-driven processes. As organizations worldwide strive for fairness, compliance, and inclusivity, bias detection platforms are becoming a cornerstone of responsible digital transformation.




    One of the key growth factors for the Bias Detection Platform market is the rapid integration of AI and ML algorithms into critical business operations. As enterprises leverage these technologies to automate decision-making in areas such as recruitment, financial services, and healthcare, the risk of unintentional bias in algorithms has become a significant concern. Regulatory bodies and industry watchdogs are increasingly mandating transparency and accountability in automated systems, prompting organizations to invest in bias detection platforms to ensure compliance and mitigate reputational risks. Furthermore, the proliferation of big data analytics has amplified the need for robust tools that can scrutinize massive datasets for hidden biases, ensuring that business insights and actions are both accurate and equitable.




    Another major driver fueling market growth is the heightened focus on diversity, equity, and inclusion (DEI) initiatives across both public and private sectors. Organizations are under mounting pressure from stakeholders, including customers, investors, and employees, to demonstrate their commitment to fair and unbiased practices. Bias detection platforms are being deployed to audit hiring processes, marketing campaigns, lending decisions, and other critical workflows, helping organizations identify and rectify discriminatory patterns. The increasing availability of advanced software and services that can seamlessly integrate with existing IT infrastructure is further accelerating adoption, making bias detection accessible to enterprises of all sizes.




    The evolution of regulatory frameworks and ethical standards around AI and data usage is also acting as a catalyst for market expansion. Governments and international bodies are introducing stringent guidelines to govern the ethical use of AI, with a particular emphasis on eliminating bias and ensuring fairness. This regulatory momentum is compelling organizations to adopt proactive measures, including the implementation of bias detection platforms, to avoid legal liabilities and maintain public trust. Additionally, the growing awareness of the social and economic consequences of biased systems is encouraging a broader range of industries to prioritize bias detection as a core component of their risk management and governance strategies.




    From a regional perspective, North America continues to dominate the Bias Detection Platform market, accounting for the largest share of global revenue in 2024. This leadership is attributed to the region’s early adoption of AI technologies, strong regulatory oversight, and a high concentration of technology-driven enterprises. Europe follows closely, benefiting from progressive data protection laws and a robust emphasis on ethical AI. Meanwhile, the Asia Pacific region is emerging as a high-growth market, driven by rapid digitalization, expanding IT infrastructure, and increasing awareness of bias-related challenges in diverse sectors. Latin America and the Middle East & Africa are also witnessing steady growth, supported by rising investments in digital transformation and regulatory advancements.





    Component Analysis



    The Bias Detection Platform market is

  3. Data from: Citizen science can complement professional invasive plant...

    • data.niaid.nih.gov
    • search.dataone.org
    • +1 more
    zip
    Updated Sep 11, 2024
    Cite
    Monica Dimson (2024). Citizen science can complement professional invasive plant surveys and improve estimates of suitable habitat [Dataset]. http://doi.org/10.5068/D1769Q
    Explore at:
    zip
    Dataset updated
    Sep 11, 2024
    Dataset provided by
    University of California, Los Angeles
    Authors
    Monica Dimson
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Aim: Citizen science is a cost-effective potential source of invasive species occurrence data. However, data quality issues due to unstructured sampling approaches may discourage the use of these observations by science and conservation professionals. This study explored the utility of low-structure iNaturalist citizen science data in invasive plant monitoring. We first examined the prevalence of invasive taxa in iNaturalist plant observations and sampling biases associated with those data. Using four invasive species as examples, we then compared iNaturalist and professional agency observations and used the two datasets to model suitable habitat for each species.

    Location: Hawaiʻi, USA

    Methods: To estimate the prevalence of invasive plant data, we compared the number of species and observations recorded in iNaturalist to botanical checklists for Hawaiʻi. Sampling bias was quantified along gradients of site accessibility, protective status, and vegetation disturbance using a bias index. Habitat suitability for four invasive species was modeled in Maxent, using observations from iNaturalist, professional agencies, and stratified subsets of iNaturalist data.

    Results: iNaturalist plant observations were biased toward invasive species, which were frequently recorded in areas with higher road/trail density and vegetation disturbance. Professional observations of four example invasive species tended to occur in less accessible, native-dominated sites. Habitat suitability models based on iNaturalist versus professional data showed moderate overlap and different distributions of suitable habitat across vegetation disturbance classes. Stratifying iNaturalist observations had little effect on how suitable habitat was distributed for the species modeled in this study.

    Main conclusions: Opportunistic iNaturalist observations have the potential to complement and expand professional invasive plant monitoring, which we found was often affected by inverse sampling biases. Invasive species represented a high proportion of iNaturalist plant observations, and were recorded in environments that were not captured by professional surveys. Combining the datasets thus led to more comprehensive estimates of suitable habitat.
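
    The "stratified subsets of iNaturalist data" mentioned above can be sketched in a few lines of pandas: cap the number of opportunistic records contributed by each stratum (for example, each vegetation-disturbance class) so that heavily visited areas do not dominate. This is an illustration, not the study's code; the column names and the per-class cap are assumptions.

```python
# Stratified subsampling of occurrence records (illustrative; column names assumed).
import pandas as pd

def stratified_subsample(occ: pd.DataFrame, strata_col: str, n_per_class: int,
                         seed: int = 42) -> pd.DataFrame:
    """Keep at most n_per_class records from each stratum of strata_col."""
    return (occ.groupby(strata_col, group_keys=False)
               .apply(lambda g: g.sample(min(len(g), n_per_class), random_state=seed)))

# Hypothetical usage with an iNaturalist export:
# occ = pd.read_csv("inaturalist_observations.csv")
# balanced = stratified_subsample(occ[occ["species"] == "Psidium cattleyanum"],
#                                 strata_col="disturbance_class", n_per_class=100)
```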

  4. Data and Code for The Short- and the Long-Run Impact of Gender-Biased...

    • openicpsr.org
    Updated Sep 4, 2022
    Cite
    Victor Lavy; Rigissa Megalokonomou (2022). Data and Code for The Short- and the Long-Run Impact of Gender-Biased Teachers [Dataset]. http://doi.org/10.3886/E179241V1
    Explore at:
    Dataset updated
    Sep 4, 2022
    Dataset provided by
    American Economic Association
    Authors
    Victor Lavy; Rigissa Megalokonomou
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    2002 - 2012
    Area covered
    Greece
    Description

    We examine the persistence of teachers' gender biases by following teachers over time in different classes. We find a very high correlation of gender biases for teachers across their classes. Based on out-of-sample measures of these biases, we estimate substantial effects of these biases on students' performance in university admission exams, choice of university field of study, and quality of the enrolled program. The effects on university choice outcomes are larger for girls, explaining some gender differences in STEM majors. Part of these effects, which are more prevalent among less effective teachers, is mediated by changing school attendance. These are the data that produce the results found in the related paper.

  5. Data from: Towards Identifying and Reducing the Bias of Disease Information...

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Jun 9, 2016
    Cite
    Zhang, Hong-Yan; Sui, Daniel Z.; Wang, Jin-Feng; Huang, Ji-Xia; Xu, Cheng-Dong; Hu, Mao-Gui; Huang, Da-Cang (2016). Towards Identifying and Reducing the Bias of Disease Information Extracted from Search Engine Data [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001587385
    Explore at:
    Dataset updated
    Jun 9, 2016
    Authors
    Zhang, Hong-Yan; Sui, Daniel Z.; Wang, Jin-Feng; Huang, Ji-Xia; Xu, Cheng-Dong; Hu, Mao-Gui; Huang, Da-Cang
    Description

    The estimation of disease prevalence in online search engine data (e.g., Google Flu Trends (GFT)) has received a considerable amount of scholarly and public attention in recent years. While the utility of search engine data for disease surveillance has been demonstrated, the scientific community still seeks ways to identify and reduce biases that are embedded in search engine data. The primary goal of this study is to explore new ways of improving the accuracy of disease prevalence estimations by combining traditional disease data with search engine data. A novel method, Biased Sentinel Hospital-based Area Disease Estimation (B-SHADE), is introduced to reduce search engine data bias from a geographical perspective. To monitor search trends on Hand, Foot and Mouth Disease (HFMD) in Guangdong Province, China, we tested our approach by selecting 11 keywords from the Baidu index platform, a Chinese big-data analytics platform similar to GFT. The correlation between the number of real cases and the composite index was 0.8. After decomposing the composite index at the city level, we found that only 10 cities presented a correlation of close to 0.8 or higher. These cities were found to be more stable with respect to search volume, and they were selected as sample cities in order to estimate the search volume of the entire province. After the estimation, the correlation improved from 0.8 to 0.864. After fitting the revised search volume with historical cases, the mean absolute error was 11.19% lower than it was when the original search volume and historical cases were combined. To our knowledge, this is the first study to reduce search engine data bias levels through the use of rigorous spatial sampling strategies.
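
    A much-simplified illustration of the sentinel-site idea described above (not the actual B-SHADE implementation): keep the cities whose search-volume series track reported cases with a correlation of roughly 0.8 or higher, then rescale their pooled volume to stand in for the whole province. The data layout and column roles below are assumptions.

```python
# Correlation-based selection of "sentinel" cities and a simple provincial estimate.
import pandas as pd

def select_sentinels(search_by_city: pd.DataFrame, cases: pd.Series,
                     min_corr: float = 0.8) -> list:
    """search_by_city: rows = weeks, columns = cities; cases: weekly case counts."""
    corr = search_by_city.corrwith(cases)
    return corr[corr >= min_corr].index.tolist()

def estimate_province_volume(search_by_city: pd.DataFrame, sentinels: list) -> pd.Series:
    # Scale the sentinel total so its long-run mean matches the all-city total.
    sentinel_total = search_by_city[sentinels].sum(axis=1)
    all_total = search_by_city.sum(axis=1)
    return sentinel_total * (all_total.mean() / sentinel_total.mean())
```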

  6. MBIC – A Media Bias Annotation Dataset

    • kaggle.com
    zip
    Updated Jan 22, 2024
    Cite
    Timo Spinde (2024). MBIC – A Media Bias Annotation Dataset [Dataset]. https://www.kaggle.com/timospinde/mbic-a-media-bias-annotation-dataset
    Explore at:
    zip (4599669 bytes)
    Dataset updated
    Jan 22, 2024
    Authors
    Timo Spinde
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Find more and related research on: https://media-bias-research.org

    Many people consider news articles to be a reliable source of information on current events. However, due to the range of factors influencing news agencies, such coverage may not always be impartial. Media bias, or slanted news coverage, can have a substantial impact on public perception of events, and, accordingly, can potentially alter the beliefs and views of the public. The main data gap in current research on media bias detection is a robust, representative, and diverse dataset containing annotations of biased words and sentences. In particular, existing datasets do not control for the individual background of annotators, which may affect their assessment and, thus, represents critical information for contextualizing their annotations. In this poster, we present a matrix-based methodology to crowdsource such data using a self-developed annotation platform. We also present MBIC (Media Bias Including Characteristics) - the first sample of 1,700 statements representing various media bias instances. The statements were reviewed by ten annotators each and contain labels for media bias identification both on the word and sentence level. MBIC is the first available dataset about media bias reporting detailed information on annotator characteristics and their individual background. The current dataset already significantly extends existing data in this domain providing unique and more reliable insights into the perception of bias. In future, we will further extend it both with respect to the number of articles and annotators per article.
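
    A minimal sketch of working with the annotations, assuming a long-format table with one row per (statement, annotator) pair. The file and column names below are placeholders, not the published schema; adjust them to the files in the download.

```python
# Aggregate the ten per-annotator sentence labels into one majority label per statement.
import pandas as pd

annotations = pd.read_csv("mbic_annotations.csv")        # hypothetical file name
majority = (annotations
            .groupby("sentence")["label_bias"]           # assumed column names
            .agg(lambda s: s.value_counts().idxmax())    # most frequent label wins
            .rename("majority_label"))
print(majority.value_counts())
```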

    You can find and cite the paper describing the dataset as follows:

    T. Spinde, L. Rudnitckaia, K. Sinha, F. Hamborg, B. Gipp, K. Donnay, “MBIC – A Media Bias Annotation Dataset Including Annotator Characteristics”. In: Proceedings of the iConference 2021.

    BibTeX: @InProceedings{Spinde2021MBIC, title = {MBIC – A Media Bias Annotation Dataset Including Annotator Characteristics}, booktitle = {Proceedings of the iConference 2021}, author = {Spinde, Timo and Rudnitckaia, Lada and Sinha, Kanishka and Hamborg, Felix and Gipp, Bela and Donnay, Karsten}, year = {2021}, location = {Beijing, China (Virtual Event)}, month = {March}, topic = {newsanalysis}}

  7. Data from: Mapping Species Distributions with MAXENT Using a Geographically...

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated May 12, 2014
    Cite
    Engler, Jan O.; Rödder, Dennis; Fourcade, Yoan; Secondi, Jean (2014). Mapping Species Distributions with MAXENT Using a Geographically Biased Sample of Presence Data: A Performance Assessment of Methods for Correcting Sampling Bias [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001238306
    Explore at:
    Dataset updated
    May 12, 2014
    Authors
    Engler, Jan O.; Rödder, Dennis; Fourcade, Yoan; Secondi, Jean
    Description

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one “virtual” derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases.
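
    One common way to realise the "systematic sampling of records" that performed best here is spatial thinning on a regular grid, keeping a single occurrence per cell so that densely surveyed areas do not dominate model training. The sketch below is illustrative rather than the authors' code; the 0.1-degree cell size and column names are assumptions.

```python
# Grid-based thinning of presence records (illustrative).
import pandas as pd

def grid_thin(occ: pd.DataFrame, lon: str = "longitude", lat: str = "latitude",
              cell_size: float = 0.1, seed: int = 0) -> pd.DataFrame:
    """Keep one randomly chosen record per cell of a regular lon/lat grid."""
    cells = ((occ[lon] // cell_size).astype(int).astype(str) + "_" +
             (occ[lat] // cell_size).astype(int).astype(str))
    return (occ.assign(_cell=cells)
               .groupby("_cell", group_keys=False)
               .apply(lambda g: g.sample(1, random_state=seed))
               .drop(columns="_cell"))
```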

  8. Replication data for: Testing for Publication Bias in Political Science

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 20, 2023
    Cite
    Alan Gerber; Donald Green; David Nickerson (2023). Replication data for: Testing for Publication Bias in Political Science [Dataset]. http://doi.org/10.7910/DVN/DQC9KV
    Explore at:
    Dataset updated
    Nov 20, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Alan Gerber; Donald Green; David Nickerson
    Description

    If the publication decisions of journals are a function of the statistical significance of research findings, the published literature may suffer from “publication bias.” This paper describes a method for detecting publication bias. We point out that to achieve statistical significance, the effect size must be larger in small samples. If publications tend to be biased against statistically insignificant results, we should observe that the effect size diminishes as sample sizes increase. This proposition is tested and confirmed using the experimental literature on voter mobilization.
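
    The detection logic can be made concrete with a tiny simulation: a result is "significant" only when the estimated effect exceeds roughly 1.96 standard errors, and the standard error shrinks as 1/sqrt(n), so a literature filtered on significance shows effect sizes that fall as samples grow. The sketch below uses synthetic study-level data, not the replication files, and regresses published effects on 1/sqrt(n); a clearly positive slope is the telltale pattern.

```python
# Simulated demonstration of the effect-size / sample-size test for publication bias.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = rng.integers(50, 5000, size=200)        # study sample sizes
se = 1.0 / np.sqrt(n)                       # standard error scales as 1/sqrt(n)
effect = rng.normal(0.05, se)               # small true effect, noisy estimates
published = np.abs(effect / se) > 1.96      # only "significant" studies get published

X = sm.add_constant(1.0 / np.sqrt(n[published]))
fit = sm.OLS(effect[published], X).fit()
print(fit.params)   # positive slope on 1/sqrt(n) indicates publication bias
```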

  9. Data from: Improving short-term grade block models: alternative for...

    • scielo.figshare.com
    jpeg
    Updated May 31, 2023
    Cite
    Cristina da Paixão Araújo; João Felipe Coimbra Leite Costa; Vanessa Cerqueira Koppe (2023). Improving short-term grade block models: alternative for correcting soft data [Dataset]. http://doi.org/10.6084/m9.figshare.5772303.v1
    Explore at:
    jpeg
    Dataset updated
    May 31, 2023
    Dataset provided by
    SciELO (http://www.scielo.org/)
    Authors
    Cristina da Paixão Araújo; João Felipe Coimbra Leite Costa; Vanessa Cerqueira Koppe
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract: Short-term mining planning typically relies on samples obtained from channels or less-accurate sampling methods. The results may include larger sampling errors than those derived from diamond drill hole core samples. The aim of this paper is to evaluate the impact of the sampling error on grade estimation and propose a method of correcting the imprecision and bias in the soft data. In addition, this paper evaluates the benefits of using soft data in mining planning. These concepts are illustrated via a gold mine case study, where two different data types are presented. The study used Au grades collected via diamond drilling (hard data) and channels (soft data). Four methodologies were considered for estimation of the Au grades of each block to be mined: ordinary kriging with hard and soft data pooled without considering differences in data quality; ordinary kriging with only hard data; standardized ordinary kriging with pooled hard and soft data; and standardized ordinary cokriging. The results show that even biased samples collected using poor sampling protocols improve the estimates more than a limited number of precise and unbiased samples. A well-designed estimation method corrects the biases embedded in the samples, mitigating their propagation to the block model.
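
    As a rough illustration of correcting soft data before pooling (a deliberate simplification, not the standardized-kriging or cokriging workflow evaluated in the paper), one can rescale the channel-sample grades so that their mean and variance match the drill-hole grades before the two are combined:

```python
# Rescale soft (channel) grades to the mean and variance of hard (drill-hole) grades.
import numpy as np

def standardize_soft(soft: np.ndarray, hard: np.ndarray) -> np.ndarray:
    return (soft - soft.mean()) / soft.std() * hard.std() + hard.mean()

# Hypothetical usage:
# hard = np.loadtxt("au_drillhole_grades.txt")
# soft = np.loadtxt("au_channel_grades.txt")
# pooled = np.concatenate([hard, standardize_soft(soft, hard)])
```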

  10. Data from: Approach-induced biases in human information sampling

    • zenodo.org
    • data.niaid.nih.gov
    • +1 more
    pdf, zip
    Updated Jul 19, 2024
    Cite
    Laurence T. Hunt; Robb B. Rutledge; W. M. Nishantha Malalasekera; Steven W. Kennerley; Raymond J. Dolan; Laurence T. Hunt; Robb B. Rutledge; W. M. Nishantha Malalasekera; Steven W. Kennerley; Raymond J. Dolan (2024). Data from: Approach-induced biases in human information sampling [Dataset]. http://doi.org/10.5061/dryad.nb41c
    Explore at:
    zip, pdf
    Dataset updated
    Jul 19, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Laurence T. Hunt; Robb B. Rutledge; W. M. Nishantha Malalasekera; Steven W. Kennerley; Raymond J. Dolan; Laurence T. Hunt; Robb B. Rutledge; W. M. Nishantha Malalasekera; Steven W. Kennerley; Raymond J. Dolan
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Information sampling is often biased towards seeking evidence that confirms one's prior beliefs. Despite such biases being a pervasive feature of human behavior, their underlying causes remain unclear. Many accounts of these biases appeal to limitations of human hypothesis testing and cognition, de facto evoking notions of bounded rationality, but neglect more basic aspects of behavioral control. Here, we investigated a potential role for Pavlovian approach in biasing which information humans will choose to sample. We collected a large novel dataset from 32,445 human subjects, making over 3 million decisions, who played a gambling task designed to measure the latent causes and extent of information-sampling biases. We identified three novel approach-related biases, formalized by comparing subject behavior to a dynamic programming model of optimal information gathering. These biases reflected the amount of information sampled ("positive evidence approach"), the selection of which information to sample ("sampling the favorite"), and the interaction between information sampling and subsequent choices ("rejecting unsampled options"). The prevalence of all three biases was related to a Pavlovian approach-avoid parameter quantified within an entirely independent economic decision task. Our large dataset also revealed that individual differences in the amount of information gathered are a stable trait across multiple gameplays and can be related to demographic measures, including age and educational attainment. As well as revealing limitations in cognitive processing, our findings suggest information sampling biases reflect the expression of primitive, yet potentially ecologically adaptive, behavioral repertoires. One such behavior is sampling from options that will eventually be chosen, even when other sources of information are more pertinent for guiding future action.

  11. Women in Headlines: Bias

    • kaggle.com
    zip
    Updated Jan 22, 2023
    Cite
    The Devastator (2023). Women in Headlines: Bias [Dataset]. https://www.kaggle.com/datasets/thedevastator/women-in-headlines-bias
    Explore at:
    zip (30108592 bytes)
    Dataset updated
    Jan 22, 2023
    Authors
    The Devastator
    Description

    Women in Headlines: Bias

    Investigating Gendered Language, Temporal Trends, and Themes

    By Amber Thomas

    About this dataset

    This dataset contains all of the data used in the Pudding essay When Women Make Headlines, published in January 2022. It was created to analyze gendered language, bias, and language themes in news headlines from across the world. It contains headlines from the top 50 news publications and news agencies in four major countries (USA, UK, India, and South Africa), as published by SimilarWeb (as of 2021-06-06).

    To collect this data we used RapidAPI's Google News API to query headlines containing one or more keywords selected based on existing research by Huimin Xu and team and The Swaddle team. We analyzed the words used in headlines by manually curating two dictionaries: gendered words about women (words that are explicitly gendered) and words that denote societal/behavioral stereotypes about women. To calculate bias scores, we used technology developed through Yasmeen Hitti and team's research on gender bias text analysis. To categorize words into themes (violence/crime, empowerment, race/ethnicity/identity, etc.), we manually curated four dictionaries using Natural Language Processing packages for Python such as spacy and nltk. In addition, polarity scores from the vaderSentiment algorithm helped us examine differences between women-centered and non-women-centered headlines, as well as differences from the global polarity baselines of each country's most visited publications and news agencies according to SimilarWeb 2020 statistics.

    This dataset gives journalists, researchers, and educators who study gender equity in media outlets around the world further insight into potential disparities with just a few lines of code. Any discoveries made using this data should provide valuable support for evidence-based argumentation. Let us advocate for greater awareness of female representation and better-quality coverage.

    How to use the dataset

    This dataset provides a comprehensive look at the portrayal of women in headlines from 2010-2020. Using this dataset, researchers and data scientists can explore a range of topics including language used to describe women, bias associated with different topics or publications, and temporal patterns in headlines about women over time.

    To use this dataset effectively, it is helpful to understand the structure of the data. The columns include headline_no_site (the text of the headline without any information about which publication it is from), time (the date and time that the article was published), country (the country where it was published), bias score (calculated using Gender Bias Taxonomy V1.0) and year (the year that the article was published).

    By exploring these columns individually or in combination, such as by publication or by topic, there are many ways to make meaningful discoveries using this dataset. For example, one could explore whether certain news outlets employ more gender-biased language when writing about female subjects than other outlets, or investigate whether female-centric stories have higher or lower bias scores than average for a particular topic across multiple countries over time. This type of analysis helps researchers gain insight into how our culture's dialogue has evolved over recent years as it relates to women in media coverage worldwide.
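
    With the columns described above, a few lines of pandas are enough to start exploring. The file name below comes from the Columns section; the bias-score column is assumed to be literally named "bias" and may differ in the actual download.

```python
# Average headline bias score per country over time (column names partly assumed).
import pandas as pd

df = pd.read_csv("headlines_reduced_temporal.csv", parse_dates=["time"])
by_country_year = (df.groupby(["country", "year"])["bias"]
                     .mean()
                     .unstack("country"))
print(by_country_year.round(3))
```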

    Research Ideas

    • A comparative, cross-country study of the usage of gendered language and the prevalence of gender bias in headlines to better understand regional differences.
    • Creating an interactive visualization showing the evolution of headline bias scores over time with respect to a certain topic or population group (such as women).
    • Analyzing how different themes are covered in headlines featuring women compared to those without, such as crime or violence versus empowerment or race and ethnicity, to see if there’s any difference in how they are portrayed by the media

    Acknowledgements

    If you use this dataset in your research, please credit the original authors.

    License

    See the dataset description for more information.

    Columns

    File: headlines_reduced_temporal.csv

    | Column name | Description |
    |:------------|:------------|
    ...

  12. Marketing Bias data

    • cseweb.ucsd.edu
    json
    + more versions
    Cite
    UCSD CSE Research Project, Marketing Bias data [Dataset]. https://cseweb.ucsd.edu/~jmcauley/datasets.html
    Explore at:
    json
    Dataset authored and provided by
    UCSD CSE Research Project
    Description

    These datasets contain attributes about products sold on ModCloth and Amazon which may be sources of bias in recommendations (in particular, attributes about how the products are marketed). Data also includes user/item interactions for recommendation.

    Metadata includes

    • ratings

    • product images

    • user identities

    • item sizes, user genders

  13. Data from: Biased cognition in East Asian and Western cultures

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Oct 15, 2019
    Cite
    André, Julia; Chen, Lu Hua; Toulopoulou, Timothea; Sham, Pak; Parkinson, Brian; Chen, Eric; Smith, Louise; Yiend, Jenny (2019). Biased cognition in East Asian and Western cultures [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000188126
    Explore at:
    Dataset updated
    Oct 15, 2019
    Authors
    André, Julia; Chen, Lu Hua; Toulopoulou, Timothea; Sham, Pak; Parkinson, Brian; Chen, Eric; Smith, Louise; Yiend, Jenny
    Area covered
    East Asia
    Description

    The majority of cognitive bias research has been conducted in Western cultures. We examined cross-cultural differences in cognitive biases, comparing Westerners’ and East Asians’ performance and acculturation following migration to the opposite culture. Two local (UK, Hong Kong) and four migrant (short-term and long-term migrants to each culture) samples completed culturally validated tasks measuring attentional and interpretation bias. Hong Kong residents were more positively biased than people living in the UK on several measures, consistent with the lower prevalence of psychological disorders in East Asia. Migrants to the UK had reduced positive biases on some tasks, while migrants to Hong Kong were more positive, compared to their respective home counterparts, consistent with acculturation in attention and interpretation biases. These data illustrate the importance of cultural validation of findings and, if replicated, would have implications for the mental health and well-being of migrants.

  14. Resampling methods.

    • plos.figshare.com
    bin
    Updated Jul 27, 2023
    + more versions
    Cite
    Annie Kim; Inkyung Jung (2023). Resampling methods. [Dataset]. http://doi.org/10.1371/journal.pone.0288540.t001
    Explore at:
    bin
    Dataset updated
    Jul 27, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Annie Kim; Inkyung Jung
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Class imbalance is a major problem in classification, wherein the decision boundary is easily biased toward the majority class. A data-level solution (resampling) is one possible solution to this problem. However, several studies have shown that resampling methods can deteriorate the classification performance. This is because of the overgeneralization problem, which occurs when samples produced by the oversampling technique that should be represented in the minority class domain are introduced into the majority-class domain. This study shows that the overgeneralization problem is aggravated in complex data settings and introduces two alternate approaches to mitigate it. The first approach involves incorporating a filtering method into oversampling. The second approach is to apply undersampling. The main objective of this study is to provide guidance on selecting optimal resampling methods in imbalanced and complex datasets to improve classification performance. Simulation studies and real data analyses were performed to compare the resampling results in various scenarios with different complexities, imbalances, and sample sizes. In the case of noncomplex datasets, undersampling was found to be optimal. However, in the case of complex datasets, applying a filtering method to delete misallocated examples was optimal. In conclusion, this study can aid researchers in selecting the optimal method for resampling complex datasets.
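
    The two remedies compared above, oversampling combined with a filtering step versus undersampling, can be tried in a few lines with the imbalanced-learn package. This is a generic illustration on synthetic data, not the simulation code behind the paper; SMOTE followed by edited-nearest-neighbours cleaning stands in for "oversampling plus filtering".

```python
# Compare oversampling+filtering (SMOTE + ENN) with random undersampling.
import numpy as np
from sklearn.datasets import make_classification
from imblearn.combine import SMOTEENN
from imblearn.under_sampling import RandomUnderSampler

# Synthetic 95:5 imbalanced binary problem.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

X_os, y_os = SMOTEENN(random_state=0).fit_resample(X, y)            # oversample, then filter
X_us, y_us = RandomUnderSampler(random_state=0).fit_resample(X, y)  # undersample the majority

print("original:", np.bincount(y), "SMOTE+ENN:", np.bincount(y_os),
      "undersampled:", np.bincount(y_us))
```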

  15. Replication Data (A) for 'Biased Programmers or Biased Data?': Individual...

    • dataverse.harvard.edu
    Updated Sep 2, 2020
    Cite
    Bo Cowgill; Fabrizio Dell'Acqua; Sam Deng; Daniel Hsu; Nakul Verma; Augustin Chaintreau (2020). Replication Data (A) for 'Biased Programmers or Biased Data?': Individual Measures of Numeracy, Literacy and Problem Solving Skill -- and Biographical Data -- for a Representative Sample of 200K OECD Residents [Dataset]. http://doi.org/10.7910/DVN/JAJ3CP
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 2, 2020
    Dataset provided by
    Harvard Dataverse
    Authors
    Bo Cowgill; Fabrizio Dell'Acqua; Sam Deng; Daniel Hsu; Nakul Verma; Augustin Chaintreau
    License

    https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.3/customlicense?persistentId=doi:10.7910/DVN/JAJ3CP

    Description

    This is a cleaned and merged version of the OECD's Programme for the International Assessment of Adult Competencies. The data contains individual person-measures of several basic skills including literacy, numeracy and critical thinking, along with extensive biographical details about each subject. PIAAC is essentially a standardized test taken by a representative sample of all OECD countries (approximately 200K individuals in total). We have found this data useful in studies of predictive algorithms and human capital, in part because of its high quality, size, number and quality of biographical features per subject and representativeness of the population at large.

  16. Data from: Sampling schemes and drift can bias admixture proportions...

    • search.dataone.org
    • datadryad.org
    Updated Jun 3, 2025
    Cite
    Ken Toyama; Pierre-André Crochet; Raphaël Leblois (2025). Sampling schemes and drift can bias admixture proportions inferred by STRUCTURE [Dataset]. http://doi.org/10.5061/dryad.gf1vhhmkw
    Explore at:
    Dataset updated
    Jun 3, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Ken Toyama; Pierre-André Crochet; Raphaël Leblois
    Time period covered
    Jan 1, 2020
    Description

    The interbreeding of individuals coming from genetically differentiated but incompletely isolated populations can lead to the formation of admixed populations, having important implications in ecology and evolution. In this simulation study, we evaluate how individual admixture proportions estimated by the software structure are quantitatively affected by different factors. Using various scenarios of admixture between two diverging populations, we found that unbalanced sampling from parental populations may seriously bias the inferred admixture proportions; moreover, proportionally large samples from the admixed population can also decrease the accuracy and precision of the inferences. As expected, weak differentiation between parental populations and drift after the admixture event strongly increase the biases caused by uneven sampling. We also show that admixture proportions are generally more biased when parental populations unequally contributed to the admixed population. Finally, w...

  17. Hate Speech and Bias against Asians, Blacks, Jews, Latines, and Muslims: A...

    • data-staging.niaid.nih.gov
    • data.niaid.nih.gov
    Updated Oct 26, 2023
    + more versions
    Cite
    Jikeli, Gunther; Karali, Sameer; Soemer, Katharina (2023). Hate Speech and Bias against Asians, Blacks, Jews, Latines, and Muslims: A Dataset for Machine Learning and Text Analytics [Dataset]. https://data-staging.niaid.nih.gov/resources?id=zenodo_8147307
    Explore at:
    Dataset updated
    Oct 26, 2023
    Dataset provided by
    Indiana University Bloomington
    Authors
    Jikeli, Gunther; Karali, Sameer; Soemer, Katharina
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Institute for the Study of Contemporary Antisemitism (ISCA) at Indiana University Dataset on bias against Asians, Blacks, Jews, Latines, and Muslims

    The ISCA project compiled this dataset using an annotation portal, which was used to label tweets as either biased or non-biased, among other labels. Note that the annotation was done on live data, including images and context, such as threads. The original data comes from annotationportal.com. They include representative samples of live tweets from the years 2020 and 2021 with the keywords "Asians, Blacks, Jews, Latinos, and Muslims". A random sample of 600 tweets per year was drawn for each of the keywords. This includes retweets. Due to a sampling error, the sample for the year 2021 for the keyword "Jews" has only 453 tweets from 2021 and 147 from the first eight months of 2022 and it includes some tweets from the query with the keyword "Israel." The tweets were divided into six samples of 100 tweets, which were then annotated by three to seven students in the class "Researching White Supremacism and Antisemitism on Social Media" taught by Gunther Jikeli, Elisha S. Breton, and Seth Moller at Indiana University in the fall of 2022, see this report. Annotators used a scale from 1 to 5 (confident not biased, probably not biased, don't know, probably biased, confident biased). The definitions of bias against each minority group used for annotation are also included in the report. If a tweet called out or denounced bias against the minority in question, it was labeled as "calling out bias." The labels of whether a tweet is biased or calls out bias are based on a 75% majority vote. We considered "probably biased" and "confident biased" as biased and "confident not biased," "probably not biased," and "don't know" as not biased.
    The types of stereotypes vary widely across the different categories of prejudice. While about a third of all biased tweets were classified as "hate" against the minority, the stereotypes in the tweets often matched common stereotypes about the minority. Asians were blamed for the Covid pandemic. Blacks were seen as inferior and associated with crime. Jews were seen as powerful and held collectively responsible for the actions of the State of Israel. Some tweets denied the Holocaust. Hispanics/Latines were portrayed as being in the country illegally and as "invaders," in addition to stereotypical accusations of being lazy, stupid, or having too many children. Muslims, on the other hand, were often collectively blamed for terrorism and violence, though often in conversations about Muslims in India.

    Content:

    This dataset contains 5880 tweets that cover a wide range of topics common in conversations about Asians, Blacks, Jews, Latines, and Muslims. 357 tweets (6.1 %) are labeled as biased and 5523 (93.9 %) are labeled as not biased. 1365 tweets (23.2 %) are labeled as calling out or denouncing bias.

    • 1180 out of 5880 tweets (20.1 %) contain the keyword "Asians"; 590 were posted in 2020 and 590 in 2021. 39 tweets (3.3 %) are biased against Asian people. 370 tweets (31.4 %) call out bias against Asians.
    • 1160 out of 5880 tweets (19.7 %) contain the keyword "Blacks"; 578 were posted in 2020 and 582 in 2021. 101 tweets (8.7 %) are biased against Black people. 334 tweets (28.8 %) call out bias against Blacks.
    • 1189 out of 5880 tweets (20.2 %) contain the keyword "Jews"; 592 were posted in 2020, 451 in 2021, and, as mentioned above, 146 tweets are from 2022. 83 tweets (7 %) are biased against Jewish people. 220 tweets (18.5 %) call out bias against Jews.
    • 1169 out of 5880 tweets (19.9 %) contain the keyword "Latinos"; 584 were posted in 2020 and 585 in 2021. 29 tweets (2.5 %) are biased against Latines. 181 tweets (15.5 %) call out bias against Latines.
    • 1182 out of 5880 tweets (20.1 %) contain the keyword "Muslims"; 593 were posted in 2020 and 589 in 2021. 105 tweets (8.9 %) are biased against Muslims. 260 tweets (22 %) call out bias against Muslims.

    File Description:

    The dataset is provided in a csv file format, with each row representing a single message, including replies, quotes, and retweets. The file contains the following columns:
    'TweetID': Represents the tweet ID.
    'Username': Represents the username that published the tweet (if it is a retweet, this is the user who retweeted the original tweet).
    'Text': Represents the full text of the tweet (not pre-processed).
    'CreateDate': Represents the date the tweet was created.
    'Biased': Represents the label assigned by our annotators: biased (1) or not (0).
    'Calling_Out': Represents the label assigned by our annotators: calling out bias against minority groups (1) or not (0).
    'Keyword': Represents the keyword that was used in the query. The keyword can be in the text, including mentioned names, or in the username.
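
    Given these columns, the per-keyword percentages reported in the Content section can be recomputed with a few lines of pandas. The column names follow the file description above; the csv file name is a placeholder for whatever the download is called.

```python
# Share of biased and calling-out tweets per query keyword.
import pandas as pd

tweets = pd.read_csv("isca_bias_tweets.csv")        # placeholder file name
summary = (tweets.groupby("Keyword")[["Biased", "Calling_Out"]]
                 .mean()          # fraction of tweets with each label
                 .mul(100)
                 .round(1))
print(summary)
```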

    Licences

    Data is published under the terms of the "Creative Commons Attribution 4.0 International" licence (https://creativecommons.org/licenses/by/4.0)

    Acknowledgements

    We are grateful for the technical collaboration with Indiana University's Observatory on Social Media (OSoMe). We thank all class participants for the annotations and contributions, including Kate Baba, Eleni Ballis, Garrett Banuelos, Savannah Benjamin, Luke Bianco, Zoe Bogan, Elisha S. Breton, Aidan Calderaro, Anaye Caldron, Olivia Cozzi, Daj Crisler, Jenna Eidson, Ella Fanning, Victoria Ford, Jess Gruettner, Ronan Hancock, Isabel Hawes, Brennan Hensler, Kyra Horton, Maxwell Idczak, Sanjana Iyer, Jacob Joffe, Katie Johnson, Allison Jones, Kassidy Keltner, Sophia Knoll, Jillian Kolesky, Emily Lowrey, Rachael Morara, Benjamin Nadolne, Rachel Neglia, Seungmin Oh, Kirsten Pecsenye, Sophia Perkovich, Joey Philpott, Katelin Ray, Kaleb Samuels, Chloe Sherman, Rachel Weber, Molly Winkeljohn, Ally Wolfgang, Rowan Wolke, Michael Wong, Jane Woods, Kaleb Woodworth, and Aurora Young. This work used Jetstream2 at Indiana University through allocation HUM200003 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.

  18. Data from: Fibromyalgia diagnosis and biased assessment: Sex, prevalence and...

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Sep 13, 2018
    Cite
    Rasker, Johannes J.; Walitt, Brian; Wolfe, Frederick; Häuser, Winfried; Perrot, Serge (2018). Fibromyalgia diagnosis and biased assessment: Sex, prevalence and bias [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000642697
    Explore at:
    Dataset updated
    Sep 13, 2018
    Authors
    Rasker, Johannes J.; Walitt, Brian; Wolfe, Frederick; Häuser, Winfried; Perrot, Serge
    Description

    Purpose: Multiple clinical and epidemiological studies have provided estimates of fibromyalgia prevalence and sex ratio, but different criteria sets and methodology, as well as bias, have led to widely varying estimates of prevalence (0.4% to >11%) and female predominance (>90% to <61%). In general, studies have failed to distinguish Criteria-based fibromyalgia (CritFM) from Clinical fibromyalgia (ClinFM). In the current study we compare CritFM with ClinFM to investigate gender and other biases in the diagnosis of fibromyalgia.

    Methods: We used a rheumatic disease databank and 2016 fibromyalgia criteria to study prevalence and sex ratios in a selection-biased sample of 1761 referred and diagnosed fibromyalgia patients and in an unbiased sample of 4342 patients with no diagnosis with respect to fibromyalgia. We compared diagnostic and clinical variables according to gender, and we reanalyzed a German population study (GPS) (n = 2435) using revised 2016 criteria for fibromyalgia.

    Results: In the selection-biased sample of referred patients with fibromyalgia, more than 90% were women. However, when an unselected sample of rheumatoid arthritis (RA) patients was studied for the presence of fibromyalgia, women represented 58.7% of fibromyalgia cases. Women had slightly more symptoms than men, including generalized pain (36.8% vs. 32.4%), count of 37 symptoms (4.7 vs. 3.7) and mean polysymptomatic distress scores (10.2 vs. 8.2). We also found a linear relation between the probability of being female and fibromyalgia and fibromyalgia severity. Women in the GPS represented 59.2% of cases.

    Discussion: The perception of fibromyalgia as almost exclusively (≥90%) a women’s disorder is not supported by data in unbiased studies. Using validated self-report criteria and unbiased selection, the female proportion of fibromyalgia cases was ≤60% in the unbiased studies, and the observed CritFM prevalence of fibromyalgia in the GPS was ~2%. ClinFM is the public face of fibromyalgia, but it is severely affected by selection and confirmation bias in the clinic and publications, underestimating men with fibromyalgia and overestimating women. We recommend the use of the 2016 fibromyalgia criteria for clinical diagnosis and epidemiology because of their updated scoring and generalized pain requirement. Fibromyalgia and generalized pain positivity, widespread pain (WPI), symptom severity scale (SSS) and polysymptomatic distress (PSD) scale should always be reported.

  19. Utrecht Fairness Recruitment dataset

    • kaggle.com
    zip
    Updated Mar 11, 2025
    Cite
    ICT Institute (2025). Utrecht Fairness Recruitment dataset [Dataset]. https://www.kaggle.com/datasets/ictinstitute/utrecht-fairness-recruitment-dataset
    Explore at:
    zip (47198 bytes)
    Dataset updated
    Mar 11, 2025
    Authors
    ICT Institute
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Area covered
    Utrecht
    Description

    This dataset is a purely synthetic dataset created to help educators and researchers understand fairness definitions. It is a convenient way to illustrate the differences between definitions such as fairness through unawareness, group fairness, statistical parity, predictive parity, equalised odds, or treatment equality. The dataset contains multiple sensitive features: age, gender, and lives-near-by. These can be combined to define many different sensitive groups. The dataset contains the decisions of five example decision methods that can be evaluated. When using this dataset, you do not need to train your own models; instead, you can focus on evaluating the existing ones.
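
    As a sketch of how the dataset is meant to be used, the snippet below computes two of the fairness notions named above, statistical parity and the group-wise rates behind equalised odds, for one example decision method. The file and column names (gender, qualified, decision) are assumptions about the csv layout, not the documented schema.

```python
# Evaluate an existing decision column for group fairness (no model training needed).
import pandas as pd

df = pd.read_csv("utrecht_fairness_recruitment.csv")   # placeholder file name

# Statistical parity: the selection rate should not depend on the sensitive attribute.
print("selection rate by gender:\n", df.groupby("gender")["decision"].mean())

# Equalised odds: true-positive and false-positive rates should match across groups.
for outcome, name in [(1, "TPR"), (0, "FPR")]:
    rates = (df[df["qualified"] == outcome]
             .groupby("gender")["decision"].mean())
    print(f"{name} by gender:\n", rates)
```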

    This dataset is described and analysed in the following paper. Please cite this paper when using this dataset:

    Burda, P. and Van Otterloo, S. (2024). Fairness definitions explained and illustrated with examples. Computers and Society Research Journal, 2025 (2). https://doi.org/10.54822/PASR6281

  20. Data from: Biodiversity Soup II: A bulk‐sample metabarcoding pipeline...

    • data.niaid.nih.gov
    • zenodo.org
    zip
    Updated Mar 17, 2021
    Cite
    Chunyan Yang; Kristine Bohmann; Xiaoyang Wang; Cai Wang; Nathan Wales; Zhaoli Ding; Shyam Gopalakrishnan; Wang Cai; Douglas W. Yu; Douglas Yu (2021). Biodiversity Soup II: A bulk‐sample metabarcoding pipeline emphasizing error reduction [Dataset]. http://doi.org/10.5061/dryad.ncjsxksrc
    Explore at:
    zip
    Dataset updated
    Mar 17, 2021
    Dataset provided by
    Globe Institute of Technology
    Kunming Institute of Zoology
    Authors
    Chunyan Yang; Kristine Bohmann; Xiaoyang Wang; Cai Wang; Nathan Wales; Zhaoli Ding; Shyam Gopalakrishnan; Wang Cai; Douglas W. Yu; Douglas Yu
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description
    1. Despite widespread recognition of its great promise to aid decision-making in environmental management, the applied use of metabarcoding requires improvements to reduce the multiple errors that arise during PCR amplification, sequencing, and library generation. We present a co-designed wet-lab and bioinformatic workflow for metabarcoding bulk samples that removes both false-positive (tag jumps, chimeras, erroneous sequences) and false-negative (‘dropout’) errors. However, we find that it is not possible to recover relative-abundance information from amplicon data, due to persistent species-specific biases.

    2. To present and validate our workflow, we created eight mock arthropod soups, all containing the same 248 arthropod morphospecies but differing in absolute and relative DNA concentrations, and we ran them under five different PCR conditions. Our pipeline includes qPCR-optimized PCR annealing temperature and cycle number, twin-tagging, multiple independent PCR replicates per sample, and negative and positive controls. In the bioinformatic portion, we introduce Begum, which is a new version of DAMe (Zepeda-Mendoza et al. 2016. BMC Res. Notes 9:255) that ignores heterogeneity spacers, allows primer mismatches when demultiplexing samples, and is more efficient. Like DAMe, Begum removes tag-jumped reads and removes sequence errors by keeping only sequences that appear in more than one PCR above a minimum copy number per PCR. The filtering thresholds are user-configurable.
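
    The core filtering rule described above, keeping only sequences that appear in more than one PCR above a minimum copy number, can be sketched as a simple operation on a read-count table with sequences as rows and PCR replicates as columns. This illustrates the rule only; it is not the Begum software, and the thresholds and input file are hypothetical.

```python
# Keep sequences supported by at least `min_pcrs` replicates with >= `min_copies` reads each.
import pandas as pd

def begum_like_filter(counts: pd.DataFrame, min_pcrs: int = 2,
                      min_copies: int = 3) -> pd.DataFrame:
    support = (counts >= min_copies).sum(axis=1)   # PCRs passing the copy threshold
    return counts.loc[support >= min_pcrs]

# counts = pd.read_csv("otu_by_pcr_counts.csv", index_col=0)   # hypothetical input
# kept = begum_like_filter(counts, min_pcrs=2, min_copies=3)
```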

    3. We report that OTU dropout frequency and taxonomic amplification bias are both reduced by using a PCR annealing temperature and cycle number on the low ends of the ranges currently used for the Leray-FolDegenRev primers. We also report that tag jumps and erroneous sequences can be nearly eliminated with Begum filtering, at the cost of only a small rise in dropouts. We replicate published findings that uneven size distribution of input biomasses leads to greater dropout frequency and that OTU size is a poor predictor of species input biomass. Finally, we find no evidence for ‘tag-biased’ PCR amplification.

    4. To aid learning, reproducibility, and the design and testing of alternative metabarcoding pipelines, we provide our Illumina and input-species sequence datasets, scripts, a spreadsheet for designing primer tags, and a tutorial.

    Methods: In this study, we tested the Begum pipeline with eight mock soups that differed in their absolute and relative DNA concentrations of 248 arthropod species. We metabarcoded the soups under five different PCR conditions that varied annealing temperatures (Ta) and PCR cycles, and we filtered the OTUs under different stringencies.
