Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The fight against misinformation and disinformation has developed substantially over the last decade, from the evolution of debiasing techniques (Lewandowsky, Ecker, Seifert, Schwarz & Cook, 2012) to the implementation of preventive interventions such as inoculation (McGuire, 1964; Roozenbeek, van der Linden, & Nygren, 2020) and news literacy (Vraga & Bode, 2020; Guess et al., 2020; Lutzke et al., 2019).
At the same time, attention has shifted markedly towards the dissemination of false information on social media, resulting in the adaptation of these interventions to these rapidly changing environments. Examples of social media interventions include the use of warnings or flags for unverified or debunked information (Clayton et al., 2019), the wisdom of crowds (Allen, Arechar, Pennycook & Rand, 2020) and attention priming (Pennycook, Epstein, Mosleh, Arechar, Eckes & Rand, 2019).
Recently, a new type of media literacy intervention, civic online reasoning (Wineburg & McGrew, 2017), has proven very effective in countering disinformation among high school and college students (McGrew, Breakstone, Ortega, Smith & Wineburg, 2018; McGrew, Smith, Breakstone, Ortega & Wineburg, 2019), as well as elderly citizens (Moore & Hancock, 2021).
This intervention, based on learning professional fact-checking techniques, has the advantage that it can be used when misinformation is deceptively sophisticated or difficult to detect. Despite extensive study of civic online reasoning offline, there has been little application of these techniques on social media.
The present study aims to adapt some of the strategies of civic online reasoning to social media, specifically lateral reading (leaving a website and opening new tabs in order to use the resources of the internet to learn more about a site and its claims) and click restraint (skipping past the first results of a search engine query). We test these strategies against scientific misinformation, which is an ideal testing ground for online interventions, as scientific content is often difficult for laypeople to assess.
As a complementary hypothesis, we test whether evaluation accuracy can be increased by giving participants monetary rewards for correct answers. This incentivized condition serves as a benchmark for the fact-checking techniques. In addition, the incentives allow us to test whether the motivation and attention triggered by the presence of a reward are sufficient to increase accurate responses.
Participants are shown an interactive Facebook post, which links to an article presenting information related to science, and are asked to rate how scientifically valid the content of the post is.
The scientific validity of the posts is determined through a predefined procedure based on a series of quality checks, such as whether the scientific claims have been peer-reviewed, whether the authors making the claim are competent on the specific topic, and whether the media reporting is faithful to the original article.
The post presents a title in the form of a scientific claim, and a caption from the article that elaborates on that claim. Participants are randomly assigned to one of three experimental conditions. In the control condition, participants observe and interact with the Facebook post, and are asked to rate its scientific validity on a scale from 1 to 6. The monetary incentive condition is identical to the control condition, except that participants are paid a fixed sum if their answer is correct. Finally, the pop-up condition is identical to the control condition, except that before reading the post participants also see a pop-up suggesting civic online reasoning strategies such as lateral reading and click restraint. At the end of the experiment, after participants have completed a series of control questionnaires, we debrief them on the scientific validity of the post.
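The three-condition design described above can be sketched in code. This is a minimal illustration only: the condition labels, the correctness criterion (exact match on the 1-6 scale), and the reward amount are assumptions for the sketch, not parameters reported by the study.

```python
import random

# Hypothetical sketch of the study's three-condition design.
# Condition names, the 1-6 rating scale, and the reward value
# are illustrative assumptions, not the study's actual parameters.

CONDITIONS = ["control", "incentive", "popup"]

def assign_condition(rng=random):
    """Randomly assign a participant to one of the three conditions."""
    return rng.choice(CONDITIONS)

def score_response(condition, rating, true_validity, reward=0.50):
    """Return the payout for one response.

    Only the incentive condition pays, and only when the participant's
    1-6 rating matches the predefined validity score of the post.
    """
    correct = rating == true_validity
    if condition == "incentive" and correct:
        return reward
    return 0.0
```

Under this sketch, a control-condition participant earns nothing regardless of accuracy, which is what makes the incentive condition a benchmark for motivation effects.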
In summary, the study tests whether fact-checking techniques, as opposed to monetary incentives, are effective tools for assessing the scientific validity of social media content.
This project is centred on the triangle of interactive relationships between citizens' affective political and social polarization, citizens' political distrust in the main institutions and actors of political representation, and the politics of party/elite competition, which together produce problematic dynamics and consequences for the quality, functioning, and even potential survival of contemporary liberal democracies. The study offers an innovative and comprehensive investigation of concepts such as political trust, affective polarization and the politics of party competition, and of the dynamics and interactions among them, in Spain, Portugal, Italy and two Latin American countries (Chile and Argentina). To this end, we developed new theoretical arguments building on recent methodological innovations from leading research in the US, and assembled a team of country experts from disciplines such as political science, public opinion research, political psychology, survey methodology and political communication, together with data engineers and big data experts. A further strength of this project is its multi-method approach, which combines the design and implementation of an original three-wave online panel survey, featuring innovative survey questions and embedded experiments. Furthermore, we match these individual-level data with information collected through a passive tracking application (a passive meter), which captures real individual behaviours and exposure to information received via mass electronic and social media. Finally, we use techniques of computer-aided text analysis (CATA) to analyse the sources of information to which respondents have been exposed.
The results of a survey held in summer 2022 found that the main way to verify news was using Google or another search engine. Fact-checking by reading further on the topic or finding information from experts were also popular strategies, especially for news found on social media.
This statistic gives information on hiring professionals who extended an offer to a job candidate based on what they discovered on social media. In a May 2018 survey, 33 percent of respondents who researched candidates on social media extended an offer after finding that the candidate's social media profile conveyed a professional image.
This statistic illustrates differences of opinion concerning the credibility of information provided by various sources in France in 2015, depending on respondents' socio-professional category. It reveals that people from upper socio-professional categories trusted information provided by television less than people from lower social categories (an eleven-point difference).
According to a survey by Vodus on electric vehicle ownership in Malaysia, among the main sources for information about an electric vehicle, social media was stated as the most important source by 39 percent of respondents. Car magazines and expert opinion were the fourth and fifth most important sources among those who were looking for information regarding electric vehicles.
Publishing and monetizing original content on social platforms has become easier than ever. In 2020, the vast majority of content creators worldwide were amateurs who earned money from making content part-time, while approximately 500 thousand professionals earned their living entirely from publishing content on Instagram. Instagram was by far the most popular outlet among influencers, with 30 million amateur creators monetizing content on the Facebook-owned platform that year.