Review 1: "Developing an accuracy-prompt toolkit to reduce COVID-19 misinformation online"

Paying attention to the accuracy of information can increase sharing discernment on social media, which reduces the spread of misinformation online. Both reviewers found the paper potentially informative, but one reviewer raised concerns about the claims made in light of its methodology.

Published on May 07, 2021
This Pub is a Review of
Developing an accuracy-prompt toolkit to reduce COVID-19 misinformation online

Recent research suggests that shifting users’ attention to accuracy increases the quality of news they subsequently share online. Here we help develop this initial observation into a suite of deployable interventions for practitioners. We ask (i) how prior results generalize to other approaches for prompting users to consider accuracy, and (ii) for whom these prompts are more versus less effective. In a large survey experiment examining participants’ intentions to share true and false headlines about COVID-19, we identify a variety of different accuracy prompts that successfully increase sharing discernment across a wide range of demographic subgroups while maintaining user autonomy.

Research questions

• There is mounting evidence that inattention to accuracy plays an important role in the spread of misinformation online. Here we examine the utility of a suite of different accuracy prompts aimed at increasing the quality of news shared by social media users.
• Which approaches to shifting attention towards accuracy are most effective?
• Does the effectiveness of the accuracy prompts vary based on social media user characteristics? Assessing effectiveness across subgroups is practically important for examining the generalizability of the treatments, and is theoretically important for exploring the underlying mechanism.

Essay summary

• Using survey experiments with N = 9,070 American social media users (quota-matched to the national distribution on age, gender, ethnicity, and geographic region), we compared the effect of different treatments designed to induce people to think about accuracy when deciding what news to share. Participants received one of the treatments (or were assigned to a control condition), and then indicated how likely they would be to share a series of true and false news posts about COVID-19.
• We identified three lightweight, easily implementable approaches that each increased sharing discernment (the quality of news shared, measured as the difference in sharing probability of true versus false headlines) by roughly 50%, and a slightly more lengthy approach that increased sharing discernment by close to 100%. We also found that another approach that seemed promising ex ante (descriptive norms) was ineffective. Furthermore, gender, race, partisanship, and concern about COVID-19 did not moderate effectiveness, suggesting that the accuracy prompts will be effective for a wide range of demographic subgroups. Finally, helping to illuminate the mechanism behind the effect, the prompts were more effective for participants who were more attentive, reflective, engaged with COVID-related news, concerned about accuracy, college-educated, and middle-aged.
• From a practical perspective, our results suggest a menu of accuracy prompts that are effective in our experimental setting and that technology companies could consider testing on their own services.

RR:C19 Evidence Scale rating by reviewer:

  • Reliable. The main study claims are generally justified by its methods and data. The results and conclusions are likely to be similar to the hypothetical ideal study. There are some minor caveats or limitations, but they would/do not change the major claims of the study. The study provides sufficient strength of evidence on its own that its main claims should be considered actionable, with some room for future revision.



This study analyses the impact of users’ attention to accuracy on their sharing discernment of COVID-19 news. The study is innovative in that it focuses on new ideas and solutions for reducing COVID-19 misinformation online. It does so using an experiment embedded within a survey (a survey experiment) administered to social media users.

The use of a survey experiment is appropriate for establishing the causal relationships in this study. There are, however, a few concerns the authors should address. First, randomization is a critical step in any experiment: the study should explain how the 9,070 respondents were randomly assigned to the experimental conditions. Second, the study conducted five waves of data collection (and states that two treatments were administered in multiple waves), but no rationale is given for each wave. Third, the researchers administered eight experimental treatments. Why do the last two treatments (no. 7 and no. 8) involve multiple exposures? What led the researchers to combine ‘partisan norms’ with selected conditions only, and in a different order? Fourth, experiments are typically lauded for high internal validity, whereas surveys are lauded for high external validity. How does the study balance the two in this survey experiment? How do the authors ensure that differences between groups are caused by the manipulation rather than by differences between the individuals in those groups? These issues should be addressed in this study.

In addition, this study briefly states that sharing discernment, or ‘the quality of news shared’, is measured as the difference in sharing probability between true and false headlines. The measure is plausible. However, deciding what is true and what is false is very tricky nowadays, precisely because fake news aims to mislead people into believing that what is false is true. Hence, paying attention to accuracy alone may not lead to sharing discernment, because participants may not know how to distinguish truth from lies. Perhaps this explains why the study found a disconnect between accuracy judgments and sharing intentions. The researchers should consider this factor when interpreting the results of the study.
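To make the measure concrete: sharing discernment, as the paper defines it, is simply the share rate for true headlines minus the share rate for false headlines. The sketch below illustrates this computation on hypothetical binary sharing decisions (the function name and all data values are illustrative, not from the study).

```python
# Illustrative sketch of the paper's "sharing discernment" measure:
# the difference in sharing probability between true and false headlines.
# All values below are hypothetical, not study data.

def sharing_discernment(true_shares, false_shares):
    """Mean share rate for true headlines minus mean share rate for false ones."""
    p_true = sum(true_shares) / len(true_shares)
    p_false = sum(false_shares) / len(false_shares)
    return p_true - p_false

# Hypothetical sharing decisions (1 = would share, 0 = would not)
control = sharing_discernment([1, 0, 1, 1], [1, 0, 1, 0])  # 0.75 - 0.50 = 0.25
treated = sharing_discernment([1, 1, 1, 1], [0, 0, 1, 0])  # 1.00 - 0.25 = 0.75
```

Note that a participant can score high on this measure while still sharing some false headlines; it captures relative, not absolute, quality of sharing, which is one reason the reviewer's concern about participants' ability to tell truth from falsehood matters.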

It is important to note that priming experiments are difficult, particularly because the researchers cannot be certain that the prime affects subjects as intended. The researchers should explain how they overcame or minimized the problems associated with priming in their study. Scholars (e.g., Diaz, Grady & Kuklinski, 2020) suggest embedding priming manipulations within a factorial experiment to ensure information equivalence and to avoid confounding the prime with associated factors. I can see the possibility of designing a factorial experiment for this study: the treatment groups could be arranged according to ‘without additional information provided’ and ‘with some information related to accuracy provided’. A factorial design would allow the researchers to evaluate interaction effects and draw better conclusions from the study.
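The factorial suggestion above can be sketched numerically. In a 2×2 design crossing an accuracy prime (present/absent) with additional accuracy information (present/absent), the interaction effect is the difference-in-differences of the four cell means. The cell values below are hypothetical placeholders, purely to show the arithmetic the reviewer has in mind.

```python
# Hedged sketch of a 2x2 factorial analysis crossing "accuracy prime"
# with "additional accuracy information provided".
# Cell means are hypothetical discernment scores, not study data.

cells = {
    # (primed, info_provided): mean sharing discernment in that cell
    (0, 0): 0.10,
    (0, 1): 0.15,
    (1, 0): 0.20,
    (1, 1): 0.40,
}

# Main effect of the prime: averaged over the information conditions
prime_effect = ((cells[(1, 0)] + cells[(1, 1)])
                - (cells[(0, 0)] + cells[(0, 1)])) / 2

# Interaction (difference-in-differences): does the prime work better
# when accuracy-related information is also provided?
interaction = ((cells[(1, 1)] - cells[(0, 1)])
               - (cells[(1, 0)] - cells[(0, 0)]))
```

A nonzero interaction in such a design would indicate that the prime's effect depends on the accompanying information, which is exactly the kind of confound-versus-mechanism question the reviewer argues a factorial layout can disentangle.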

The study is novel and has great potential for publication.
