RR:C19 Evidence Scale rating by reviewer:
Potentially informative. The main claims made are not strongly justified by the methods and data, but may yield some insight. The results and conclusions of the study may resemble those from the hypothetical ideal study, but there is substantial room for doubt. Decision-makers should consider this evidence only with a thorough understanding of its weaknesses, alongside other evidence and theory. Decision-makers should not consider this actionable, unless the weaknesses are clearly understood and there is other theory and evidence to further support it.
This study reports that people’s support for the continuation of government-imposed Covid-19 restrictions - the ‘new normal’ practices - correlates with their estimates of certain risks associated with the virus and of the scientific consensus around these topics. Furthermore, people tended to rate both these risks and the consensus about them as higher than is actually supported by the current scientific literature.
In my view the investigation addresses an important question, namely whether the public’s perceptions of Covid-19 risks are accurate and how these risk perceptions may relate to people’s support for government measures such as wearing masks. I would like to commend the author for giving voice to the concern that publicity around Covid-19 risks may overshoot the target and lead the public to make misinformed choices. I believe that, on a personal level, it is a courageous thing to raise such important concerns in a polarized and aggressive public debate.
That being said, I have a number of concerns about the study that I think call for caution in interpreting the findings. The study claims that people overestimated Covid-19-related risks. The data the study presents to support this are based on a fact quiz of sorts: respondents were asked to estimate a set of numbers and percentages (Table 10), such as the percentage of Covid-19 deaths that were children or the percentage of people who recover without medical intervention. The result joins a wealth of literature already showing that people are notoriously poor at estimating quantities and that these estimates tend to be biased by a range of cognitive and emotional factors.
My main concern about this part of the study (and other parts with similar logic) is that respondents were tested on their factual knowledge not about the impact of Covid-19 on society as a whole but only about its impact on healthy, non-elderly populations. This is mentioned in the title but not in the abstract, and the rationale for it is not made sufficiently clear. The manuscript often fails to add this important specification; e.g., the abstract states that “people over-estimate Covid-19 risks” without specifying that they only over-estimated the Covid-19 risks to healthy, non-elderly populations. This is crucial because public pandemic policy is determined much more strongly by how Covid-19 affects the (much) older population. Put bluntly, what matters is not so much whether a young healthy adult has symptoms, but whether the influx of elderly patients overwhelms the intensive-care system. What is more, previous questionnaire data (Rothwell & Desai 2020) confirm that people underestimate the impact of Covid-19 on the population below 65 but overestimate its impact on those 65 and older.

Why was the impact of Covid-19 on the 65+ population omitted from the questionnaire in the present study, given that this impact is much more relevant for government health restrictions? And what about other facts that are crucial for policy, and that may easily be underestimated, such as: if there are 100 new cases per day now, and no social distancing restrictions whatsoever are in place, after how many days would there be 10,000 new cases per day? Or: at what percentage of the population affected by Covid-19 would the US health system be overwhelmed and forced to turn people away from ICUs? These data points would presumably be much more relevant for policy decisions, and their omission from the current questionnaire means that the results presented are partial at best and could be misleading to a reader who does not verify the original sources.
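As a rough illustration of why such a growth question is policy-relevant, it can be answered with simple exponential arithmetic. The doubling time below is purely hypothetical, chosen for illustration; it is not a figure from the study under review:

```python
import math

def days_to_reach(start_daily: float, target_daily: float, doubling_days: float) -> float:
    """Days for the daily case count to grow from start_daily to target_daily,
    assuming simple unmitigated exponential growth with a fixed doubling time
    (a hypothetical illustration, not the study's model)."""
    return doubling_days * math.log2(target_daily / start_daily)

# With an illustrative doubling time of 3 days, going from 100 to 10,000
# daily cases takes about 3 * log2(100) ≈ 20 days.
print(round(days_to_reach(100, 10_000, 3), 1))
```

Under that assumed doubling time, daily cases grow a hundred-fold in roughly three weeks - the kind of magnitude that drives restriction decisions, yet one the questionnaire never probes.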
Thus, I think the results from the present study should not be taken as a representative picture of how people view Covid-19 risks.
The study then claims that this supposed overestimation of Covid-19 risk is correlated with people’s opinion about policy: the ‘new normal’. I believe the data support this correlational claim. However, care should be taken not to interpret this link as causal. In other words, one might be tempted to conclude that people’s opinion on ‘new normal’ policy is biased by incorrect, exaggerated knowledge of the facts of Covid-19. In addition to the concerns I raised above about how that knowledge was tested, such causal conclusions simply cannot be drawn from the current data. It is equally plausible that people’s support (or lack thereof) for the ‘new normal’ is in fact driven by their estimate of the Covid-19 risk to the elderly population - an epidemiologically more relevant data point, which was not tested here - and that this estimate spills over to bias risk estimates for younger cohorts. Based on the current data, we simply do not know what respondents’ opinions were based on. Although I believe the author does not make such a causal claim explicitly, I think it is important to address this to avoid readers inadvertently taking away that message.
In sum, I think this study courageously addresses a potential blind spot in how our societies handle Covid-19. It presents an interesting dataset showing that people’s understanding of a selective part of the data surrounding Covid-19 is biased in a way that correlates with their willingness to support ongoing restrictions. Although I think the data are informative, there are crucial interpretation pitfalls that should be explicitly avoided - especially because a number of interest groups may be all too willing to jump to such stronger, causal conclusions.
In research that directly concerns societal issues, funding sources should be fully disclosed, as should any other potential conflicts of interest. I could not find this information in the manuscript.