RR:C19 Evidence Scale rating by reviewer:
Potentially informative. The main claims made are not strongly justified by the methods and data, but may yield some insight. The results and conclusions of the study may resemble those from the hypothetical ideal study, but there is substantial room for doubt. Decision-makers should consider this evidence only with a thorough understanding of its weaknesses, alongside other evidence and theory. Decision-makers should not consider this actionable, unless the weaknesses are clearly understood and there is other theory and evidence to further support it.
***************************************
Review:
The preprint “The Effect of Information Behavior in Media on Perceived and Actual Knowledge about the COVID-19 Pandemic” presents a study of the relationship between perceived COVID-19 threat, media usage, and perceived and actual COVID-19 knowledge. The authors show that threat perceptions are associated with a greater volume of media usage, and that this media usage is in turn associated with higher levels of perceived, but not actual, knowledge about COVID-19.
The manuscript speaks to a very important topic. How do people learn factual information about COVID-19? Does the news media play a role in this process? The null finding presented here between the volume of news exposure and actual knowledge of COVID-19 is deeply troubling and should serve as a jumping-off point for future research in other national contexts.
In this review I have been asked to evaluate the trustworthiness and reliability of the preprint. In broad strokes, I think the authors have done a reasonable job with this study, and I think the findings are very useful if interpreted with an exploratory, descriptive lens. There are severe limitations to inferring causal relationships from cross-sectional data. The authors acknowledge this and try to address these issues, but these limitations must be kept in mind when evaluating the causal claims made in the manuscript. There are further issues surrounding measurement and theoretical clarity that need to be considered as well.
Theory
I am generally unclear about the relationship between the theory, the hypotheses, and the tests that were ultimately run. The theory section does a good job of anchoring the expectation that perceived threat should influence information acquisition (threat => media volume) and, to a lesser degree, the relationship between media use and actual knowledge (media volume => actual knowledge). But the expectations for media breadth versus volume and for perceived versus actual knowledge are not developed at all, even though these relationships are ultimately tested. The hypotheses themselves do not speak to perceived knowledge, and H1 does not speak to the distinction between breadth and volume, though that distinction is tested. Are some of these tests exploratory and others confirmatory? This lack of clarity ultimately makes it hard to interpret the results, especially when the authors attempt to imbue the findings with a causal interpretation.
Measurement
The measure of COVID-19 knowledge is excellent, and I think it should be highlighted more in the main text; the descriptives alone are very interesting. I have some concerns about the measures for media exposure. The authors averaged frequencies across each media platform to establish a measure of volume and calculated Lorenz coefficients to measure breadth. The central issue is that the measure of volume used here is mechanically related to breadth, such that it captures both concepts to some degree: holding per-platform frequency fixed, a respondent who uses more platforms scores higher on volume and on breadth at the same time. I am not clear on why the authors did not use a crisper measure of media volume based on how frequently respondents encounter any news media coverage of COVID-19. And given the lack of theorizing on potential differences between breadth and volume, it is difficult to attach substantive meaning to the observed differences in the tests for breadth and volume.
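To make this mechanical coupling concrete, consider a minimal sketch in Python. The frequency vectors, the five-platform setup, and the use of the Gini coefficient (the standard summary of the Lorenz curve) are my own illustrative assumptions, not the preprint's exact coding:

    import numpy as np

    def volume(freqs):
        # Volume as I read the preprint's construction: the mean usage
        # frequency across all platforms asked about.
        return np.mean(freqs)

    def gini(freqs):
        # Gini coefficient over the platform frequencies (a Lorenz-based
        # concentration measure): 0 = usage spread evenly across platforms,
        # values near 1 = usage concentrated on a single platform.
        # Assumes at least one nonzero frequency.
        x = np.sort(np.asarray(freqs, dtype=float))
        n = len(x)
        cum = np.cumsum(x)
        return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

    # Two hypothetical respondents with the same per-platform intensity (5),
    # differing only in how many of five platforms they use.
    one_platform = [5, 0, 0, 0, 0]
    five_platforms = [5, 5, 5, 5, 5]

    print(volume(one_platform), gini(one_platform))      # 1.0 0.8 -> low volume, low breadth
    print(volume(five_platforms), gini(five_platforms))  # 5.0 0.0 -> high volume, high breadth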
Method
The authors make use of methods (i.e., propensity score matching) to account for the shortcomings of cross-sectional data as they pertain to causal inference. I understand why they chose this approach, but ultimately these methods are not a panacea.
First, matching on basic demographics is not going to address possible endogeneity: is it not possible that media usage increases threat perceptions by informing people of the risks of COVID-19? Second, there are other possible sources of confounding that should be part of the matching. The one that comes foremost to mind is political interest or sophistication; I can easily imagine such a factor being correlated with media usage and causing perceived knowledge, and there may be other sources of confounding as well. I have little confidence that media is causing perceived knowledge. Finally, recent scholarship has pointed to some serious problems with propensity score matching. King and Nielsen (2019), for instance, show that this method often increases imbalance, inefficiency, and bias. At a minimum, the authors need to show robustness to other forms of matching, such as coarsened exact matching (sketched below).
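To illustrate the kind of robustness check I have in mind, here is a bare-bones coarsened-exact-matching sketch in Python. The column names, the binary high-threat treatment indicator, and the quantile bins are all hypothetical; in practice the authors may prefer dedicated implementations such as the cem or MatchIt packages in R:

    import pandas as pd

    def cem(df, treat_col, covariates, bins=4):
        # Coarsen each covariate into quantile bins (categorical covariates
        # could be used as-is), then keep only strata that contain both
        # treated and control respondents.
        coarse = df.copy()
        for c in covariates:
            coarse[c] = pd.qcut(df[c], q=bins, labels=False, duplicates="drop")
        # Assumes a binary 0/1 treatment: a stratum is kept when both
        # treatment values appear within it.
        keep = coarse.groupby(covariates)[treat_col].transform(
            lambda t: t.nunique() == 2
        )
        return df[keep]

    # Hypothetical usage: match on demographics plus political interest,
    # then re-estimate the media-usage models on the matched sample.
    # matched = cem(survey, treat_col="high_threat",
    #               covariates=["age", "education", "political_interest"])

Exact matching within coarsened strata sidesteps the model dependence that King and Nielsen (2019) identify in propensity score matching, at the cost of discarding unmatched strata.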
The authors should drop the mediation analysis, or at least heavily caveat this analysis as a purely descriptive exercise. Not only is the explanatory variable not randomly assigned, but neither is the mediator. Potential confounding exists at both levels. For more background on the challenges of mediation analysis I recommend Bullock et al. (2010) and Imai et al. (2010).
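To see concretely why confounding of the mediator undermines the analysis, consider a toy simulation in Python. All quantities and effect sizes below are invented for illustration; in this simulated world, media volume has exactly zero effect on perceived knowledge, yet a naive mediation-style regression recovers a sizable mediator coefficient:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Unmeasured confounder (think: political interest) drives both the
    # mediator (media volume) and the outcome (perceived knowledge).
    u = rng.normal(size=n)
    threat = rng.normal(size=n)
    media = 0.5 * u + 0.1 * threat + rng.normal(size=n)  # mediator, confounded by u
    perceived = 0.5 * u + rng.normal(size=n)             # true media effect = 0

    # Second-stage mediation regression: outcome on treatment and mediator.
    X = np.column_stack([np.ones(n), threat, media])
    beta, *_ = np.linalg.lstsq(X, perceived, rcond=None)
    print(dict(zip(["intercept", "threat", "media"], beta.round(2))))
    # The "media" coefficient lands near 0.2 despite a true effect of exactly 0.

This is exactly the scenario Bullock et al. (2010) warn about: without random assignment of the mediator, the estimated indirect effect is not identified.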
A small point: on page 9, the authors write, “We further excluded participants that showed inconsistent response behavior within the questionnaire.” An example is given, but a full accounting of the exclusion criteria is needed.
Overall Assessment
Above I outline some concerns with the current preprint. These concerns relate mainly to interpreting this evidence as causal and confirmatory, an angle that the authors lean into deliberately throughout the paper. I do think this preprint provides a lot of value if it is interpreted as a descriptive, exploratory exercise, and there is absolutely nothing wrong with research that is descriptive and exploratory. The finding of a null association between media volume and actual knowledge is in and of itself very interesting (and rather buried in this paper), and it should spur further research. Reorienting the manuscript accordingly would make this preprint suitable for publication.
References
Bullock, J. G., Green, D. P., & Ha, S. E. (2010). Yes, but what's the mechanism? (Don't expect an easy answer). Journal of Personality and Social Psychology, 98(4), 550–558.
Imai, K., Keele, L., & Tingley, D. (2010). A general approach to causal mediation analysis. Psychological Methods, 15(4), 309–334.
King, G., & Nielsen, R. (2019). Why propensity scores should not be used for matching. Political Analysis, 27(4), 435–454.