RR:C19 Evidence Scale rating by reviewer:
What I like about this paper
I will start by briefly stating what I like about the paper. The paper addresses the important topic of vaccine spillover effects. It uses several original datasets to approach this topic from a variety of angles, including a panel study, two observational studies, and a survey experiment. As a result, I believe the claims are very well supported by the data and methods used. I also think that this manuscript does a good job of positioning the authors’ argument within the broader theoretical and empirical findings of the (ever-expanding) COVID-19 literature.
What I think could use some work
Here, I will provide a few bullet points that I believe the authors should address before publication, in no particular order. Some of these comments pertain to the theoretical argument, some to the methods, and others to the implications of this work. All of these are meant to push the authors and help improve the manuscript.
The biggest issue with the paper is that the authors have a lot to work with, and that, at times, results in the paper reading like a laundry list of findings stitched together. The paper would benefit immensely from streamlining the discussion of the main research questions and, at the end, the findings that answer those questions (and hypotheses).
I like how the authors measure vaccine skepticism, flu shot intention, and attitudes toward other, “hypothetical” vaccines. I would like the authors to state exactly why these were picked. Furthermore, there used to be a Lyme disease vaccine on the market, LYMErix, before the makers were sued and pulled it from the market. Do the authors think that this might play a role in shaping some of the results?
The authors note that, “For instance, the COVID-19 vaccine could be a stand-in for vaccines generally, or it could be a special case that differs from other vaccines. In this instance, we expect the former to have spillover effects onto other vaccines and not the latter.” But could it not be the latter, given that the most popular COVID vaccines were essentially the first of their kind (mRNA)?
This is more of a big-picture comment that I do not necessarily think the authors need to address per se, but I think it highlights the need to tighten the theoretical justification for the study. Is the process through which COVID vaccine attitudes spill over to other vaccines this psychological “spillover,” or is it simple priming? What would the difference be?
The authors say that they expect “arguments concerning COVID-19 vaccine mandates that are less specific to the COVID-19 vaccines themselves should spill over onto childhood vaccine attitudes, while arguments that justify a stance for or against the policy using considerations specific to COVID-19 (such as concerns over novelty) will not.” Why should that be the case? Would it be implausible to assume that a mere mention of vaccines and vaccine mandates would simply make vaccines and vaccine mandates, regardless of which vaccine, more top-of-mind considerations for respondents?
I would like the authors to provide more details on PureSpectrum, as I have never heard of this data vendor.
I am worried about ceiling effects in the experiment. COVID mandates were already quite salient, so can the authors address my concern by showing that the treatment actually succeeded in elevating the salience of the mandates?
I believe that footnote 5 would be more informative as a table (either in the body of the paper or in the appendix).
It should be stated more explicitly in the paper that the observational study of the hypothetical vaccines is separate and relies on entirely different data (Lucid). The authors mention it, but it could be clearer.
Figures 1 and 2 (and 3) would probably be easier to visually interpret (and see the polarization of responses) if the partisans were plotted on the same graph instead of in different panels.
Given the importance of partisanship, I would include the version of Figure 5 with partisan breakdowns, instead of the general-population version, in the main manuscript. That is the more substantively important finding.
I think that the authors could be clearer in articulating their findings, how those findings contribute to the existing work, and what they mean. For example, when discussing the results of the experiment (and in the subsequent section of the conclusion), I think it would be good to note how difficult it is, in our current media landscape, to talk about broad population-level effects, especially on a politically salient and polarized issue. Is it reasonable even to speculate that the right-wing media landscape will talk about COVID “requirements” versus “mandates,” when its sole purpose is to motivate its base by amplifying the language the base uses?