RR:C19 Evidence Scale rating by reviewer:
Potentially informative. The main claims made are not strongly justified by the methods and data, but may yield some insight. The results and conclusions of the study may resemble those from the hypothetical ideal study, but there is substantial room for doubt. Decision-makers should consider this evidence only with a thorough understanding of its weaknesses, alongside other evidence and theory. Decision-makers should not consider this actionable, unless the weaknesses are clearly understood and there is other theory and evidence to further support it.
“The Ohio Vaccine Lottery and Starting Vaccination Rates” gives two estimates of the effect of Ohio’s lottery-based vaccine incentive program on starting vaccination rates. The first, a difference-in-differences estimate of the effect in counties bordering Indiana, is based on comparison to Indiana’s Ohio-bordering counties. The second, a synthetic control estimate of the effect on Ohio as a whole, is based on comparison to a control constructed as a weighted average of counties in Pennsylvania, Michigan, and Indiana. Both estimates suggest an increase of roughly 70 vaccinations per 10,000 residents in the first two weeks of the program. I am not confident that these effect estimates are accurate.
A plot of average vaccination rates over time in the Indiana and Ohio border counties considered (Figure 1) shows the Ohio counties’ rates increasing relative to Indiana’s in the month preceding the onset of Ohio’s incentive program. If we extrapolate this trend forward, it appears to account for roughly half of the growth the diff-in-diff estimate attributes to the program. This is perhaps most visible in the event study plot (Figure 2, Panel A).
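To make the pre-trend concern concrete, here is a minimal sketch with made-up numbers (not the study’s data) of how a linear pre-period trend in the treated-minus-control gap can inflate a naive diff-in-diff estimate, and how extrapolating that trend forward changes the picture:

```python
import numpy as np

# Hypothetical weekly first-dose counts per 10,000 residents.
# The "Ohio" series trends up relative to "Indiana" even before onset;
# these numbers are purely illustrative.
weeks   = np.arange(-4, 3)                       # weeks relative to program onset
ohio    = np.array([40., 45., 50., 55., 60., 95., 100.])
indiana = np.array([40., 42., 44., 46., 48., 50., 52.])

pre  = weeks < 0
post = weeks >= 1                                # the two post-onset weeks

# Naive diff-in-diff: change in the Ohio-Indiana gap, post vs. pre.
gap = ohio - indiana
did = gap[post].mean() - gap[pre].mean()         # 42.0 here

# Extrapolate the pre-period gap trend forward and subtract it.
slope = np.polyfit(weeks[pre], gap[pre], 1)[0]   # gap grows 3/week pre-onset
trend_component = slope * (weeks[post].mean() - weeks[pre].mean())
adjusted = did - trend_component                 # 30.0 here
```

In this toy example the pre-trend accounts for a little over a quarter of the naive estimate; the review’s point is that in Figures 1 and 2 the analogous share looks closer to half.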
The synthetic control approach used to estimate the state-wide effect can work around problems like this in some cases, by constructing a control that tracks the trends we see in Ohio before onset. But it does not always succeed, and the information given does not make it clear that it has here. The control used is a population-weighted average of individual synthetic controls constructed for each Ohio county. While Figure 3 shows how several Ohio counties track their synthetic controls, there is no such figure for the aggregate comparison the state-wide estimate is based on: all of Ohio vs. this average of county-specific synthetic controls. It would also help to see the aggregate synthetic control weights, so we know how sensitive this estimate is to what we know about individual control counties. That dropping Pennsylvania from the set of controls produced a large drop in the estimate (as discussed in footnote 13) suggests the estimate may be very sensitive to what happened in some PA county or counties. On that note, it would be good to know what information the 1% adjustment for PA’s use of the J&J vaccine is based on. If that adjustment reflects state-level aggregate use, and that does not match what is happening in the PA counties that contribute most to the synthetic control, the way first doses are inferred in PA could be an issue.
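As a rough illustration of what reporting the aggregate weights would involve, here is a minimal synthetic control fit on toy data (not the study’s): nonnegative weights summing to one, chosen to make a weighted average of candidate control counties match the treated unit’s pre-period path. A coarse grid search over the simplex stands in for the usual constrained optimizer:

```python
import numpy as np
from itertools import product

# Toy pre-period outcome paths (rows = weeks) for one treated unit and
# three candidate control counties; illustrative numbers only.
treated  = np.array([10., 12., 15., 18.])
controls = np.array([[ 9., 11., 20.],
                     [11., 13., 22.],
                     [14., 16., 24.],
                     [17., 18., 27.]])     # shape (weeks, counties)

def loss(w):
    """Squared pre-period distance between treated and weighted controls."""
    return float(np.sum((treated - controls @ w) ** 2))

# Search a 0.01-step grid on the 2-simplex for the best weight vector.
best_w, best_loss = None, np.inf
for a, b in product(range(101), range(101)):
    if a + b > 100:
        continue
    w = np.array([a, b, 100 - a - b]) / 100.0
    l = loss(w)
    if l < best_loss:
        best_w, best_loss = w, l

# Reporting best_w shows which control counties drive the fit, and hence
# how sensitive the estimate is to events in any one of them.
```

If one county carries most of the weight, the estimate inherits that county’s idiosyncrasies, which is exactly the worry the footnote-13 Pennsylvania sensitivity raises.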
The placebo test figure here, which shows a shift in the distribution of synthetic control estimates for Ohio counties vs. control counties, is somewhat difficult to interpret, especially if your interest is in the statewide effect: what it implies about aggregates depends on how county population correlates with county-level treatment effects. It might make sense to include a test that targets the statewide estimate itself, more like the placebo tests proposed by Abadie, Diamond, and Hainmueller for synthetic control estimates, in which a single estimate is situated within a distribution that we can loosely treat as a proxy for its distribution under the null. That proxy could be the distribution of synthetic control estimates obtained by permuting treatment at the county or region level, although spillovers may complicate this.
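The permutation-style placebo suggested above can be sketched as follows, simplified to reassigning “treatment” to each control unit in turn and recomputing a gap estimate each time (toy data, with a simple post-minus-pre change standing in for the full synthetic control fit; spillovers, as noted, would complicate this in practice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy panel: post-minus-pre outcome changes for one treated unit and
# many control units; illustrative numbers, not the study's data.
n_units = 40
changes = rng.normal(loc=5.0, scale=2.0, size=n_units)
treated_idx = 0
observed_gap = changes[treated_idx] - np.delete(changes, treated_idx).mean()

# Placebo distribution: pretend each control unit was treated instead
# and compute the same gap statistic for it.
placebo_gaps = np.array([
    changes[i] - np.delete(changes, i).mean()
    for i in range(n_units) if i != treated_idx
])

# Where the observed estimate sits in the placebo distribution gives a
# loose permutation-style p-value for the null of no effect.
p_value = (np.abs(placebo_gaps) >= abs(observed_gap)).mean()
```

The appeal of this style of test is that it situates the single statewide estimate in a null distribution directly, rather than requiring the reader to reason from a shift in the distribution of county-level estimates.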