RR:C19 Evidence Scale rating by reviewer:
Potentially informative. The main claims made are not strongly justified by the methods and data, but may yield some insight. The results and conclusions of the study may resemble those from the hypothetical ideal study, but there is substantial room for doubt. Decision-makers should consider this evidence only with a thorough understanding of its weaknesses, alongside other evidence and theory. Decision-makers should not consider this actionable, unless the weaknesses are clearly understood and there is other theory and evidence to further support it.
Summary and strengths
The true burden of COVID-19 infection at any point during the pandemic has always been difficult to estimate. Research that clarifies the true burden has become even more urgent since the advent and popularity of at-home testing in early 2022, wherein neither tests nor results are reported. Public health officials have been left with essentially no method of reliably calculating the true burden of infection, and thus no reliable data upon which to activate public health responses, which carry their own economic, productivity, and political costs. The preprint by Qasmieh and colleagues uses an imperfect but convenient and common sampling method to estimate testing behaviors and period prevalence during the 2022 Omicron BA.2 wave in New York. The authors use survey weighting methods to correct for a limited set of demographic differences between their respondents and the Census American Community Survey (ACS) population.
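To illustrate the general class of technique (not the authors' exact procedure), here is a minimal raking (iterative proportional fitting) sketch in Python; the respondent data, the two weighting variables, and the "ACS" margins are all hypothetical:

```python
import numpy as np

def rake(weights, cats, margins, n_iter=50):
    """Raking / iterative proportional fitting: rescale respondent weights
    so weighted category shares match population margins (e.g. from ACS).
    cats: one label array per weighting variable.
    margins: one {category: population share} dict per variable."""
    w = weights.astype(float).copy()
    for _ in range(n_iter):
        for c, m in zip(cats, margins):
            total = w.sum()
            for level, share in m.items():
                mask = (c == level)
                w[mask] *= (share * total) / w[mask].sum()
    return w

# Hypothetical sample over-representing older, college-educated respondents
rng = np.random.default_rng(1)
n = 5000
age65 = rng.binomial(1, 0.35, n)       # 35% of sample is 65+
college = rng.binomial(1, 0.60, n)     # 60% college-educated
w = rake(np.ones(n),
         [age65, college],
         [{0: 0.82, 1: 0.18},          # assumed population margins
          {0: 0.55, 1: 0.45}])
```

After raking, the weighted shares match the assumed margins; a real application would rake over more variables and typically trim extreme weights.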
The authors calculate a 31-fold difference between actual BA.2 infections and the cases reported through official public health sources at the same time. This is slightly high, though not too far off an expected multiplier from a back-of-the-envelope calculation: combining the under-ascertainment estimate from seroprevalence studies early in the pandemic (pre-OTC/rapid antigen test (RAT)), a NYC multiplier of 6x (https://academic.oup.com/cid/article/73/10/1831/6152134), with an estimated RAT:NAAT ratio of 5:1 (5x) yields roughly a 30x multiplier. There is no authoritative source for the RAT:NAAT ratio, but during Omicron it may have ranged from 3:2 to 5:1 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8979595/); John Brownstein’s group at Harvard is doing some work in this area.
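The multiplier arithmetic above is simple enough to spell out; both inputs are rough assumptions from this review, not measured quantities:

```python
# Back-of-the-envelope under-ascertainment multiplier (assumption-laden)
sero_multiplier = 6    # ~6x infections per reported case, from pre-OTC NYC seroprevalence
rat_per_naat = 5       # assumed 5:1 ratio of unreported at-home RATs to reported NAATs
combined = sero_multiplier * rat_per_naat
print(combined)        # 30, in line with the study's 31-fold estimate
```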
Selection bias is a commonly encountered challenge with telephone-based sampling and has worsened over time, as borne out in erroneous polling projections over the past several U.S. election cycles; this effect may be stronger for surveys on politically controversial topics, which sadly have come to include COVID-19, with educational attainment as a key confounder (https://www.pewresearch.org/methods/2017/05/15/what-low-response-rates-mean-for-telephone-surveys). The contact mode (cell/landline) response rates could also introduce selection bias; other studies with similar designs have found that cell phone respondents answering internet surveys are more likely to be younger, higher income, better educated, and white (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4994958/). It would have been much more transparent if the authors had included the response rate to their text messages and robocalls -- overall, by demographics, and by contact mode (cell/landline) -- and had probed the extent to which their main outcomes differed by communication modality, if at all. It is reasonable to imagine that persons with recent relevant experience with COVID, including infection and testing, would be more interested in participating in a survey about their experience; that said, this bias is at least partially mitigated because the topic of COVID is not mentioned until the first question is asked.
The case definition used by the authors is not consistent with official COVID-19 case definitions (https://ndc.services.cdc.gov/case-definitions/coronavirus-disease-2019-2020/). For example, the authors appear to include “possible” cases of COVID in their prevalence estimates (this is not always 100% clear throughout), and their definition of “possible” is less stringent than the CDC definition of “probable,” which uses a more specific combination of clinical symptoms. Caution is therefore warranted when making direct comparisons of this study’s rates or percentages to official sources like the NYC Department of Health data, which do not include such “possible” cases.
It is sometimes unclear in various tables which values are weighted and which are unweighted -- for example, the methods text states that the survey weights were applied to both the sample characteristics and the prevalence estimates, but in Table 1 it appears that only the prevalence estimates were weighted. Better in-table footnotes about weighting would have been appreciated throughout.
There are also no survey weighting factors to correct for key confounding variables such as vaccination status.
Regarding the ACS estimates: while weighting the survey population was an important way to increase the robustness of the results, the ACS estimates available only go up to 2019 and are therefore pre-pandemic. During the pandemic, large migrations of the population occurred between urban, suburban, and rural areas, especially in New York (https://comptroller.nyc.gov/reports/the-pandemics-impact-on-nyc-migration-patterns/), which could have affected the accuracy of the weights used.
All results described in this study are bivariate associations. The study would have benefitted from some kind of adjustment model, such as a survey-weighted logistic regression with an outcome of a positive COVID test. This would allow simultaneous adjustment for the various variables shown to be statistically associated with higher prevalence, and would help characterize which explanatory factors may in fact be driving other apparent ones found in the simple bivariate cross-tabulations. Although such a model is not strictly necessary, and might be fit for a future paper, the lack of such adjustment makes the types and directionality of the associations reported in this paper confusing and difficult to interpret.
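As a sketch of the suggested approach, here is a minimal survey-weighted logistic regression fit by Newton-Raphson in plain NumPy; the covariate, outcome, and weights are entirely simulated, and a real analysis would use the study's variables plus design-based standard errors (e.g. statsmodels GLM with weights, or R's survey::svyglm):

```python
import numpy as np

def weighted_logit(X, y, w, n_iter=25):
    """Weighted logistic regression via Newton-Raphson (weighted IRLS).
    w: per-respondent survey weights; returns the coefficient vector."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # fitted probabilities
        grad = X.T @ (w * (y - p))                   # weighted score
        hess = X.T @ (X * (w * p * (1 - p))[:, None])  # weighted information
        beta += np.linalg.solve(hess, grad)
    return beta

# Simulated example: one binary covariate raises infection odds
rng = np.random.default_rng(0)
n = 4000
grp = rng.binomial(1, 0.5, n)                 # hypothetical demographic flag
X = np.column_stack([np.ones(n), grp])
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.2 + 0.8 * grp))))  # true log-odds ratio 0.8
w = rng.uniform(0.5, 2.0, n)                  # stand-in survey weights
beta = weighted_logit(X, y, w)
```

With several covariates in X, each coefficient is adjusted for the others, which is exactly what the bivariate cross-tabulations cannot provide.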