This preprint explores important issues regarding right-wing political movements and their impacts on COVID-19 cases; however, both reviewers raise concerns about the theoretical and analytic approaches used.
RR:C19 Evidence Scale rating by reviewer:
Potentially informative. The main claims made are not strongly justified by the methods and data, but may yield some insight. The results and conclusions of the study may resemble those from the hypothetical ideal study, but there is substantial room for doubt. Decision-makers should consider this evidence only with a thorough understanding of its weaknesses, alongside other evidence and theory. Decision-makers should not consider this actionable, unless the weaknesses are clearly understood and there is other theory and evidence to further support it.
The manuscript at hand investigates how the FPÖ's public COVID-19 stance affected citizens' willingness to follow COVID-19 rules. It is argued that FPÖ voters followed the rules while the party supported the restrictions, but violated them after the FPÖ publicly declared its opposition. Patrick Mellacher utilizes district-level data to show that FPÖ vote shares correlate with higher numbers of deaths after the party's policy shift. In addition, the author extends the classical SIRD model to simulate the behavior of different groups.
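For readers unfamiliar with the baseline being extended: the classical SIRD model tracks susceptible, infected, recovered, and deceased compartments. The following is a generic, minimal discretization for illustration only, not the author's implementation; all parameter values are invented for the sketch:

```python
def sird_step(S, I, R, D, beta, gamma, mu, N, dt=1.0):
    """One Euler step of the classic SIRD compartment model.
    beta: transmission rate, gamma: recovery rate, mu: death rate."""
    new_inf = beta * S * I / N * dt   # new infections this step
    new_rec = gamma * I * dt          # recoveries this step
    new_dead = mu * I * dt            # deaths this step
    return (S - new_inf,
            I + new_inf - new_rec - new_dead,
            R + new_rec,
            D + new_dead)

def simulate(days, N=100_000, I0=10, beta=0.3, gamma=0.1, mu=0.01):
    """Run the model for a number of days from I0 initial infections.
    Parameter values here are purely illustrative."""
    S, I, R, D = float(N - I0), float(I0), 0.0, 0.0
    for _ in range(days):
        S, I, R, D = sird_step(S, I, R, D, beta, gamma, mu, N)
    return S, I, R, D
```

The manuscript's extension, as I understand it, amounts to letting groups of agents (e.g., partisans of different parties) carry different effective contact or compliance parameters rather than a single population-wide beta.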
This review is written through the lens of an empirically working political scientist, focusing on causal inference. Overall, the manuscript is interesting and has a lot to offer. It is an enjoyable read and a neat empirical investigation of an interesting working hypothesis.
However, I believe the manuscript could be much stronger in terms of theory and empirical evidence.
First, I believe the definition of populism is quite misleading. There is a huge literature on populism with plenty of definitions, but the definition used here neglects them. Populism has been conceptualized as an ideology (Mudde 2004), a discourse (Heinisch and Mazzoleni 2017; Jagers and Walgrave 2007), a style (Moffitt and Tormey 2014), an organizational feature (Kenny 2018; Weyland 2001), or a set of ideas (Hawkins and Rovira Kaltwasser 2018). Given the research question, the set-of-ideas/ideology conception of populism seems most plausible and best suited to understanding the problem. In this view, populism comprises two to three aspects: anti-elitism, people-centrism, and a Manichean outlook (Hawkins and Rovira Kaltwasser 2018) or popular sovereignty (Mudde and Rovira Kaltwasser 2017).
This brings the reader to the next question: what is it about populism that is relevant here? The manuscript's theory is an argument about party cues and has little to do with populism. While populism could offer an interesting explanation, the article's core argument does not build on it; instead, the author formulates an argument about party cues. There is a strong literature on how partisanship affects citizens' behavior, yet classic studies such as Campbell et al. (1960) are by and large neglected. Kam (2005), Bechtel et al. (2015), and Aaroe (2012) are some of the excellent and important studies in this field.
The study then argues that the U-turn performed by the FPÖ led to a substantial change in COVID-19 behavior. Readers may wonder who performed a U-turn and why. While the FPÖ's policy change was the most radical and obvious, other opposition parties also changed course and withdrew support for COVID-19 measures; the SPÖ and NEOS, for example, also changed their views and criticized the government. It would therefore make sense to examine whether the phenomenon is a function of FPÖ partisanship and strongholds, or whether the findings are a function of opposition vote share more generally. This could be explored in additional robustness tests.
This brings me to another issue with the theory. The argument operates on the individual level: depending on partisanship, individuals change their behavior and their compliance with COVID-19 guidelines and rules. Yet the empirical test is conducted on an aggregated level. The manuscript would benefit from an open discussion of this mismatch. Other data, such as the Austrian Corona Panel Project (Kittel et al. 2020b, 2020a), would allow the author to trace the effects on the individual level and to complement individual-level data with contextual factors. According to the theory, we would expect partisanship to explain the behavior.
The lack of guiding theory affects the clarity and credibility of the empirical analyses. While all the included variables are plausible by themselves, a clear theoretical discussion of why one would include them and how the relationship should be modeled is absent.
Austria is certainly an interesting case. The literature is too US-centric, and understanding the relationship between party cues and behavior beyond the US is important. However, this is not fully discussed in the manuscript; existing work, e.g., by Eberl and colleagues (2020), discusses this in more detail and may offer some inspiration.
Moving on to the empirics, the data are interesting. However, several aspects require more discussion.
Starting with a very general point, why was the sample split into two groups? One may wonder why the U-turn was not modeled directly, i.e., by including an interaction term between a pre/post U-turn indicator and vote share. This would be more convincing because one could directly compare the two phases and demonstrate the differences more clearly. I understand that, mathematically speaking, this does not make much of a difference: the sample split de facto interacts all variables with the U-turn indicator. This may be plausible for the FPÖ vote share, but why would such an interaction make sense for, say, population density?
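The pooled specification suggested above can be sketched as a design-matrix row. This is a hypothetical illustration with invented variable names, not the author's code; the key point is that only the FPÖ vote share is interacted with the pre/post indicator, while controls such as population density enter additively:

```python
def design_row(fpo_share, pop_density, post_uturn):
    """One row of a pooled design matrix for the suggested specification.
    fpo_share and pop_density are hypothetical district-level covariates;
    post_uturn is 1 after the FPÖ's policy change, 0 before."""
    return [
        1.0,                      # intercept
        fpo_share,                # main effect of FPÖ vote share
        float(post_uturn),        # level shift after the U-turn
        fpo_share * post_uturn,   # interaction: change in the vote-share slope
        pop_density,              # control, deliberately NOT interacted
    ]
```

Compared with splitting the sample, this lets one test the change in the vote-share slope directly (the interaction coefficient) while constraining the control coefficients to be equal across the two phases.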
The differences in Figure 1 are fairly small. The author could do a better job of contextualizing the findings, and particularly the estimates. For example, five percent more FPÖ vote share equals approximately 20 additional deaths per 100,000 citizens. Is this a lot? Can the author provide more meaning to the numbers?
It is unclear why the sample was split on the 11th of May. Is the author trying to incorporate a two-week lag before deaths become visible in the data? This requires substantial discussion. At the moment, it is not convincing to pick a less obvious split date instead of showing that the actual date of the policy change (the 29th) works and that the analysis is robust to alternative dates.
Should the analyses control for the federal state? It would make sense to include Bundesland or even Bezirk fixed effects.
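Operationally, such fixed effects amount to adding one dummy per federal state (minus a reference category) to each district's covariates. A minimal sketch, with invented state names purely for illustration:

```python
def state_dummies(state, states):
    """One-hot fixed-effect dummies for a district's federal state,
    dropping the first state in `states` as the reference category.
    `states` is the full, ordered list of Bundesland names."""
    return [1.0 if state == s else 0.0 for s in states[1:]]
```

These columns absorb any time-invariant, state-level confounders (e.g., regional health-care capacity), so the vote-share coefficient is identified from within-state variation across districts.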
The author could expand the discussion of the differences in parties' policy positions. If all parties held the exact same positions, we would expect a sharp null effect of partisan vote share while all parties are in line with the government's policies. But this does not hold empirically: the results suggest that FPÖ strongholds perform systematically better in the first period. There thus seems to be an underlying variable linking a district's COVID-19 performance and its FPÖ vote share. Can we then causally attribute the second-period effect to the FPÖ's policy change? Or could it be that districts that performed badly in the first period either a) had many infections, which changes the probability of new infections, or b) saw citizens update their behavior? The causal identification strategy in this paper is fairly weak. On an argumentative level, the absence of a sharp null effect in the first period runs against the focus on policy positions.
The problem of causal inference seems even more pronounced in the differences between deaths and infections. Is it plausible that this reflects self-selection? Most coefficients go in the expected direction; some are even significant. Can we really assess the difference in coefficients? To some extent, the analysis stretches the data too much. The interpretation of several coefficients (e.g., the Turkish-born population) follows the same vein; I wonder whether the outcome is causally linked to this population.
I cannot really judge the subsequent section with its simulations.
To wrap up, this is an interesting manuscript, and I believe it could make a neat contribution.
Aaroe L (2012) When Citizens Go against Elite Directions: Partisan Cues and Contrast Effects on Citizens' Attitudes. Party Politics 18, 215–233.
Bechtel MM et al. (2015) Reality Bites: The Limits of Framing Effects for Salient and Contested Policy Issues. Political Science Research and Methods 3, 683–695.
Campbell A et al. (1960) The American Voter. New York: Wiley.
Eberl J-M, Huber RA and Greussing E (2020) From Populism to the 'Plandemic': Why Populists Believe in COVID-19 Conspiracies. SocArXiv.
Hawkins KA and Rovira Kaltwasser C (2018) Introduction: The Ideational Approach. In Hawkins KA et al. (eds), The Ideational Approach to Populism: Concept, Theory, And Method. Routledge, pp. 1–24.
Heinisch RC and Mazzoleni O (2017) Analysing and Explaining Populism: Bringing Frame, Actor and Context Back In. In Heinisch RC, Holtz-Bacha C and Mazzoleni O (eds), Political Populism: A Handbook, 1st edition. Baden-Baden: Nomos, pp. 105–123.
Jagers J and Walgrave S (2007) Populism as Political Communication Style: An Empirical Study of Political Parties' Discourse in Belgium. European Journal of Political Research 46, 319–345.
Kam CD (2005) Who Toes the Party Line? Cues, Values, and Individual Differences. Political Behavior 27, 163–182.
Kenny PD (2018) Populism in Southeast Asia. Cambridge: Cambridge University Press.
Kittel B et al. (2020a) Austrian Corona Panel Project (SUF Edition).
Kittel B et al. (2020b) The Austrian Corona Panel Project: Monitoring Individual and Societal Dynamics amidst the COVID-19 Crisis. European Political Science.
Moffitt B and Tormey S (2014) Rethinking Populism: Politics, Mediatisation and Political Style. Political Studies 62, 381–397.
Mudde C (2004) The Populist Zeitgeist. Government and Opposition 39, 542–563.
Mudde C and Rovira Kaltwasser C (2017) Populism: A Very Short Introduction. New York, NY: Oxford University Press.
Weyland K (2001) Clarifying a Contested Concept: Populism in the Study of Latin American Politics. Comparative Politics 34, 1–22.