
Description
Reviewers: Tom Stafford (The University of Sheffield), Charlotte O. Brand | 📗📗📗📗◻️ • Pierre Verger (Observatoire Régional de la Santé) | 📗📗📗📗◻️
Author Response to RR:C19 Reviews:
First, we would like to thank Tom Stafford, Charlotte Brand, and Pierre Verger for their quick, positive, and relevant feedback! In the lines below, we respond as best we can to their comments.
The most central criticism, raised by Tom Stafford and Charlotte Brand, is that the control condition does not allow us to conclude that it is interacting with the chatbot per se, rather than simply receiving more detailed information, that led to the more positive attitude change. Tom and Charlotte are correct. We cannot conclude, for instance, that the chatbot works better than a text. But we can conclude that it *works*, which is, in itself, informative from an applied perspective. In a related paper, we tested the efficacy of an interactive chatbot with three controls, including a non-interactive chatbot in which participants did not click on the arguments but instead scrolled through the chatbot’s text (Altay et al. 2020). In that paper, we found no evidence that interactivity plays a crucial role. For this reason, in the COVID-19 chatbot, participants had the option to scroll through the whole text instead of clicking on the arguments one by one. We have now added information about the popularity of this option in the ESM: “One-third of users opted, at some point of their interaction with the chatbot, for the non-interactive version by selecting the option of browsing through the chatbot by scrolling instead of clicking on the questions (37%). However, this option was not used as an alternative to the interactive chatbot but rather as a complement: these participants clicked on the same number of questions as participants who used the interactive chatbot exclusively (M = 12, Median = 10).”
In sum, the current paper does not advocate for the superiority of interactive chatbots over other forms of communication (e.g., discussing with a medical expert would probably work better). Instead, we show that answering people’s concerns about the COVID-19 vaccines can positively change people’s minds, and that a chatbot (interactive or not) could be a way to do so. We have now tried to make this clearer in the manuscript, e.g., “Note that our design is not meant to compare the efficacy of an interactive Chatbot with that of a non-interactive Chatbot or a long text (see Altay et al. 2020 for such a design). Instead, the present design is primarily meant to test the efficacy of a Chatbot to inform people about COVID-19 vaccines. The Control Condition allows us to control for potential demand biases.” or “Other ways of communicating information, e.g., short videos in a YouTube format, could be as efficient, if not more efficient, at capturing people’s attention and ultimately conveying information to the general public.”
Tom Stafford and Charlotte Brand also point out that the trust argument, in which we explain our motives and who we are, could have had a large effect. We cannot directly answer this question with our design, but we know that 43% of users selected this question, suggesting that it is indeed relevant. It was in fact one of the most selected questions, after the one on COVID-19 vaccine safety. We now write in the ESM: “We were interested in exploring users’ behavior on the chatbot. The results described in this paragraph include 84 additional users who were not participants in the study, but who interacted with the chatbot while the study was being conducted (because the link could be shared openly after the experiment). These results should be interpreted qualitatively since the descriptive statistics reported below do not precisely represent participants’ behaviors on the chatbot. On average, users clicked on 12 questions (Median = 10). The most clicked question related to COVID-19 vaccine safety, with 67% of users clicking on the question “Are COVID-19 vaccines safe?” The least clicked question related to COVID-19 vaccine efficacy, with 19% of users clicking on “Are COVID-19 vaccines effective?”. In between, 43% of users clicked on “Do I need to be vaccinated?” and “Why should I trust you?,” 32% of users clicked on “Can we trust the people who produce it?,” and 31% of users clicked on “Do we know enough about the COVID-19 vaccines?”
Note that we measured the efficacy of this paragraph in a related paper (Altay et al. 2020), and found that in a model with no other predictors, reading the “Why should I trust you?” paragraph led to more positive attitude change (b = 0.24 ± .10, p = 0.02).
Pierre Verger noted that participants initially holding more negative attitudes displayed more positive attitude change and suggested that this may be because they spent more time on the chatbot. Unfortunately, the present paper cannot answer this question. However, in a related paper, we tested the efficacy of an interactive chatbot against a non-interactive chatbot in which participants did not click on the arguments but instead scrolled through the chatbot’s text (Altay et al. 2020). In that paper, we found no significant relation between initial attitudes and time spent interacting with the chatbot. Still, we found that interacting longer with the chatbot, and clicking on more arguments, led to more positive attitude change. Relatedly, and of potential interest for Pierre, we found that the best predictor of whether a participant selected a given argument in the chatbot was how negative their initial attitudes were regarding that argument. Thus, we know that participants (in that related paper) selected arguments that addressed their concerns. To what extent can these results generalize to the present paper on COVID-19 vaccines? Given that we found similar results in both papers (e.g., participants initially holding more negative attitudes displayed more positive attitude change), some generalization seems warranted.
Pierre Verger asks whether the chatbot could help correct misinformation or confer resistance to misinformation. We can speculate that correcting misperceptions is within the chatbot’s reach. In a sense, it is what the chatbot is already doing: before interacting with the chatbot, participants thought the vaccines were less safe than they really are (a misperception), and interacting with the chatbot helped reduce this misperception. But this is a misperception only in a weak sense; for instance, we don’t know how confidently participants held it. Regarding resistance to misinformation, we could test whether participants who have been exposed to the chatbot are somehow ‘inoculated’ (Roozenbeek et al. 2020) against future misinformation attempts.
Pierre Verger is also interested in the ““mechanisms” by which these changes were achieved”. Unfortunately, the present paper tells us very little about these mechanisms. It should be seen more as a proof of concept that a chatbot (interactive or not) can change people’s minds about COVID-19 vaccines than as an in-depth investigation of why it works. However, we know from the related paper mentioned above (Altay et al. 2020) that the interactive nature of the chatbot is probably not crucial. We have added in the discussion of the present paper that it would be interesting to investigate the specific mechanisms that led to this positive attitude change: “First, its scope, as we did not investigate the mechanisms that led to the positive attitude change in the Chatbot Condition. Previous work suggests that the interactivity of the Chatbot is not central (Altay et al. 2020), but the dialogic format—which makes it easy to find relevant information—could be. In sum, this paper offers evidence that a chatbot can be used to inform people about the COVID-19 vaccines, but not why this is the case (for an investigation of these mechanisms see Altay et al. 2020).”
Pierre Verger raises an issue about the generalizability of our findings, since we don’t have a representative sample of the French population. This is an important point from an applied perspective. For instance, it is reasonable to assume that the chatbot may work better on younger participants who are more familiar with new technologies. This should be taken into account when using the chatbot in the wild. We have added in the limitations of the paper that it should not be assumed that the chatbot would work equally well among all segments of the population: “The third limitation regards its reception among diverse segments of the population. In contrast with a representative sample of the French population, our sample is younger (below 35: 46% [26%], between 35 and 65: 51% [51%], over 65: 3% [23%]), more educated (more than a high school diploma: 66% [53%], high school diploma: 23% [17%], less than a high school diploma: 10% [30%]), and more masculine (54% men [48%]). It is safe to assume that the chatbot can be used by a young and educated population. However, before deploying the chatbot at large scale in the general population, its efficacy should be tested on people with less than a high school diploma and, importantly, on people over 65, whose digital skills tend to be lower.”
Pierre Verger asks about the “conversion rate” of the chatbot: would people actually interact with the chatbot in the wild? This question is central but has, to the best of our knowledge, no answer in the literature. Answering it would allow for a better estimate of the impact the chatbot could have if released in the wild. We have added the following paragraph in the limitations: “A second limitation of the present study is that its impact in the wild is unknown. Outside of experimental settings, we don’t know how willing people would be to interact with the chatbot. This metric is key to measuring the chatbot’s conversion rate and to getting a good estimate of the chatbot’s potential impact if it were widely deployed. Other ways of communicating information, e.g., short videos in a YouTube format, could be as efficient, if not more efficient, at capturing people’s attention and ultimately conveying information to the general public.”
Finally, we would like to thank Tom Stafford, Charlotte Brand, and Pierre Verger again for their helpful feedback! We hope that you will find the responses to your comments informative. We have posted the revised version of the manuscript on PsyArXiv (version 4): https://psyarxiv.com/eb2gt.
References:
Altay, S., Schwartz, M., Hacquin, A.-S., Allard, A., Blancke, S., & Mercier, H. (2020). Scaling up Interactive Argumentation by Providing Counterarguments with a Chatbot [Registered Report Stage 1 Protocol] (Version 1). figshare. https://doi.org/10.6084/m9.figshare.13122527.v1
Roozenbeek, J., van der Linden, S., & Nygren, T. (2020). Prebunking interventions based on the psychological theory of “inoculation” can reduce susceptibility to misinformation across cultures.
Amidst growing concerns about widespread vaccine hesitancy, this preprint offers important insights into a possible scalable intervention. Reviewers suggest further investigating the mechanisms at play in the chatbot’s effectiveness.