
Review 1: "Information Delivered by a Chatbot Has a Positive Impact on COVID-19 Vaccines Attitudes and Intentions"

Amidst growing concerns about widespread vaccine hesitancy, this preprint offers important insights into a possible scalable intervention. Reviewers suggest further investigating the mechanisms at play in the chatbot's effectiveness.

Published on Mar 07, 2021

RR:C19 Evidence Scale rating by reviewer:

  • Reliable. The main study claims are generally justified by its methods and data. The results and conclusions are likely to be similar to the hypothetical ideal study. There are some minor caveats or limitations, but they would/do not change the major claims of the study. The study provides sufficient strength of evidence on its own that its main claims should be considered actionable, with some room for future revision.

***************************************

Review:

The attempt to use chatbots to influence vaccine attitudes is timely and innovative, and has the potential for high conceptual and practical importance.

The work is rigorous and adheres to strong open science standards, including open data, open materials, and a preregistered experiment and analysis plan. The sample size, determined by power analysis, combined with the preregistered analysis plan, supports reliable estimates of the effects in question. We note that the study includes exploratory analyses of data from a follow-up one week later, and these are compatible with the positive effects of interacting with the chatbot persisting over time. Whilst we acknowledge that preregistration and open materials increase our confidence in the reliability of the work, we refrain from commenting on specific statistical issues in this review and confine ourselves to conceptual concerns.

Conceptually, the study is valuable because it positions participants as active, questioning reasoners rather than as passive recipients of information, and as such speaks to the movement away from the outdated deficit model of science communication (Griffiths et al., 2019). The information delivered by the chatbot takes the form of a dialogue, structured around the likely questions, arguments, and counterarguments participants might have about vaccines, and it allows participants to choose the order in which they explore the questions and subquestions themselves.

The use of chatbots is also interesting because it represents a novel media form, one which promises to combine engagement, scalability, and feasibility (Altay et al. report that participants' median engagement with the chatbot was 5 minutes).

Finally, this work is significant because the observed effects are substantial, reflecting the potential to meaningfully enhance vaccine take-up. Altay et al. report a “37% increase in participants holding positive attitudes, and a 20% decrease in participants saying they would not get vaccinated. Moreover... the participants who held the most negative views changed their opinions the most.” This positive result stands out against a background of findings that attitudes are often hard to shift, especially on polarising topics such as vaccines (e.g., Nyhan & Reifler, 2015).

A key aspect of the study that we feel could be improved is the control condition, and hence the extent to which the design can tell us which aspect of the chatbot drives the effects.

The most obvious difference is the amount of information: the control text was 93 words long, whereas the chatbot’s total amount of information was 9,021 words. It is therefore plausible that the effects were merely due to having more detailed information about vaccinations, and/or spending more time reading the information, rather than the information being delivered through a chatbot per se. The form of the information in the chatbot condition also differed, in that it was explicitly focussed around questions and counter-arguments concerning vaccines.

Although participants are assumed to have spent their time with the chatbot via the interactive interface, the chatbot in fact included an option to “view all information”, which displayed all of the questions and answers at once. This option was always available, and the report does not say how often participants chose it.

Importantly, the chatbot condition included a “why should I trust you?” question, which provided information on who funded the researchers, why they were conducting the research, and the fact that independent immunologists and epidemiologists had checked all the information. This included the line “this project was funded by public research, we are independent researchers with no connection to the pharmaceutical industry.” The fact that participants in the chatbot condition had access to this statement of researcher independence could also be driving the effect. Individuals who are prone to conspiratorial beliefs are often concerned with powerful groups “pulling the strings” behind the scenes (Douglas et al., 2017). This is particularly relevant for anti-vaccination beliefs, in which people perceive pharmaceutical companies to be prioritising profit over human health (Martin & Petrie, 2017). Previous work has shown that distrust of science advice can be driven by the perception that scientists do not share the values of the general population and do not have its best interests at heart (Eiser et al., 2009). Showing the ‘human side’ of scientists may help to dispel the notion that they are working for profit and convey that they are in fact working towards a public good. Indeed, a recent example we enjoyed is the anecdotal report of a key vaccine researcher who first downloaded the coronavirus genome sequence at home in her pajamas.

Finally, because the chatbot is inherently more interactive than the control, in that it involves participants selecting questions rather than simply reading text, it is difficult to pinpoint which aspects of the chatbot make it more engaging and whether this added engagement is indeed driving the effect.

In summary, this is a strong contribution to the literature, and as such we recommend it for replication, extension, and further investigation. Options for future replication could include:

(a)   ensuring consistent information relating to trust between the control and chatbot conditions;

(b)   controlling for both the time spent on task and the amount of information available in each condition;

(c)   using “yoked controls” from the animal learning literature, which would allow the isolation of the engagement and personal-choice aspects of the effect;

(d)   confirming the findings in different and/or more representative populations.


Altay, S., Hacquin, A. S., Chevallier, C., & Mercier, H. (2021). Information Delivered by a Chatbot Has a Positive Impact on COVID-19 Vaccines Attitudes and Intentions. https://psyarxiv.com/eb2gt 

Douglas, K. M., Sutton, R. M., & Cichocka, A. (2017). The psychology of conspiracy theories. Current Directions in Psychological Science, 26(6), 538-542.

Eiser, J. R., Stafford, T., Henneberry, J., & Catney, P. (2009). “Trust me, I’m a Scientist (Not a Developer)”: Perceived Expertise and Motives as Predictors of Trust in Assessment of Risk from Contaminated Land. Risk Analysis, 29(2), 288-297.

Griffiths, A. G., Modinou, I., Heslop, C., Brand, C., Weatherill, A., Baker, K., ... & Griffiths, D. J. (2019). AccessLab: Workshops to broaden access to scientific research. PLoS Biology, 17(5), e3000258.

Martin, L. R., & Petrie, K. J. (2017). Understanding the dimensions of anti-vaccination attitudes: The vaccination attitudes examination (VAX) scale. Annals of Behavioral Medicine, 51(5), 652-660.

Nyhan, B., & Reifler, J. (2015). Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine, 33(3), 459-464.
