Vaccine Hesitancy Chatbot

Technological robustness and efficacy
  • Vaccine hesitancy chatbots are a possible solution to a clear problem, though their effectiveness requires further research.
  • The evaluated chatbot seems to work well and has a mechanism in place in case it malfunctions, making it more robust.
  • However, there is no indication that the risk of adverse effects is being researched, or that any measures to secure the system are in place.
  • The fact that the evaluated chatbot is strongly regulated by humans, who maintain its operation and answers and are able to intervene, largely makes up for that.
Impact on citizens and society
  • Even though no complete human rights impact assessment was undertaken, the deployment of the evaluated chatbot does not seem to negatively impact human rights.
  • It is claimed that using the evaluated chatbot is 100% anonymous. However, since personal data is still being processed and the application is accessible to European users, compliance with the GDPR is necessary. In particular, the process for obtaining valid consent should be improved.
  • The system is fair, not intrusive, accessible to the target group, and it is clear what its answers are based on.
  • However, there is a slight risk to human autonomy, as the system tries to push users in a certain direction by attempting to change their beliefs. Still, it does so by providing them with well-researched, government-approved information.
Governance and Accountability
  • The deployment of the evaluated chatbot is necessary, as vaccine acceptance rates are still not high enough. Deploying the chatbot is also relatively low risk and therefore proportional to what it stands to gain.
  • The creators of the evaluated chatbot are transparent about its design, used data, training and maintaining processes, and overall workings of the application.
  • Using the chatbot is 100% voluntary and anonymous.
  • The application has no domain or purpose limitation, nor is there any indication of a sunset clause. However, this is not necessarily a bad thing, as chatbots like this can support people in making informed decisions about other subjects too (for example blood donation or quitting smoking).
  • Since there is no indication of stakeholder involvement, it is unclear whether people actually want an application like this, will use it, or will trust the answers given by a chatbot.
Acceptable trade-offs in times of crisis

Vira the informational chatbot

It has been argued that vaccines are key in the fight against the pandemic. However, this only works when enough people are willing to receive them. The rapid development of the vaccines has made many people hesitant to take them.

To combat vaccine hesitancy and misinformation, researchers at Johns Hopkins University and IBM have developed an AI-based chatbot. The chatbot, named Vira, was inspired by a Japanese version that managed to increase the vaccine acceptance rate from 59% to 80%. Vira is supposed to help young people make more informed decisions about accepting or declining vaccination. It understands the main concerns surrounding vaccines and responds in a publicly acceptable, pro-vaccine way, with answers written by vaccination experts. The application continually learns from new conversations, detecting emerging concerns and learning new ways in which existing ones can be expressed.

The impact of the deployment of this chatbot seems innocent at first. However, this impression may be deceptive. To discover the chatbot’s actual impact, we have assessed its use against the 24 requirements of the “Framework for Responsible AI in times of Corona”, divided into three categories: (i) Society; (ii) Technology; and (iii) Governance.

Each requirement is given a score from ‘0’ to ‘3’:

  • 0 = unknown
  • 1 = no compliance
  • 2 = partial compliance
  • 3 = full compliance

USE CASE

Society

  • Vira has no identified undesirable impact on societal or human behaviour.
  • However, as this system continually tries to identify the concerns people have and bases its answers on this information, a risk of manipulation or nudging arises. Vira is meant to be neutral and informative, but if its answers push users in a certain direction by adapting to their concerns, it poses a risk to human autonomy.
  • The Johns Hopkins Bloomberg School of Public Health provides an extensive privacy statement that describes how it protects the privacy of all visitors to its website.
  • It also includes some health safety measures: when using Vira, multiple disclaimers state that Vira is solely informative and advise you to consult your doctor when seeking medical advice.
  • However, there is no sign that a complete Human Rights Impact Assessment was undertaken.
  • Vira provides correct information accessible to a wide public.
  • Further, Vira poses no risk for systemic failure or disruption.
  • The application also does not pose a risk to democracy or a possible disruption of our social system.
  • Even though this application is a US product aimed at young US citizens, it is stated that the website can be used by everyone doubting vaccines, including Europeans.
  • For that reason, Vira needs to comply with the GDPR.
  • In its current form, users consent to their data being collected, used (also for future purposes) and even transferred to third parties by accepting the terms and conditions of use.
  • Those terms and conditions are accepted solely by using the chatbot. Under the GDPR this does not count as valid consent.
  • No other laws are relevant.
  • A user has the agency to decide for themselves whether or not they want to use Vira and whether or not they want to adapt their behaviour based on the information given by the system.
  • In a way, the system can be used to strengthen the user’s agency, as it can result in more informed decision-making.
  • Although Vira tries to be as neutral as possible and all the information it provides is well-researched and approved by the US government, there is a risk that the answers given are meant to make you support a specific view. This can be seen as manipulation or nudging, which negatively impacts human agency.
  • How Vira works is well explained on a “FAQ” page.
  • The dataset is also openly available. Users have access to all the possible answers Vira can give and the key points the system looks for.
  • Links to research on the technology used, with explanations what it does and how it works are provided too.
  • Further, the process of creating and maintaining the system is explained, which provides an explanation for the system’s outcomes.
  • The application shows the same information to everyone, regardless of how someone formulates their questions.
  • No measures are necessary to avoid bias in the system’s operation.
  • Vira can only help people who have internet access and the skills to communicate via chat.
  • Both the digital literacy and language skills of users need to be at a certain level.
  • These requirements exclude a small group of people within the US (about 7%) who cannot or do not use the internet.

Technology

  • The problem definition becomes somewhat clear from the purpose of the application stated on the website: “Vira was launched to support informed decision-making for young people by providing them with 24/7 access to reliable information in a format that is engaging, empathic, and accessible.”
  • This indicates that the problem consists of young people not having access to correct information regarding vaccines.
  • It is likely that the overarching problem they try to tackle is the low vaccine-acceptance rate. However, this is not made clear anywhere.
  • This solution is likely the least invasive AI solution that helps people gain information on vaccines. There might be more effective solutions, but they might also cause more harm to fundamental rights.
  • However, there is no indication that other solutions have been researched.
  • A similar application deployed in Japan proved to increase the vaccine acceptance rate from 59% to 80%.
  • Even though we can say Vira has the potential to contribute to solving vaccine hesitancy, there is no proof of effectiveness for the US-version.
  • It is unknown whether potential adverse effects were identified.
  • At first sight, this application feels risk-free. However, there is a possible risk to human autonomy and to the security of personal data.
  • It is unknown whether security measures were taken.
  • Even though they claim that no personal data is being processed, there remains a risk of people tampering with the algorithm or dataset, leading to unwanted, possibly harmful outcomes.
  • The chatbot’s knowledge base covers 150 distinct concerns about vaccines, referred to as “Key Points”. The bot can recognize many thousands of ways people can express these key points and has about 300 different answers matched to the key points. Because of the thousands of ways people can pose a question the chatbot seems to almost always find the right answer for someone’s concern.
  • Accuracy problems would occur when someone poses a concern that is not in the key point dataset or phrases it in a way that is unknown to Vira.
  • The system includes the option for you to express dissatisfaction with a certain answer. With this feedback the chatbot learns: it either finds a new concern that the researchers will then answer and add to the dataset, or it finds a new way to express a concern that the chatbot already knows. Both options increase the system’s accuracy and robustness.
  • Information on the results of accuracy tests is absent.
  • This specific application is only designed and trained for its current task.
  • However, with new data concerning different subjects, the system can be retrained with the same method and used with similar success in other situations.
  • This specific system might not be generalizable, but the method and concept behind it are.
  • All the answers Vira gives are written by human experts in the medical field. It is also stated that the team working on Vira continually updates the information it can give, according to the latest data and guidelines.
  • Vira does not have the ability to generate text itself.
  • Humans can easily intervene in Vira’s operation by deleting, adding, or adapting the answers it can give.
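The mechanism described above, known phrasings mapped to key points with one expert-written answer each, plus a human-curated feedback loop, can be sketched roughly as follows. This is purely illustrative: the real Vira relies on IBM’s natural-language models, whereas this toy version uses a simple string-similarity matcher, and all names and data here are invented for the example.

```python
from difflib import SequenceMatcher


class KeyPointBot:
    """Toy key-point chatbot: each concern ("key point") has a list of
    known phrasings and a single expert-written answer."""

    FALLBACK = "Sorry, I don't know that one yet. Please consult your doctor."

    def __init__(self, key_points):
        # key_points: {key_point_id: ([phrasings], answer)}
        self.key_points = key_points

    @staticmethod
    def _score(text, phrasing):
        # Crude string similarity, standing in for a real NLP model.
        return SequenceMatcher(None, text.lower(), phrasing.lower()).ratio()

    def reply(self, message, threshold=0.6):
        # Find the key point whose known phrasing best matches the message.
        best_id, best_score = None, 0.0
        for kp_id, (phrasings, _) in self.key_points.items():
            for phrasing in phrasings:
                score = self._score(message, phrasing)
                if score > best_score:
                    best_id, best_score = kp_id, score
        if best_score < threshold:
            # Unknown concern or unknown wording: exactly the accuracy
            # failure mode noted above, handled via a safe fallback.
            return self.FALLBACK
        return self.key_points[best_id][1]

    def add_phrasing(self, kp_id, phrasing):
        # Feedback loop: humans map a new wording to an existing key point.
        self.key_points[kp_id][0].append(phrasing)


bot = KeyPointBot({
    "speed": (["the vaccine was developed too fast"],
              "Rapid development built on decades of prior mRNA research."),
})
print(bot.reply("Wasn't the vaccine developed too fast?"))
```

Note how the fallback branch mirrors the accuracy limitation discussed above: when a concern is missing from the key-point dataset or phrased in an unknown way, the bot declines to answer instead of guessing, and human reviewers can later add the new phrasing via `add_phrasing`.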

Governance

  • There is no additional legal basis necessary to deploy a chatbot like this.
  • It is likely that vaccines will help us fight the pandemic. The deployment of Vira and similar chatbots is therefore necessary, since vaccine acceptance rates are still not high enough in many countries.
  • The deployment of Vira can be considered proportional since using the application is voluntary and does not pose a big risk for safety and other fundamental rights.
  • A domain or purpose limitation is absent.
  • Other possible uses for the application, such as informing the public about smoking or voting, were noted.
  • Communication about the workings of the system and its purpose is clear and easy to find.
  • The method of creating and maintaining the system is also explained.
  • The dataset is openly available (including answers, possible questions, and the key points the system looks for).
  • Lastly, links are provided to research on the technology that the application is based on.
  • Use of the chatbot is voluntary.
  • There is no indication of a sunset process.
  • However, in the case of this application a sunset clause feels less relevant; no end date is necessary for taking it offline, as Vira can keep informing the public about vaccines even after the pandemic has blown over.
  • Also, since users need to actively go onto a website to use the AI system, it is likely that the impact of the application will end naturally when its service is no longer relevant.
  • Even though the concerns people have regarding vaccines were researched through social media analysis, there is no indication of actual involvement from the public or any other authorities.
  • It is clear that the application is owned by Johns Hopkins Bloomberg School of Public Health.
  • They also clearly state the workings and use of the application.