The World Health Organization recently launched Sarah, its generative AI chatbot tasked with advising the public on how to lead healthier lifestyles.
According to the WHO, Sarah, which stands for Smart AI Resource Assistant for Health, is a “digital health promoter, available 24/7 in eight languages via video or text. She can provide tips to de-stress, eat well, quit tobacco and e-cigarettes, and be safer on the roads, as well as give information on other areas of health.”
At first glance, Sarah presents itself as an innovative use of technology for the greater good: an AI-powered assistant capable of offering personalized advice anytime, anywhere, with the potential to help billions.
But upon closer inspection, it could be said that Sarah is as much a product of hype and AI FOMO as it is a tool for positive change.
The artificial intelligence used to build Sarah, generative AI, carries an incredible amount of risk. Chatbots powered by this technology are known to provide inaccurate, incomplete, biased, and generally bad advice.
A recent and infamous case is that of the now-defunct chatbot Tessa. Developed for the National Eating Disorders Association, Tessa was intended to replace the organization’s long-standing, human-powered hotline.
But just days before its launch, Tessa went rogue. The chatbot began recommending that people with eating disorders restrict their calories, weigh themselves frequently, and set strict weight loss goals. Fortunately, NEDA took Tessa offline and a crisis was averted, but the episode highlights the pressing need for caution and responsibility in the use of such technologies.
This worrying outcome underscores the unpredictable, and sometimes dangerous, nature of generative AI. It is a sobering reminder that without strict safeguards, the potential for harm is immense.
Given this cautionary backdrop, one might expect large public health organizations to proceed with extreme caution. However, this seems not to be the case with the WHO and its chatbot. Despite clearly being aware of the risks associated with generative AI, the organization has gone public with Sarah.
The WHO’s disclaimer reads as follows:
WHO Sarah is a prototype that uses generative AI to convey health messages based on available information. However, the answers may not always be accurate because they are based on patterns and probabilities in the available data. The digital health promoter is not designed to give medical advice. WHO is not responsible for the content of conversations created by Generative AI.
Furthermore, the chat content created by Generative AI in no way represents or comprises the views or beliefs of WHO, and WHO does not guarantee the accuracy of the content of any chat. Check the WHO website for the most accurate information. By using WHO Sarah, you understand and agree that you should not rely on the responses generated as the sole source of truth or objective information, or as a substitute for professional advice.
Simply put, it appears that the WHO is aware of Sarah’s potential to spread convincing misinformation at scale, and this disclaimer is its approach to mitigating the risk. Tucked away at the bottom of the web page, it essentially communicates: “Here is our new tool. You should not rely on it completely. You’d better visit our website.”
That said, the WHO is also safeguarding Sarah by heavily restricting its responses to reduce the risk of misinformation. However, this approach is not foolproof. Recent findings indicate that the bot does not always provide up-to-date information.
Additionally, when the safeguards do work, they can render the chatbot evasive and short on substance, ultimately diminishing its usefulness as a dynamic informational tool.
So what role does Sarah play? If the WHO explicitly recommends that people visit its website for accurate information, then it seems that Sarah’s deployment is driven more by hype than utility.
Obviously, the WHO is an extremely important organization for promoting public health on a global scale. I am not questioning its immense value. But is this the embodiment of responsible AI? Certainly not! This scenario epitomizes the preference for speed over safety.
It is an approach that should not become the norm for integrating generative AI into business and society. There is a lot at stake.
What happens if a chatbot from a highly respected institution starts spreading misinformation during a future public health emergency, or promoting harmful dietary practices like the infamous Tessa chatbot mentioned above?
Given Sarah’s ambitious launch, one might wonder whether the organization is following its own advice. In May 2023, the WHO published a statement emphasizing the need for the safe and ethical use of AI, guidance it would perhaps do well to revisit:
WHO reiterates the importance of applying ethical principles and good governance, as listed in the WHO guidance on the ethics and governance of AI for health, when designing, developing, and deploying AI for health.
The six basic principles identified by the WHO are: (1) protect autonomy; (2) promote human well-being, human safety, and the public interest; (3) ensure transparency, explainability, and intelligibility; (4) foster responsibility and accountability; (5) ensure inclusiveness and equity; and (6) promote AI that is responsive and sustainable.
It is clear that the WHO’s own principles for the safe and ethical use of AI should have guided its decision-making, but that does not appear to be the case with Sarah. This raises critical questions about the organization’s ability to usher in a responsible AI revolution.
If even the WHO is using the technology this way, what are the chances of prudent AI use in contexts where financial incentives compete with, or eclipse, the importance of public health and safety?
Meeting this challenge requires responsible leadership. We need leaders who prioritize people and ethical considerations over the hype of technological advancement. Only through responsible leadership can we ensure that AI is used in a way that truly serves the public interest and upholds the imperative to do no harm.
Brian R. Spisak, PhD, is an independent consultant focusing on digital transformation in healthcare. He is also a research associate at the National Preparedness Leadership Initiative at the Harvard T.H. Chan School of Public Health, a faculty member of the American College of Healthcare Executives, and the author of the book Computational Leadership.