The World Health Organisation has expressed concerns about the use of Artificial Intelligence (AI) in healthcare, particularly the potential generation of biased data that could misguide treatment decisions.
In a statement released on Tuesday, WHO noted that large language model (LLM) tools, including platforms like ChatGPT, Bard, and Bert, which simulate human understanding and communication, have shown promise in supporting health needs.
“It is imperative that the risks be examined carefully when using LLMs to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people’s health and reduce inequity”, the statement reads in part.
While acknowledging the potential benefits of using LLMs to support healthcare professionals, patients, researchers, and scientists, the WHO called for consistent caution and adherence to key values such as transparency, inclusion, public engagement, expert supervision, and rigorous evaluation.
Health experts warned against the hasty adoption of untested Artificial Intelligence systems, emphasizing that errors, harm to patients, and erosion of trust in AI could follow if the technology is not properly implemented. The concern lies in the possibility of biased data being used to train AI models, which could lead to misleading or inaccurate information.
These models, despite appearing authoritative and plausible, may produce responses containing serious errors, particularly on health-related questions. Issues of consent and data protection also arise: LLMs may be trained on data obtained without prior consent, or may fail to adequately safeguard sensitive health data shared by users. The WHO recommended that policymakers prioritize patient safety and protection as technology firms work to commercialize LLMs.
They stressed the need to address these concerns and gather clear evidence of the benefits before LLMs are integrated into routine healthcare practice, whether by individuals, care providers, or health system administrators.