Health

WHO urges caution with AI implementation in healthcare



As artificial intelligence deployments increase in speed and scope at healthcare organizations worldwide, the World Health Organization this week issued a call for vigilance and careful consideration in how AI and machine learning models are put to use.

WHY IT MATTERS
WHO is calling for "prudence in the implementation" of AI in medical and other healthcare settings – especially fast-proliferating large language model tools such as ChatGPT.

To "protect and promote human health, human safety and autonomy" – and to preserve public health – officials said "it is imperative that the risks be examined carefully when using LLMs to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people's health and reduce inequity."

WHO acknowledges that the recent "widespread popularity and experimental use" of tools such as ChatGPT, Bard, Bert and others is "generating considerable excitement around the potential to support people's health needs."

While experts at the UN agency said they too are enthusiastic about the "appropriate use" of these leading-edge algorithms, they are concerned that "the caution that would normally be exercised for any new technology is not being exercised consistently with LLMs."

WHO officials worry that the "hasty adoption of untested systems" not only harms patients through medical errors and inaccurate information, but also "erodes trust in AI, thereby undermining (or delaying) the potential long-term benefits" of its use.

Specifically, the statement cited concerns about the values of "transparency, inclusion, public engagement, expert supervision and rigorous evaluation."

WHO wants those imperatives to be a top priority as AI is deployed, and is calling for "clear evidence of benefit be measured" before LLMs and other AI models are put to widespread, routine use in the delivery of healthcare services.

THE LARGER TREND
In just a few months, ChatGPT and other next-generation AI have made it clear that a new era is upon us for healthcare and its decision-making processes. LLMs and other machine learning tools are poised to impact patient engagement and communication, inform hospital ADT decisions, make waves across the healthcare workforce and fundamentally change the way care is delivered – with many unknowns and many risks.

There is clearly a need for AI oversight in healthcare, and more generally, a thoughtful approach to how – and why – those tools are put to use.

At HIMSS23 last month, leaders from the World Health Organization and health ministries around the world spoke about the need to pursue digital health strategies that treat patient access, safety and equity as their north stars.

ON THE RECORD
"WHO reiterates the importance of applying appropriate ethical and governance principles, as outlined in WHO guidance on the ethics and governance of AI for health, when designing, developing and deploying AI for health," World Health Organization officials said in this week's statement.

"The six core principles identified by WHO are: (1) protecting autonomy; (2) promoting human health, human safety and the public interest; (3) ensuring transparency, explainability and intelligibility; (4) fostering responsibility and accountability; (5) ensuring inclusiveness and equity; (6) promoting AI that is responsive and sustainable."

Mike Miliard is executive editor of Healthcare IT News.
Email the writer: [email protected]

Healthcare IT News is a publication of HIMSS.