Health

Roadmap for designing more inclusive health chatbots



Researchers from the University of Westminster, The Kinsey Institute at Indiana University and Positive East reviewed resources from the United Kingdom’s National Health Service and the World Health Organization to develop a community-based approach to improving inclusivity, adoption and engagement with artificial intelligence chatbots.

WHY IT MATTERS

Aiming to identify practices that help reduce bias in conversational AI and make its design and implementation more equitable, the researchers looked at several frameworks for evaluating and implementing new healthcare technologies, including the Consolidated Framework for Implementation Research, updated in 2022.

When they found that the frameworks lacked guidance for addressing the unique challenges of conversational AI technology – data security and governance, ethical concerns, and the need for diverse training datasets – they conducted a content analysis using a draft conceptual framework and consulted with stakeholders.

The researchers interviewed 33 key stakeholders from a variety of backgrounds, they said, including 10 community members as well as physicians, developers and mental health nurses with expertise in reproductive health, sexual health, AI, robotics and clinical safety.

They used a framework approach to analyze the qualitative interview data and develop their 10-stage roadmap, Achieving health equity through conversational AI: A roadmap for design and implementation of inclusive chatbots in healthcare, published Thursday in PLOS Digital Health.

The report walks through the 10 stages of AI chatbot development, starting with ideation and planning, covering safety measures, preliminary testing structures, onboarding with healthcare administration, and testing and maintenance, and ending with termination.

According to Dr Tomasz Nadarzynski, who led the research at the University of Westminster, an inclusive approach is vital to reduce biases, promote trust and maximize outcomes for marginalized populations.

“The development of AI tools must go beyond simply ensuring efficiency and safety standards,” he said in a statement.

“Conversational AI should be designed to address specific diseases or conditions that disproportionately affect minority populations due to factors such as age, ethnicity, religion, gender, gender identity, sexual orientation, socioeconomic status or disability.”

Stakeholders emphasized the importance of identifying, at the outset, the public health disparities that conversational AI can help reduce, as part of an initial needs assessment done before the tools are created.

According to the researchers, “Designers should identify and frame the behavioral and health outcomes that conversational AI is aiming to influence or change.”

Stakeholders also said that conversational AI chatbots should be integrated into healthcare settings, designed with diverse input from the communities they are intended to serve, and made clearly visible. They must ensure accuracy, data security and reliability, and must be tested by diverse patient groups and communities.

According to the study, health AI chatbots also need to be regularly updated with the latest clinical, medical and technical advances, monitored – incorporating user feedback – and evaluated for their impact on healthcare services and staff workloads.

Stakeholders also said that using chatbots to expand access to healthcare must happen within existing care pathways, with the chatbot “not designed to operate as a standalone service,” and may require adjustments to suit local needs.

THE LARGER TREND

Cost savings from AI chatbots in the healthcare sector are predicted to come largely from information gathering, with easier tasks shifted to chatbots while the technology becomes advanced enough to handle more complicated ones.

Since ChatGPT brought conversational AI to every sector in late 2022, healthcare IT developers have begun testing it to gather information, improve communications and shorten the time spent on administrative tasks.

Last year, UNC Health tested a generative AI chatbot tool internally with a small group of clinicians and administrators to enable staff to spend more time with patients and less time in front of the computer. Many other provider organizations now use generative AI in their operations.

AI is also being used in patient scheduling and post-discharge care to help reduce readmission rates and societal health inequities.

However, according to healthcare industry leaders, trust is critical for AI chatbots in healthcare and they must be developed carefully.

“At the end of the day, you have to have humans somewhere,” said Kathleen Mazza, a clinical informatics consultant at Northwell Health.

“You don’t sell shoes to people online. This is health care.”

ON THE RECORD

“We have a responsibility to harness the power of ‘AI for good’ and direct it toward solving pressing social challenges like health inequities,” Nadarzynski said in a statement.

“To do this, we need a paradigm shift in how AI is created – one that emphasizes co-production with diverse communities across the entire lifecycle, from design to deployment.”

Andrea Fox is a senior editor at Healthcare IT News.
Email: [email protected]

Healthcare IT News is a publication of HIMSS Media.

