
Building trust in healthcare AI, step by step



A new Chilmark Research report by Dr. Jody Ranck, the company's senior analyst, explores how modern processes for mitigating risk and bias in artificial intelligence can be used to develop more trustworthy machine learning tools for healthcare.

WHY IT MATTERS
As the use of artificial intelligence in healthcare grows, some providers remain skeptical about how much they should trust machine learning models deployed in medical environments. AI products and services have the potential to determine who gets what form of medical care and when – so the stakes are high when algorithms are deployed, as Chilmark's "AI and Trust in Healthcare" report, published September 13, explains.

Enterprise-grade AI developments have reached into population health research, clinical practice, emergency department management, health system operations, revenue cycle management, supply chain and more.

The efficiency and cost savings that AI can help organizations realize are driving that array of use cases, along with the deeper insights into clinical patterns that machine learning can surface.

But there are also many examples of algorithmic bias around race, gender and other variables that have raised concerns about how AI is being implemented in healthcare settings – and about the downstream impact of possible "black box" models.

The Chilmark report points to the hundreds of algorithms developed during the first year of the COVID-19 pandemic to analyze X-rays and CT scans for diagnostic support that could not be reproduced in scientific research. According to the report, clinical decision support tools built on questionable science are still in use.

The report criticizes the U.S. Food and Drug Administration, along with the tech industry, for falling behind in addressing the challenges the rapidly growing field poses to the healthcare sector.

It proposes an industry consortium to address several key areas of AI at the heart of patient safety and to build "an ecosystem of transparent, validated, health equity-driven models with the potential for beneficial social impact."

Available by subscription or purchase, the report outlines steps for ensuring good data science – including how to build diverse teams capable of addressing the complexities of bias in healthcare AI – drawing on research from government agencies and consulting firms.

THE LARGER TREND
Some in the medical and scientific communities have pushed back against AI-based studies that are published without sharing enough detail about their code and how it was tested, according to an MIT Technology Review article on AI's reproducibility crisis.

That same year, Princeton University researchers published a review of scientific papers exhibiting such pitfalls: of 71 medicine-related articles examined, 27 contained AI models with critical errors.

Several studies suggest that the trade-off between equity and efficiency in AI can be eliminated with deliberate thoughtfulness during development – by defining equity goals first, before machine learning work begins.

According to Joachim Roski, principal in Booz Allen Hamilton's health business, rushed AI development and implementation has led to notable AI failures.

Roski talked with Healthcare IT News ahead of a HIMSS22 educational session addressing the need for a paradigm shift in healthcare AI, in which he presented salient AI failures and key design principles for evidence-based AI development.

"A greater focus on evidence-based AI development and implementation requires effective collaboration between the public and private sectors, which will lead to greater accountability for AI developers, implementers, healthcare organizations and others who rely on evidence-based AI development or implementation activities," said Roski.

ON THE RECORD
Ranck, author of the Chilmark report, hosted a podcast interview in April with Dr. Tania Martin-Mercado, digital advisor for healthcare and life sciences at Microsoft, about fighting bias in AI. (Read our interview with Martin-Mercado here.)

Building on her findings from studying the race-adjusted algorithms currently in use, she said increasing developer transparency and accountability could ultimately reduce harm to patients.

"If you don't empower [the] people who are creating tools to protect patients, protect populations, get people involved in clinical studies – if you don't empower these people to make [the] change and give them control over their actions, then ..." Martin-Mercado said.

Andrea Fox is the senior editor of Healthcare IT News.
Email: [email protected]

Healthcare IT News is a HIMSS publication.
