Health

CHAI says guidance on using AI in healthcare is in the works



The Coalition for Health AI announced in a progress update that it will meet this month to finalize its consensus-based framework and share recommendations by year-end.

WHY IT MATTERS

CHAI convened in December to develop consensus and mutual understanding, with the goals of tempering the rush to purchase machine learning and artificial intelligence products in healthcare and arming health IT decision-makers with academic research and vetted guidelines to help them select trusted technologies that provide value.

Through October 14, CHAI is accepting public input on testability, usability and safety, issues it took up at a convening with industry experts from healthcare and other industries that the organization hosted in July.

Previously, CHAI released a sizable paper on bias, fairness and equity based on a two-day convening, and accepted public comment on it through the end of last month. According to the October 6 progress update, the result will be a framework, Guidelines for the Responsible Use of AI in Healthcare, that intentionally promotes the assurance, safety and security of AI in healthcare.

Dr. John Halamka, president of the Mayo Clinic Platform and co-founder of the alliance, said: "The application of AI offers enormous benefits to patient care, but it also has the potential to exacerbate health equity disparities."

The alliance says it is also working to build a set of tools and guides for the patient care journey, from chatbots to patient records, so that populations are not adversely affected by algorithmic bias.

Halamka commented in the update: "Ethical guidelines for using an AI solution cannot be an afterthought."

The progress update comes after this week's release of the White House Blueprint for an AI Bill of Rights.

CHAI was founded by Change Healthcare, Duke AI Health, Google, Johns Hopkins University, Mayo Clinic, Microsoft, MITRE, Stanford Medicine, UC Berkeley, UC San Francisco and others, and is observed by the U.S. Food and Drug Administration and the National Institutes of Health, and now the Office of the National Coordinator for Health IT, according to the announcement.

Several of those institutions are also part of the Health AI Partnership led by the Duke Institute for Health Innovation, which is developing open-source curriculum and guidance based on AI best practices. DIHI is currently soliciting grant applications from faculty, staff, students and trainees across Duke University and the Duke University Health System for automation-related innovation projects to improve the efficiency of healthcare operations.

ONC has focused on the growing space, and in its blog series has discussed what it might take to get the best out of algorithms to drive innovation, increase competition and improve care for patients and populations.

"What we do know from studies to date is that AI/ML-based predictive technology can positively or negatively impact patient safety, and can introduce or propagate misinformation. In short, the results have been mixed. But concern – and potential benefit – remains high," ONC authors Kathryn Marchesini, Jeff Smith and Jordan Everson wrote in a June blog post.

According to Dr. Brian Anderson, co-founder of the alliance and chief digital health physician at MITRE, national need is driving a national framework for health AI that promotes transparency and trustworthiness.

"The enthusiastic participation of leading academic health systems, technology organizations and federal observers represents a significant national interest in ensuring that health AI serves all of us," he said in the CHAI progress update.

THE LARGER TREND

The AI coalition also launched to address flawed algorithms that pose a risk of harm to clinicians and patients or promote discrimination, and to build understanding of the rise of AI software in the healthcare industry.

CHAI researchers are also preparing to develop an online curriculum to help educate health IT leaders, setting standards for how staff should be trained and how AI systems should be supported and maintained.

"These systems may embed systemic bias in the delivery of care, providers may market performance claims that differ from real-world performance, and software may be developed in the absence of software best practices," according to the CHAI launch statement.

But by predefining equity and efficiency goals in machine learning and designing systems to achieve those goals, many in the healthcare sector believe that biased outcomes can be prevented and the benefits of AI in healthcare and patient care can be realized.

ON THE RECORD

“It is inspiring to see the commitment of the White House and the US Department of Health and Human Services to the application of ethical standards to AI,” Halamka said in the update.

"As an alliance, we share many of the same goals, including eliminating bias in health-focused algorithms, and our desire is to provide support and expertise as the policy process advances," he said.

Andrea Fox is the senior editor of Healthcare IT News.
Email: [email protected]

Healthcare IT News is a HIMSS publication.
