
The National Academy of Medicine drafts a code of conduct for AI in health care



The National Academy of Medicine has issued an overview, draft code principles, and code commitments meant to inform the transformation that accurate, safe, trustworthy, and ethical AI can achieve in health, health care, and biomedical science.

Based on the Leadership Consortium's Learning Health Systems Core Principles, an initiative the academy has led since 2006, the organization says its new draft framework promotes responsible behavior in the development, use, and ongoing evaluation of AI.

Its core tenets call for inclusive collaboration, ongoing safety assessment, and efficiency and environmental protection.

WHY IT MATTERS

According to an announcement, the full commentary, including an overview and "Draft Code of Conduct Framework: Code Principles and Code Commitments," was developed through the Academy's AI Code of Conduct initiative, under the direction of subject-matter experts.

The proposed code principles and code commitments "reflect simple rules for guiding and evaluating behavior in a complex system, and provide a starting point for real-time decision-making and detailed implementation plans to promote responsible use of AI," said the National Academy of Medicine.

The academy's AI Code of Conduct initiative, launched in January 2023, has engaged multiple stakeholders – listed in the endorsement section – in co-creating the new draft framework.

Victor Dzau, president of the academy, said in a statement: "The promise of AI technology to transform health and healthcare is enormous, but there is concern that improper use could cause harm."

“There is an urgent need to establish principles, guidelines and safeguards for the use of AI in healthcare,” he added.

Beginning with an extensive review of the existing literature on AI guidelines, frameworks, and principles – some 60 publications – the authors named three areas of inconsistency: inclusive collaboration, ongoing safety assessment, and efficiency and environmental protection.

"These issues are of particular importance because they highlight the need for clear, intentional action between and among the various stakeholders comprising the interstitial, or connective, tissue that helps unify the system in pursuit of a common vision," they write.

Their commentary also identifies additional risks of using AI in healthcare, including misdiagnosis, overuse of resources, privacy violations, workforce displacement, and a "lack of attention due to too much reliance on AI."

The framework's 10 code principles and six code commitments are meant to ensure that AI best practices maximize human health while minimizing potential risks, the academy said, noting that they serve as "ultimate guides" supporting large-scale organizational improvement.

"Health and healthcare organizations that align their vision and operations with these 10 principles will help drive alignment, performance and continuous improvement across the system, which is critical in the face of today's challenges and opportunities," the academy said.

Michael McGinnis, chief executive officer of the National Academy of Medicine, added: "This new framework puts us on the path to using AI safely, effectively and ethically, so that its transformative potential can be put to use in medicine and health."

Peter Lee, president of Microsoft Research and a member of the academy’s steering committee, noted that the academy invites public comment (through May 1) to refine the framework and accelerate AI integration in health care.

"Advances such as these are critical in overcoming the barriers we face in American health care today, ensuring a healthier tomorrow for everybody," Lee said.

In addition to input from stakeholders, the academy said it will convene key contributors into working groups and test the framework in case studies. The academy will also consult with individuals, patient advocates, health systems, product development partners and key stakeholders – including government agencies – before launching the final code of conduct for AI in healthcare.

THE LARGER TREND

Last year, the AI Healthcare Alliance developed a blueprint for AI that takes a patient-centered approach to addressing barriers to trust and other AI challenges, which helped inform the academy's AI Code of Conduct.

It builds on the White House AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework.

"Transparency and trust in AI tools that will influence medical decisions are paramount for patients and clinicians."

While most healthcare leaders agree that trust is a key driver for improving healthcare delivery and patient outcomes with AI, how healthcare systems should put ethical AI into practice remains terrain rife with unanswered questions.

"We don't have a scalable plan as a country yet for how we're going to support critical access hospitals or [federally qualified health centers] or health systems that have less resources, don't have the ability to set up these governance committees or these very fancy dashboards that will track model drift and performance," he told Healthcare IT News last month.

ON THE RECORD

"The new draft code of conduct framework is an important step toward creating a path to safely reap the benefits of improved health outcomes and medical breakthroughs possible through the responsible use of AI," Dzau said in the National Academy of Medicine announcement.

Andrea Fox is a senior editor at Healthcare IT News.
Email: [email protected]

Healthcare IT News is a publication of HIMSS Media.
