Health

Dana-Farber Cancer Institute Shares Lessons Learned on Safe Use of LLMs



The renowned Dana-Farber Cancer Institute has built a secure and private discovery environment to evaluate, test, and deploy large language models for non-clinical applications such as basic and clinical research and operations.

The provider organization overcame governance, ethical, regulatory, and technical challenges and deployed a secure API to enable developers to embed AI into their software applications. The organization also trained its workforce to use LLMs properly and safely, retrains and upskills staff as needed, and works to increase adoption.

Renato Umeton is the executive director of AI and data science services at Dana-Farber Cancer Institute. He holds a PhD in mathematics and computer science. Healthcare IT News spoke with Umeton to discuss his AI work and preview his case study session on the topic at the HIMSS Healthcare AI Forum, scheduled for September 5-6 in Boston. The session will focus on mitigating the risks of LLMs in healthcare.

Q. What are some of the biggest opportunities and challenges facing large language models in healthcare today?

A. The focus of the session is the private, secure, and HIPAA-compliant deployment of large language models in healthcare, particularly for the Dana-Farber Cancer Institute workforce. The main aim is to discuss the challenges and lessons learned in integrating these advanced AI tools into research and operational tasks, while explicitly excluding direct clinical care (e.g., treatment, diagnosis, driving clinical management, or informing it).

This is highly relevant in today’s healthcare landscape as AI is permeating more and more healthcare software products and everyone – from clinicians to patients and staff – can benefit from understanding how to harness its potential safely and effectively.

In the short term, we are looking at use cases that improve efficiency. In the long term, we hope that better data and AI will lead to improved operations and patient outcomes.

Our journey to deploy GPT-4 involved overcoming significant ethical, legal, regulatory, and technical challenges.

By sharing our experience and the framework we have developed for implementing AI, we hope to provide insights for other healthcare organizations considering similar implementations. This is especially relevant as the industry grapples with the twin imperatives of innovation and patient safety, making it critical to establish strong governance and guidelines for the use of AI.

Q. Can you share any examples of work being done at your organization?

A. The key technology discussed in our session is GPT4DFCI, a private, secure, HIPAA-compliant generative AI engine based on GPT-4 models. You can think of GPT-4o as the central layer of this application. Surrounding it are supporting AI models that analyze all the data flowing in and out of the models to filter out dangerous content, such as malicious language or copyrighted software code.

Outside of that is a layer that records everything our users do with this technology and allows for auditing. Finally, the outermost layer is a simple ChatGPT-like user interface, with links to training materials and a user support system, as well as a dedicated wiki page where users can read more.
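The layered design described above (a central model call, wrapped by content-screening layers, wrapped in turn by audit logging and a user interface) can be sketched in a few lines of Python. This is a minimal illustrative sketch, not Dana-Farber's actual code: all names (`gateway`, `call_model`, `is_unsafe`, the keyword blocklist) are hypothetical stand-ins for the real model endpoint, safety classifiers, and audit infrastructure.

```python
# Hypothetical sketch of a layered LLM gateway: filter the input,
# call the central model, filter the output, and audit every request.
AUDIT_LOG = []  # stand-in for the layer that records all user activity

# Placeholder keyword filter; the real system would use AI-based classifiers.
BLOCKLIST = {"malicious", "copyrighted"}

def is_unsafe(text: str) -> bool:
    """Stand-in for the models that screen data flowing in and out."""
    return any(term in text.lower() for term in BLOCKLIST)

def call_model(prompt: str) -> str:
    """Stand-in for the central GPT-4-class model call."""
    return f"response to: {prompt}"

def gateway(user: str, prompt: str) -> str:
    """Outer layer: screen input, call model, screen output, log everything."""
    if is_unsafe(prompt):
        AUDIT_LOG.append({"user": user, "prompt": prompt, "result": "blocked-input"})
        return "Request blocked by content filter."
    reply = call_model(prompt)
    if is_unsafe(reply):
        AUDIT_LOG.append({"user": user, "prompt": prompt, "result": "blocked-output"})
        return "Response blocked by content filter."
    AUDIT_LOG.append({"user": user, "prompt": prompt, "result": "ok"})
    return reply
```

The key design point is that the model itself is never exposed directly: every request and response passes through the filtering and auditing layers, so unsafe content is stopped at the boundary and every interaction remains reviewable.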

The technology is being used to support a variety of non-clinical tasks, such as extracting and searching for information in notes, reports, and other documents, as well as automating repetitive tasks and streamlining administrative documentation.

Q. What do you hope seminar attendees will learn and be able to apply back at their own organizations?

A. First, we hope attendees will understand the importance of establishing a comprehensive AI governance framework for the careful deployment of AI technologies in healthcare. This includes establishing a multidisciplinary governance committee, such as our AI Governance Committee, to oversee deployment, address ethical concerns, and ensure compliance with evolving regulations.

By engaging multiple stakeholders, including legal, clinical, research, technical, and bioethical experts, as well as patients, organizations can create policies that balance innovation with patient safety and data privacy.

Second, we hope attendees will recognize the value of implementing AI technology in a phased and controlled manner. Our experience with GPT4DFCI highlights the potential benefits of limiting clinical use of AI to IRB-approved clinical trials and institute-approved pilot programs.

This approach allows for iterative improvement based on lessons learned from controlled studies and helps identify and address potential problems early on. For non-clinical use cases, there is significant value in providing comprehensive training and support for users to learn from each other to use the technology effectively and responsibly.

By adopting a cautious and phased AI strategy, we believe other organizations can maximize the benefits of AI while minimizing the associated risks.

Attend this session at the HIMSS Healthcare AI Forum, scheduled for September 5-6 in Boston. Learn more and sign up.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: [email protected]
Healthcare IT News is a publication of HIMSS Media.
