
How AI and FHIR can help reduce sepsis mortality rates



While 80-85% of sepsis cases present within the first 48 hours of admission and carry relatively low mortality (5-10%), the 15-20% of cases that present later carry much higher mortality (15-30%).

To better – and earlier – identify sepsis cases not present on admission, a large safety-net hospital created an end-to-end early sepsis prediction and response workflow in the inpatient setting. First, a machine learning model was built to predict a patient's risk of becoming septic in real time.

Next, the model was baked into clinical workflows through FHIR APIs to make it actionable at the point of care. The model queries the EHR every 15 minutes and alerts care providers when a patient's risk exceeds a threshold, which can be tailored to local populations.
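
For readers curious what such a loop looks like in code, here is a minimal Python sketch – a hedged illustration only, since the article doesn't describe Parkland's implementation. The FHIR endpoint, risk threshold, toy scoring rule and alert hook below are all hypothetical stand-ins, not the production system:

```python
import time

import requests

FHIR_BASE = "https://ehr.example.org/fhir"   # hypothetical FHIR endpoint
RISK_THRESHOLD = 0.7                         # tunable per local population
POLL_SECONDS = 15 * 60                       # the article's 15-minute cycle


def fetch_latest_vitals(patient_id: str) -> dict:
    """Pull recent vital-sign Observations for one patient as a FHIR Bundle."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={
            "patient": patient_id,
            "category": "vital-signs",
            "_sort": "-date",
            "_count": 50,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def score_sepsis_risk(bundle: dict) -> float:
    """Stand-in for the trained model. The real system would extract model
    features from the Bundle and run inference; this toy rule only flags
    tachycardia (LOINC 8867-4 = heart rate) and is NOT a clinical model."""
    for entry in bundle.get("entry", []):
        coding = entry["resource"].get("code", {}).get("coding", [{}])[0]
        if coding.get("code") == "8867-4":
            heart_rate = entry["resource"]["valueQuantity"]["value"]
            return 0.9 if heart_rate > 120 else 0.1
    return 0.0


def alert_care_team(patient_id: str, risk: float) -> None:
    """Stand-in for the site's alerting mechanism (EHR inbox, pager, etc.)."""
    print(f"ALERT: patient {patient_id} sepsis risk {risk:.2f}")


def run(patient_ids: list[str]) -> None:
    """Poll every 15 minutes and alert when the threshold is crossed."""
    while True:
        for pid in patient_ids:
            risk = score_sepsis_risk(fetch_latest_vitals(pid))
            if risk >= RISK_THRESHOLD:
                alert_care_team(pid, risk)
        time.sleep(POLL_SECONDS)
```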

Finally, an EHR-integrated decision support app called ISLET was added so clinicians can easily view and understand the model's output, improving actionability. Predicting, alerting, visualizing the root causes and acting on the case completes the loop. This full workflow has been running every 15 minutes for thousands of patients over the past year.

Yusuf Tamer is principal data and applied scientist at the Parkland Center for Clinical Innovation. He will tell this story in great detail at HIMSS24 in an educational session titled, “Closing the Loop in Sepsis Prediction With ML and ISLET Visualization.”

We interviewed Tamer to get a sneak preview of the session prior to the big show next month in Orlando.

Q. What is the overarching focus of your session? Why is it important to health IT leaders at hospitals and health systems today?

A. Sepsis is a severe condition triggered by an infection that can lead to multiple organ failure. It’s a medical emergency that requires swift identification and treatment. The primary focus of my session is to discuss the role of artificial intelligence in the early prediction of sepsis within hospital settings.

AI systems in healthcare are increasingly complementing healthcare providers by offering them reasons for suspicion. These suspicions are acted upon when the providers trust the reasons given to them. This trust is built on two key pillars: timeliness and explainability.

Timeliness is crucial in sepsis detection. The sooner sepsis is identified, the better the patient’s chances of recovery. If an AI system identifies sepsis and alerts the provider after they have already initiated treatment, it diminishes the system’s value. It could disrupt the clinical workflow and erode trust in the AI system. Therefore, the AI system must be designed to provide timely alerts that can genuinely assist in the treatment process.

Explainability is another critical aspect. In a patient care setting, every action taken by a provider is subject to auditing. While AI systems are not the final decision makers, they can significantly influence decision making.

Therefore, the decisions made by AI systems or machine learning models must be explainable. This transparency is crucial for auditing purposes and ensures accountability in AI-assisted healthcare.

Furthermore, the explainability of AI systems is not just important for auditing, but also for building trust with healthcare providers. If the AI system can provide clear, understandable reasons for its predictions, healthcare providers are more likely to trust and act on its recommendations.

The session will provide valuable insights into how AI can enhance the early prediction of sepsis, emphasizing the importance of timeliness and explainability in building trust and improving patient outcomes.

This topic is of utmost importance to health IT leaders as it touches on the intersection of technology and patient care, highlighting how AI can be leveraged to improve healthcare delivery.

Q. What is one main learning you would like session attendees to walk away with? And how is this vital to healthcare today?

A. The primary takeaway I want attendees to have from this session is that machine learning models do not have to be “black boxes.” While performance is a critical factor, an explainable model that providers trust will be used more than an opaque one of comparable performance. This is a crucial understanding in the context of healthcare and health IT today.

Machine learning models are often perceived as complex and opaque, making it difficult for healthcare providers to trust and use them. However, it’s important to understand that these models can be designed to be transparent and explainable.

A model that provides clear, understandable reasons for its predictions can build trust among healthcare providers, leading to increased utilization even if its performance is only comparable to less transparent models.
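
To make that concrete, here is a minimal sketch of one simple form of explainability: for a linear model, the score decomposes exactly into per-feature contributions. The session doesn't name a specific technique, and the feature names, data and model below are invented purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented example features; a real sepsis model would use many more signals.
feature_names = ["heart_rate", "temperature", "wbc_count", "lactate"]

# Synthetic training data, for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X @ np.array([0.8, 0.5, 0.6, 1.2]) + rng.normal(size=500) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)


def explain(x: np.ndarray) -> list[tuple[str, float]]:
    """Per-feature contribution to the log-odds for one patient vector.
    For a linear model, coefficient * feature value decomposes the score
    exactly, which makes each prediction auditable."""
    contributions = model.coef_[0] * x
    return sorted(zip(feature_names, contributions),
                  key=lambda kv: abs(kv[1]), reverse=True)


patient = X[0]
print(f"risk = {model.predict_proba(patient.reshape(1, -1))[0, 1]:.2f}")
for name, contribution in explain(patient):
    print(f"{name:12s} {contribution:+.3f}")
```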

Moreover, visual representation of data can significantly enhance the value provided by these models. A picture is indeed worth a thousand words. A graphic that illustrates how a patient’s vitals or lab values have changed over time can provide more value than a simple numeric output.

It can help healthcare providers better understand the patient’s condition and the model’s predictions, leading to more informed decision making.
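
As a toy illustration of that point, the short matplotlib sketch below (with invented readings) plots a heart-rate trend against a marked threshold, conveying trajectory in a way a single numeric score cannot:

```python
import matplotlib.pyplot as plt
import numpy as np

# Invented hourly heart-rate readings for one admission, for illustration.
hours = np.arange(0, 48)
heart_rate = 80 + 0.6 * hours + np.random.default_rng(1).normal(0, 4, 48)

fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(hours, heart_rate, marker="o", markersize=3, label="Heart rate")
ax.axhline(110, color="red", linestyle="--", label="Concern threshold")
ax.set_xlabel("Hours since admission")
ax.set_ylabel("Beats per minute")
ax.set_title("48-hour heart-rate trend (synthetic data)")
ax.legend()
fig.tight_layout()
plt.show()
```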

In this session, we will discuss how we created a report page with graphics about our machine learning model and how we integrated it into the EHR. This integration allows healthcare providers to access and understand the model’s predictions directly within the patient’s EHR, enhancing the usability of the model.

Furthermore, we will explore how Fast Healthcare Interoperability Resources (FHIR) APIs are opening up new, fast and interactive ways to visualize machine learning insights. These APIs allow machine learning models to integrate seamlessly with existing healthcare IT systems, enabling real-time, interactive visualization of model predictions.
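
As a rough sketch of how such retrieval can work, a standard FHIR Observation search returns the timestamped series a decision support app could chart. The endpoint and patient ID below are placeholders; 8867-4 is the LOINC code for heart rate:

```python
import requests

FHIR_BASE = "https://ehr.example.org/fhir"   # placeholder endpoint
LOINC_HEART_RATE = "http://loinc.org|8867-4"

resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={
        "patient": "example-patient-id",   # placeholder patient
        "code": LOINC_HEART_RATE,
        "_sort": "date",
    },
    timeout=30,
)
resp.raise_for_status()

# Each Bundle entry holds one timestamped reading, ready to plot.
points = [
    (entry["resource"]["effectiveDateTime"],
     entry["resource"]["valueQuantity"]["value"])
    for entry in resp.json().get("entry", [])
]
print(points[:5])
```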

The session aims to demystify machine learning models in healthcare and highlight the importance of explainability and visualization in building trust and enhancing the usability of these models. This understanding is vital for health IT leaders as they navigate the rapidly evolving landscape of AI in healthcare.

Q. What is one more learning you would like session attendees to walk away with? And how is this vital to healthcare and/or health IT today?

A. The importance of continuous feedback from active users – in this case, healthcare providers – in enhancing the value of AI systems in healthcare. This is a crucial aspect of healthcare and health IT today.

AI systems are not standalone entities; they are part of a larger ecosystem that includes healthcare providers, patients and other stakeholders. Therefore, the development and refinement of these systems should be a collaborative process.

When healthcare providers are included in the development of machine learning solutions, they gain a better understanding of how these systems work. This understanding fosters trust, which in turn enhances their usage of the tool in their decision-making process.

Moreover, healthcare providers often face alert fatigue due to the high number of alerts they receive from various systems. This can lead to important alerts being overlooked, potentially impacting patient care.

Therefore, it’s crucial to get providers’ opinions on what to alert and when to wait before alerting. This feedback can help in designing more effective alert systems, alleviating alert fatigue, and ultimately improving patient care.

Furthermore, continuous feedback from healthcare providers can help in identifying areas of improvement for the AI system. Providers, being the end users of these systems, can provide valuable insights into the system’s performance, usability and relevance in the clinical setting. This feedback can be used to refine the system, making it more effective and user-friendly.

The session aims to highlight the importance of user feedback in the development and refinement of AI systems in healthcare. This understanding is vital for health IT leaders as they strive to integrate AI in healthcare in a way that is effective, user-friendly and beneficial to patient care.

This collaborative approach to AI development not only enhances the value of the AI system but also fosters trust and understanding among its users.

The session, “Closing the Loop in Sepsis Prediction With ML and ISLET Visualization,” is scheduled for March 12, noon-1 p.m., in room W304A at HIMSS24 in Orlando. Learn more and register.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: [email protected]
Healthcare IT News is a HIMSS Media publication.
