GPT-4 and LLMs Like It Hold Promise for Healthcare, But Proceed With Caution

Regulators are wary of AI like ChatGPT, but the workforce loves it.

Healthcare sits squarely at the intersection of that push and pull over GPT. And as with other medical technology conferences, there will be plenty of GPT traction at HIMSS23 next week.

Manny Krakaris, CEO of Augmedix, says large language models are in the company’s DNA for automated medical documentation and data services. Augmedix has focused on taming note bloat and, to that end, runs fine-tuned LLMs in its ambient automation platform for healthcare providers and health systems.

We spoke with Krakaris this week about healthcare’s need for AI that helps doctors streamline their processes, and about the fascination with OpenAI’s GPT models.

He says people are trying out ChatGPT because OpenAI has made it so easy to work with.

“Every company today will claim that they use LLMs. And it’s really easy to use them. You get the license, you get the API, and you can integrate it into whatever process you use. So they’re really easy to use, but what you do with them will vary a lot from company to company.”
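To make that concrete: a bare-bones integration of the kind Krakaris describes might look like the sketch below, assuming an OpenAI-style chat API. The model name, prompt wording and client usage here are illustrative assumptions, not Augmedix’s implementation.

```python
# A minimal sketch of wiring a licensed LLM API into an existing workflow.
# Assumptions: the pre-1.0 openai Python package (circa 2023) and a
# provisioned API key; none of this is Augmedix's actual code.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

def draft_visit_note(transcript: str) -> str:
    """Ask the model to draft a concise visit note from a raw transcript."""
    response = openai.ChatCompletion.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a medical scribe. Draft a concise visit note."},
            {"role": "user", "content": transcript},
        ],
        temperature=0,  # keep the output conservative and repeatable
    )
    return response.choices[0].message.content
```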

The push and pull of natural language processing

Simply put, having natural language processing capabilities at your fingertips – right in the software programs and platforms that innovators, providers and other workers use every day – can speed things up. The potential is enormous.

Although ChatGPT may have passed the US Medical Licensing Exam, one emergency room physician found its results in diagnosing patients disturbing. The ER doctor said he fears people are using ChatGPT for medical self-diagnosis instead of seeing a doctor.

The problem is that GPT doesn’t provide perfect accuracy.

“Large language models have a tendency to hallucinate,” says Krakaris.

Or they might miss something entirely, because a model can only draw on the information it was given. In the case of ChatGPT, that data only runs through 2021.

The good and the bad

AI like GPT has the potential to help doctors consume and summarize publicly available, scientifically validated data.

“What they are really good at is providing color and context to medical issues,” says Krakaris.

“They are very precise in answering specific prompts or questions. They are also widely applicable, so they can cover a wide range of topics.”

But the LLMs behind GPT and other models like it are not an informational panacea for healthcare, he said.

“The main weakness of LLMs is that they are highly dependent on transcripts. But a good medical note requires input not only from transcripts, but also from electronic health records and from presets,” Krakaris said.

For example, “at one point in the conversation, a patient might say, ‘I take my medication regularly,’ and then later say something completely contradictory, like ‘I don’t take my medication regularly’ – or they may be talking about two different symptoms or complaints that the model fails to tease apart.”

The LLM’s output may also contain information that has no place in the medical note, which adds to the bloat.

“You have a lot of extraneous information showing up in the finished product. It really doesn’t help doctors when they’re looking through patient records for follow-up visits, or when other doctors are conducting a follow-up visit with that particular patient,” he explained.

The future of GPT

From a compliance perspective, using GPT integrations for medical notes raises data security questions, such as: where is the data stored?

A few weeks ago, it was reported that a bug temporarily exposed users’ AI chat histories to other users.

But it’s hallucination by newer LLMs that really worries Krakaris.

“You don’t want the model to extrapolate, because that can lead to really bad results in terms of what shows up and becomes a permanent fixture in the patient’s electronic health record. So you have to put up guardrails against hallucination, if you will,” he said.

When you’re using an LLM, the output will depend on asking the right prompts or questions in the right sequence, he explains.

“The tighter your prompt or query, the narrower the LLM’s scope. It doesn’t have to search or compile as much information, and so the output from the LLM will be that much more accurate and relevant to the specific prompt you gave it,” he said.

He said Augmedix has built its AI on structured data, with hundreds of data organization models based on complaints or conditions. These models serve as a prompt map for generating appropriate responses when compiling medical notes.
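As a hypothetical illustration of that prompt-map idea – the conditions, templates and wording below are invented for this article, not Augmedix’s actual models – narrow, condition-keyed templates keep each query tightly scoped, per Krakaris’s point that tighter prompts yield more relevant output:

```python
# Invented example: condition-keyed prompt templates that narrow the LLM's
# scope so it extracts only what the note for that condition needs.
PROMPT_TEMPLATES = {
    "hypertension": (
        "From the transcript below, extract only: blood pressure readings, "
        "current antihypertensive medications, and stated adherence.\n\n{transcript}"
    ),
    "diabetes": (
        "From the transcript below, extract only: the most recent A1c mentioned, "
        "glucose monitoring habits, and any medication changes.\n\n{transcript}"
    ),
}

def build_prompt(condition: str, transcript: str) -> str:
    """Select the narrow, condition-specific template for this encounter."""
    return PROMPT_TEMPLATES[condition].format(transcript=transcript)
```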

Then content validation is necessary, because the AI technology isn’t perfect, Krakaris said.

“You have to understand those limitations and incorporate them into your process, however that plays out, to deliver something useful and valuable to your constituents.”
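One crude way to picture that validation step: flag any sentence in the generated note that is poorly supported by the source transcript, and route it to a human reviewer before the note is committed to the record. The heuristic below is a deliberately simple sketch with an arbitrary threshold, not a production guardrail.

```python
# Deliberately simple sketch: flag note sentences whose vocabulary barely
# overlaps the transcript, as candidates for human review. The 0.5
# threshold and word-overlap measure are arbitrary assumptions.
import re

def flag_unsupported_sentences(note: str, transcript: str,
                               threshold: float = 0.5) -> list[str]:
    """Return sentences in the note that are weakly supported by the transcript."""
    transcript_words = set(re.findall(r"[a-z']+", transcript.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", note.strip()):
        words = re.findall(r"[a-z']+", sentence.lower())
        if words:
            support = sum(w in transcript_words for w in words) / len(words)
            if support < threshold:
                flagged.append(sentence)  # send to a human reviewer
    return flagged
```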

Augmedix will be at Booth 8531 at HIMSS23.

Andrea Fox is the senior editor of Healthcare IT News.
Email: [email protected]

Healthcare IT News is a publication of HIMSS Media.

Ty Vachon will provide more details during the HIMSS23 session “ML and AI Forum: Artificial Intelligence 2023 in Healthcare: The Good, the Bad, and the Hope.” It is scheduled for Monday, April 17, from 3-4 p.m. CT in room S100 B, South Building, 1st floor.
