
Healthcare must establish guardrails around AI to ensure transparency and safety



According to a MITRE-Harris survey on the patient experience, 4 out of 10 patients perceive implicit bias from their doctors. And beyond patients being sensitive to provider bias, AI tools and machine learning models themselves have been shown to exhibit racial bias.

On a related note, recent research shows that 60% of Americans would be uncomfortable with providers relying on AI for their healthcare. But between provider shortages, shrinking reimbursements and growing patient demand, providers may eventually have no choice but to turn to AI tools.

Healthcare IT News sat down with Jean-Claude Saghbini, an AI specialist and chief technology officer at Lumeris, a value-based care technology and services company, to discuss the concerns surrounding AI in healthcare – and what health IT leaders and clinicians can do about them.

Q. How can healthcare CIOs and other health IT leaders combat implicit bias in artificial intelligence as the popularity of AI systems explodes?

A. When talking about AI, we often use words like “training” and “machine learning.” That is because AI models are largely trained on human-generated data, and so they learn our human biases. These biases present a significant challenge in AI, and they are particularly relevant in healthcare, where patients’ health is at stake and where bias can further propagate inequities in care.

To combat this, health IT leaders need to better understand the AI models embedded in the solutions they are adopting. Perhaps even more important, before they deploy any new AI technology, leaders must be confident that the vendors delivering these solutions appreciate the harm that human bias in AI can cause, and have designed their models and tools appropriately to avoid it.

This can range from ensuring that upstream training data is unbiased and diverse to applying transformation methods to model outputs to compensate for unexplained biases in the training data.

At Lumeris, for example, we are taking a multi-pronged approach to combating bias in AI. First, we are actively studying and adjusting for the health disparities represented in the underlying data as part of our commitment to equity and equality in healthcare. This involves analyzing healthcare training data for demographic representation and adjusting our models to ensure they do not unfairly impact any particular population group.
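The kind of demographic audit described here can be illustrated with a short sketch. Everything below is hypothetical – the records, group labels and the 50% representation threshold are invented for illustration and do not reflect Lumeris's actual pipeline:

```python
from collections import Counter

# Illustrative patient records: "group" is a demographic attribute,
# "label" the true outcome and "pred" the model's prediction.
records = [
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 0, "pred": 0},
    {"group": "A", "label": 1, "pred": 1},
    {"group": "B", "label": 1, "pred": 0},
    {"group": "B", "label": 0, "pred": 0},
]

def representation(records):
    """Share of each demographic group in the training data."""
    counts = Counter(r["group"] for r in records)
    total = len(records)
    return {g: n / total for g, n in counts.items()}

def error_rate_by_group(records):
    """Misclassification rate per group -- large gaps flag potential bias."""
    errors, totals = Counter(), Counter()
    for r in records:
        totals[r["group"]] += 1
        errors[r["group"]] += int(r["label"] != r["pred"])
    return {g: errors[g] / totals[g] for g in totals}

rep = representation(records)
gaps = error_rate_by_group(records)
# A group that is under-represented AND has a higher error rate is a
# candidate for re-sampling the data or adjusting the model's outputs.
flagged = [g for g in gaps if rep[g] < 0.5 and gaps[g] > min(gaps.values())]
```

In this toy data, group B is both under-represented (40% of records) and misclassified more often (50% error rate versus 0%), so it would be flagged for review.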

Second, we are training our models on more diverse datasets to ensure they are representative of the populations they serve. This includes using more comprehensive datasets that represent a broader range of patient demographics, health statuses and care settings.

Finally, we are including non-traditional healthcare features in our models, such as social determinants of health data, ensuring that predictive and risk-scoring models take each patient’s socioeconomic conditions into account. For example, two patients with very similar clinical presentations may be directed toward different interventions for optimal outcomes once SDOH data is incorporated into the AI models.
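That SDOH example – two clinically similar patients routed to different interventions – can be sketched roughly as follows. The feature weights, threshold and intervention names are invented for illustration; a real model would learn its weights from data rather than hard-code them:

```python
# Hypothetical weights: clinical features plus SDOH features.
# A real risk model would learn these from training data.
CLINICAL_WEIGHTS = {"a1c": 0.5, "bmi": 0.03}
SDOH_WEIGHTS = {"transport_access": -0.4, "food_insecurity": 0.6}
THRESHOLD = 5.0  # illustrative cutoff for high-touch intervention

def risk_score(patient):
    """Risk score combining clinical and SDOH features."""
    score = sum(CLINICAL_WEIGHTS[k] * patient[k] for k in CLINICAL_WEIGHTS)
    score += sum(SDOH_WEIGHTS[k] * patient.get(k, 0) for k in SDOH_WEIGHTS)
    return score

def recommend(patient):
    """Route to a higher-touch intervention when risk crosses the cutoff."""
    if risk_score(patient) > THRESHOLD:
        return "care-manager outreach"
    return "standard follow-up"

# Two patients with identical clinical presentations but different
# socioeconomic circumstances...
p1 = {"a1c": 8.0, "bmi": 30, "transport_access": 1, "food_insecurity": 0}
p2 = {"a1c": 8.0, "bmi": 30, "transport_access": 0, "food_insecurity": 1}
```

Here the clinically identical patients land on opposite sides of the cutoff: the food-insecure patient without reliable transport scores higher and is routed to care-manager outreach.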

We are also taking a transparent approach to the development and deployment of our AI models, incorporating feedback from users and applying human oversight to ensure that our AI output is consistent with clinical best practices.

Combating implicit bias in AI requires a holistic approach that considers the entire AI development lifecycle – it cannot be an afterthought. That is the key to truly promoting equity and equality in healthcare AI.

Q. How do health systems strike a balance between patients who don’t want their doctors to depend on AI and overwhelmed doctors looking for help from automation?

A. Let us first consider two facts. Fact #1 is that between waking up in the morning and meeting for an office visit, it is very likely that both the patient and the doctor have already used AI multiple times – asking Alexa for the weather, relying on a Nest device to control the temperature, using Google Maps for optimal directions, and so on. AI already contributes to many aspects of our lives and has become inescapable.

Fact #2 is that we are heading toward a shortage of 10 million clinicians worldwide by 2030, according to the World Health Organization. Using AI to expand the capacity of clinicians and reduce the disastrous impact of this shortage is no longer optional.

I completely understand that patients worry, and rightfully so. But I encourage us to look at AI as being used in the care of patients, rather than patients being “treated” by AI tools, which I believe is what most people actually worry about.

This scenario has been hyped a lot lately, but the fact of the matter is that AI tools are not going to replace doctors anytime soon. With newer technologies such as generative AI, we are in an exciting position to provide much-needed scale for the benefit of both patients and physicians. Human expertise and experience remain essential components of healthcare.

Striking the balance between patients who don’t want AI treatment and overwhelmed doctors who turn to AI systems for help is a delicate matter. Patients may worry that their care is being delegated to machines, while doctors may feel overwhelmed by the volume of data they need to review to make informed decisions.

The key is education. Many headlines in the news and online are crafted to catastrophize and attract clicks. By setting aside these misleading articles and focusing on real-life experiences and AI use cases in healthcare, patients can see how AI complements their doctors’ knowledge, accelerates access to information, and detects patterns hidden in data that can be missed by even the best physicians.

Furthermore, by focusing on facts rather than headlines, we can explain that AI is just one tool – one that, properly integrated into a workflow, can amplify the physician’s ability to provide optimal care while keeping the physician in the driver’s seat of the interaction and responsible for the patient. AI is, and can continue to be, a valuable tool in healthcare, providing physicians with insights and recommendations that improve patient outcomes and reduce costs.

Personally, I believe the best way to strike this balance for patients and physicians is to ensure that AI is used as a complementary tool to support clinical decision-making, rather than as a replacement for human expertise.

For example, Lumeris technology, powered by AI among other methods, is designed to provide physicians with meaningful insights and actionable recommendations they can use to guide their care decisions, while empowering them to make the final call.

Additionally, we believe it is essential to involve patients in the conversation around the development and implementation of AI systems, ensuring their interests and preferences are taken into account. Patients may be more willing to accept the use of AI if they understand the benefits it can bring to their care.

Ultimately, it’s important to remember that AI is not a silver bullet for healthcare, but a tool that can help doctors make better decisions while scaling and transforming care processes dramatically, especially with some of the newer foundation models such as GPT.

By ensuring AI is used appropriately and transparently, and by involving patients in the process, healthcare organizations can strike a balance between patient preferences and the needs of overwhelmed doctors.

Q. What should vendor executives and clinicians be on the lookout for as more and more AI technologies evolve?

A. According to the latest AI Index Report released by Stanford, the use of AI in health IT is indeed getting a lot of attention and is a top investment category, but healthcare leaders face a dilemma in how to approach that investment.

Excitement about the possibilities is pushing us to move fast, but the novelty – and sometimes the black-box nature – of the technology is raising alarm bells and urging us to slow down and play it safe. Success depends on our ability to strike a balance between accelerating the adoption of new AI-based capabilities and ensuring that deployments are carried out with the highest levels of safety and security.

Artificial intelligence relies on high-quality data to provide accurate insights and recommendations. Provider organizations must ensure that the data used to train AI models is complete, accurate, and representative of the patient populations they serve.

They should also be vigilant in continuously monitoring the quality and integrity of their data to ensure the AI provides the most accurate and up-to-date information. This also applies to the use of large pretrained language models, where the goals of quality and integrity still hold, even when the approach to validation is novel.
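Continuous monitoring of this kind often takes the shape of automated checks that route failing records to human review. A minimal sketch follows; the required fields and the 30-day staleness window are illustrative assumptions, not any specific product's rules:

```python
from datetime import date, timedelta

# Illustrative data-quality rules: which fields must be present,
# and how old a record may be before it is considered stale.
REQUIRED_FIELDS = ("patient_id", "encounter_date", "diagnosis_code")
MAX_STALENESS = timedelta(days=30)

def completeness(record):
    """Fraction of required fields that are present and non-empty."""
    present = sum(1 for f in REQUIRED_FIELDS if record.get(f))
    return present / len(REQUIRED_FIELDS)

def is_fresh(record, today):
    """True when the record falls inside the staleness window."""
    return today - record["encounter_date"] <= MAX_STALENESS

def audit(records, today):
    """Return records that fail either check, for human review."""
    return [r for r in records
            if completeness(r) < 1.0 or not is_fresh(r, today)]

today = date(2024, 1, 31)
records = [
    {"patient_id": "p1", "encounter_date": date(2024, 1, 20),
     "diagnosis_code": "E11.9"},           # complete and fresh: passes
    {"patient_id": "p2", "encounter_date": date(2023, 11, 1),
     "diagnosis_code": ""},                # stale and incomplete: flagged
]
```

Running such checks on every refresh, rather than once at training time, is what turns a one-off data audit into the continuous monitoring described above.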

As I mentioned, bias in AI can have significant consequences in healthcare, including perpetuating health disparities and reducing the effectiveness of clinical decision-making. Provider organizations should be wary of AI models that do not adequately compensate for bias.

As AI becomes more prevalent in healthcare, it is important for provider organizations to be transparent about how they are using it. They must also ensure there is human oversight and accountability for the use of AI in patient care, to prevent errors or omissions from going unnoticed.

AI raises a range of ethical considerations in healthcare, including questions around privacy, data ownership and informed consent. Provider organizations should take these considerations into account and ensure that their use of AI, both directly and indirectly through vendors, is consistent with their ethical principles and values.

AI is here to stay and to grow, in healthcare and beyond, especially with new and exciting advances in generative AI and large language models. It is almost impossible to stop this evolution – and it would be unwise to try, because after several decades of rapid technology adoption in healthcare, we still lack solutions that reduce the burden on clinicians while delivering better care.

On the contrary, most technologies have added new tasks and extra work for providers. With AI, and more specifically with the advent of generative AI, we see a great opportunity to finally make meaningful progress toward this elusive goal.

However, for the reasons I have listed, we must establish guardrails for transparency, bias and safety. Interestingly, it is these very guardrails that will ensure a faster path to adoption, by helping us avoid setbacks that could trigger an overreaction against the adoption and use of AI.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: [email protected]
Healthcare IT News is a publication of HIMSS Media.
