
Three steps healthcare organizations can take to use generative AI responsibly



The healthcare lead at research and consulting giant Accenture lays out how to get proprietary data ready, establish the right controls and harmonize people with the tech.

Many healthcare organizations are onboarding generative AI at a fast and furious pace. Generative AI is the technology behind the enormously popular ChatGPT application.

While it may seem like a miracle technology to many, it is by no means perfect. In fact, it can even have hallucinations (known to us humans as mistakes).

But generative AI can be used in healthcare today – it just needs to be used responsibly.

Rich Birhanzel is Accenture's healthcare lead and knows quite a bit about artificial intelligence. We interviewed him to glean his insights on responsibly using generative AI, which he says involves three things: getting your proprietary data ready, establishing the right controls and harmonizing people with the technology.

Q. You are advising your clients on three key things as they’re starting to think about implementing generative AI in a responsible way. First, get your proprietary data ready. Please elaborate on this.

A. Large language models behind generative AI can process massive data sets, which allows them to potentially “know” everything an organization has ever known. Anything conveyed through language, including applications, systems, documents, emails, chats, video and audio, can be used to drive next-level innovation, optimization and reinvention.

At this point, most organizations are starting to experiment by consuming “off the shelf” foundation models. The biggest value will come when organizations customize or fine-tune models using their own data, allowing them to address their unique needs.

However, customizing foundation models will require access to domain-specific organizational data, semantics, knowledge and methodologies. While successful deployments of machine learning and AI have always been tightly interwoven with the quality of the underlying data, the vast scale of data ingested by large language models sets an even higher standard for an organization's data foundations.
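To make that concrete, here is a minimal sketch of what fine-tuning an off-the-shelf foundation model on an organization's own text could look like, using the open-source Hugging Face libraries. The base model (`distilgpt2`), the `curated_notes.jsonl` file of de-identified organizational text and the training settings are illustrative assumptions for this article, not a method recommended in the interview.

```python
# Minimal causal-LM fine-tuning sketch using Hugging Face transformers/datasets.
# Assumes a small, curated file of de-identified organizational text.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "distilgpt2"          # stand-in base model for illustration
DATA_FILE = "curated_notes.jsonl"  # hypothetical curated, de-identified corpus

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Each JSON line is expected to look like {"text": "..."}.
dataset = load_dataset("json", data_files={"train": DATA_FILE})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```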

Foundation models need vast amounts of curated data to learn, and that makes solving the data challenge an urgent priority for every organization. Healthcare organizations, especially, will need a strategic and disciplined approach to acquiring, growing, refining, safeguarding and deploying data.

This challenge is compounded by the sensitivity of personally identifiable information (PII) and protected health information (PHI) in potential training data, and by the need to eliminate bias in the datasets that are curated to fine-tune these models. Additionally, while progress in interoperability regulations (for example, the 21st Century Cures Act and the CMS Interoperability and Patient Access rule) has moved the needle on healthcare data standards, healthcare continues to lag other industries in terms of the availability of structured, high-quality data.

That strategic, disciplined approach can be achieved with a modern enterprise data platform built on the cloud with a trusted, reusable set of data products.
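As one small illustration of the data-readiness point, the sketch below shows a toy de-identification step applied to clinical notes before they are added to a fine-tuning corpus. The patterns, the `redact_phi` helper and the record format are assumptions made for this example; a production de-identification pipeline would rely on validated tooling and expert review rather than a handful of regular expressions.

```python
import re

# Illustrative PHI/PII patterns only; real de-identification needs
# validated tooling (NLP-based entity detection, expert review).
PHI_PATTERNS = {
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_phi(note: str) -> str:
    """Replace matched identifiers with typed placeholders such as [SSN]."""
    for label, pattern in PHI_PATTERNS.items():
        note = pattern.sub(f"[{label}]", note)
    return note

def build_training_record(note: str) -> dict:
    """Wrap a redacted note in the record format a fine-tuning job might expect."""
    return {"text": redact_phi(note)}

if __name__ == "__main__":
    sample = "Pt seen 03/14/2023, MRN: 00482913, callback 555-867-5309."
    print(build_training_record(sample))
    # {'text': 'Pt seen [DATE], [MRN], callback [PHONE].'}
```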

Q. Your second piece of advice is to establish the right controls. What do you mean by this?

A. The rapid adoption of generative AI brings fresh urgency to the need for healthcare organizations to define, develop and articulate a responsible AI mission and principles. At the same time, they must establish a transparent governance structure that builds confidence and trust in AI technologies.

It's critical to embed responsible AI approaches throughout the AI lifecycle, starting with controls for assessing the potential risk of generative AI at the design stage.

Responsible AI principles should be led from the top and translated into an effective governance structure for risk management and compliance, both with organizational principles and policies and applicable laws and regulations.

That includes strengthening compliance with current laws and regulations while monitoring future ones, developing policies to mitigate risk, and operationalizing those policies through a risk management framework with regular reporting and monitoring.

To be responsible by design, organizations need to move from a reactive compliance strategy to the proactive development of mature responsible AI capabilities, including principles and governance; risk, policy and control; technology and enablers; and culture and training. It’s important to focus on training and awareness first, and then expand to execution and compliance.

Q. And your third bit of advice is to harmonize people with the technology. How so? And why is this important for generative AI?

A. Generative AI applications in healthcare will depend on people to guide them based on human experience, perception and expertise. Processes will need to be refined to embrace generative AI capabilities and elevate the role of the human worker.

Healthcare organizations need training programs to help workers – from clinicians to administrative staff – keep up with advances in AI, which will shift their work toward more cognitively complex, judgment-based tasks. For example, doctors who interpret health data will need more technical knowledge of how AI models work to have confidence in using them as a "copilot."

In areas of healthcare where generative AI shows most promise, organizations should start by decomposing existing jobs into underlying bundles of tasks. Then assess the extent to which generative AI might affect each task – fully automated, augmented or unaffected.
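As an illustration of that decomposition, a team might keep a simple task inventory like the sketch below. The roles, tasks and impact ratings here are assumptions made for the example, not assessments from the interview.

```python
from dataclasses import dataclass
from enum import Enum

class GenAIImpact(Enum):
    AUTOMATED = "fully automated"
    AUGMENTED = "augmented"
    UNAFFECTED = "unaffected"

@dataclass
class Task:
    job: str
    name: str
    impact: GenAIImpact
    rationale: str

# Hypothetical decomposition of a few roles into tasks, each rated for impact.
tasks = [
    Task("Nurse", "Draft visit summary", GenAIImpact.AUGMENTED,
         "Model drafts; clinician reviews and signs off."),
    Task("Nurse", "Administer medication", GenAIImpact.UNAFFECTED,
         "Hands-on clinical work."),
    Task("Coder", "Suggest billing codes from notes", GenAIImpact.AUTOMATED,
         "High-volume, pattern-based text task with human audit."),
]

# Summarize where generative AI is expected to change the work.
for impact in GenAIImpact:
    affected = [t.name for t in tasks if t.impact is impact]
    print(f"{impact.value}: {affected}")
```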

For example, we’re already seeing how generative AI can reduce the burden of healthcare documentation on human workers. Radically rethinking how work gets done and helping people keep up with technology-driven change will be two of the most important factors in realizing the full potential of this step change in AI technology.

Q. Where do you see generative AI in healthcare in five years?

A. This is a pivotal moment. For several years, AI and foundation models have been quietly revolutionizing the way we think about machine intelligence. We’re at the start of a very exciting era that will fundamentally transform the way information is accessed, how clinician and patient needs are served, and how healthcare organizations are run.

Accenture research shows nearly all healthcare provider executives (98%) agree advancements in generative AI are ushering in a new era of enterprise intelligence. They’re right to be optimistic about the potential of generative AI to radically change how healthcare is delivered, improving access, experience and outcomes.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: [email protected]
Healthcare IT News is a HIMSS Media publication.
