Three wishes for the AI Genie: Healthcare providers speak up

In focus groups held by Oracle Health, providers laid out their needs for AI governance, regulation, and human-in-the-loop oversight.
By admin
May 1, 2024, 1:59 PM

Editor’s note: This is the first in a series of articles, powered by CHIME’s Digital Health Insights and sponsored by Oracle Health.

The healthcare industry is on the cusp of a transformative era, driven by artificial intelligence (AI). From automating administrative tasks to revolutionizing patient care, AI promises significant improvements in efficiency, accuracy, and ultimately, patient outcomes. However, navigating the vast landscape of AI solutions presents unique challenges for healthcare providers. Based on feedback from provider focus groups hosted by Oracle Health, here are three key wishes that, if granted, could pave the way for a smoother and more successful integration of AI in healthcare organizations.


Wish #1: Guardrails – Internal governance

The first wish is for comprehensive governance and policy. The focus groups conducted by Oracle revealed a strong desire for specific governance structures for AI and for data — AI is useless without quality data.

CHIME’s Digital Health Most Wired survey found only 40% of healthcare organizations have governance around AI.

“Do we have AI policies in place? No, not yet,” said one small provider in the Oracle focus group. “We don’t have a group that’s governing AI.”

Of course, there is no magical AI genie, so here are some considerations for providers facing the task of developing clear AI governance:

  • Identifying responsible parties: Teams or committees should be multidisciplinary, comprising clinicians, IT professionals, legal counsel, and ethicists.
  • Establishing approval processes: Creating a structured process for approving AI implementations, considering factors like potential impact, data security, and legal/regulatory compliance.
  • Defining oversight scope: Determining the level of human oversight required for different AI applications, ensuring patient safety and addressing potential biases.
  • Product and vendor selection: Identifying and selecting knowledgeable vendors and appropriate AI tools based on specific needs and risk profiles.
  • Implementation: Designating who oversees the implementation process to ensure proper integration with existing systems and workflows.
  • Monitoring and auditing: Continuously monitoring AI performance, identifying and mitigating potential biases, and ensuring data security and privacy.
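As a minimal illustration of the monitoring-and-auditing item above, the sketch below computes a model's accuracy per demographic subgroup and flags groups that lag the best-performing one. This is a hypothetical example, not an Oracle Health tool: the group names, sample records, and the 5-percentage-point disparity threshold are all invented for the sketch.

```python
# Hypothetical audit sketch: per-subgroup accuracy and disparity flags.
# Group names, records, and the max_gap threshold are invented for illustration.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (group, predicted, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def disparity_flags(accuracy_by_group, max_gap=0.05):
    """Flag any group trailing the best-performing group by more than max_gap."""
    best = max(accuracy_by_group.values())
    return {g: (best - acc) > max_gap for g, acc in accuracy_by_group.items()}

# Toy predictions for two demographic subgroups.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0),
]
acc = subgroup_accuracy(records)   # group_a: 0.75, group_b: 0.50
flags = disparity_flags(acc)       # group_b flagged (gap of 0.25 exceeds 0.05)
```

Running such a check on a recurring schedule, rather than once at deployment, is what turns vendor vetting into the continuous monitoring the checklist calls for.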

Access to AI tools, especially large language models (LLMs) and their training data, is an area where adaptive, intelligent governance is warranted, according to Pavan Jain, Vice President of Product Management at Oracle Health. Key elements of this approach include restricted, controlled access to LLM training datasets; regular monitoring and auditing of dataset use; and preventing deepfake identities from accessing digital assets. Read more in Jain’s blog post.

Further, governance to promote responsible AI should lay out requirements for vetting vendor partners, including a track record of accuracy and evidence of testing models across diverse demographics and situations.


Wish #2: Frameworks – Federal governance

As for regulatory guardrails, there are efforts throughout the federal government to establish legislation, regulations, standards, and guidelines for the use of artificial intelligence, especially in healthcare.

The FDA is focused on AI in medical products (biological products, drugs, and medical devices) to safeguard public health while fostering responsible and ethical innovation.

President Biden’s AI-focused Executive Order mandates new standards on AI safety and security, calls for federal guidance and best practices to address potential algorithmic discrimination, requires HHS to establish an AI safety reporting program and educational resources, and urges Congress to pass bipartisan data privacy legislation.

The House of Representatives launched a bipartisan AI taskforce to inform the chamber’s approach to legislation. “As we look to the future, Congress must continue to encourage innovation and maintain our country’s competitive edge, protect our national security, and carefully consider what guardrails may be needed to ensure the development of safe and trustworthy technology,” said Speaker Mike Johnson.

Among proposed bills in Congress, the AI and Critical Technology Workforce Framework Act (S. 3792) would require NIST to develop an AI workforce framework for industry, government, research, nonprofit, and educational institutions.

One of the expert witnesses in an early 2024 Senate Finance Committee hearing on AI in healthcare was Michelle Mello, JD, PhD, Professor of Health Policy and Law at Stanford University, who urged a legislative focus not just on algorithms but how they will be integrated in clinical workflows. “Regulation and governance should address not only the algorithm, but also how the adopting organization will use and monitor it,” she testified.

This leads to the third wish.


Wish #3: Appropriate use – Better workflows and training

Healthcare providers yearn for AI integration that complements, not replaces, their expertise. While automation offers undeniable benefits, concerns linger about the potential for AI to erode the human touch in medicine.

“If [AI tools] are not in a clinician’s workflow or an operational workflow, our people aren’t going to use them and aren’t going to really benefit from them,” said one provider in the Oracle focus group.

AI should seamlessly integrate into existing workflows, assisting with tasks like data analysis and generating preliminary reports, freeing up clinicians’ time for higher-level thinking and patient interaction.

Another provider remarked, “I don’t know that the pace of our training for our nurses and our doctors and our other clinicians is helping them understand how to use AI responsibly.”

While the regulatory frameworks wish relies heavily on government leaders to “grant” it (contributing to advocacy efforts can help push regulators and legislators toward positive results), for this third wish the industry needs to be its own AI genie.

To do this, organizations can build a strategy around three key areas: workforce integration, comprehensive training, and model performance.

The first step is to involve doctors and nurses in designing AI-integrated workflows so the tools solve their pain points and fit smoothly into the way they currently work. The focus should be on augmentation, not replacement: repetitive, data-heavy tasks can be delegated to AI. One example is Oracle’s new Clinical Digital Assistant, which integrates into the EHR and uses generative AI with voice commands to reduce manual work so clinicians can focus more on patient care.

Training should include real-world case studies that demonstrate how to balance AI insights with clinical expertise, and should emphasize identifying AI’s limitations as much as its strengths. Treating AI education as a continuous process for all clinicians helps keep them up to date as the technology rapidly evolves.


Small wins, big future

A resounding call to action from the focus groups was clear: despite the numerous hurdles to AI adoption in healthcare, and an expanding wish list for better change management, organizations need to recognize that AI is a powerful disruptor that is here to stay, and waiting on the sidelines is not an option.

For many, starting with small wins not only provides valuable experience and helps to develop and refine strategies and policies, but it also helps build confidence and trust among users and leaders, especially those holding the purse strings.

This usually means starting with lower-risk, higher-reward uses. Providers in the focus groups pointed to finance as one such area. One provider in the group reported success using AI to shorten exam times for outpatient imaging and MRI in the radiology department. Another called early detection a ripe area: “One of our flagship wins has been for early lung cancer detection, using AI to read chest x-rays and chest CT scans.” AI-driven real-time text-to-speech (and vice versa) is another small win highlighted by one provider.

The steps in this healthcare AI journey may be big or small, but the win (for now) is in getting off the bench and into the race at any pace. By ensuring AI complements the human touch and follows established internal and external governance around responsible and safe use, healthcare can achieve a future where technology empowers providers to deliver exceptional, patient-centered care. No AI genie needed.


About Oracle Health

At Oracle Health, everything we do is dedicated to helping people live healthier lives and improving healthcare. By connecting clinical, operational, and financial data across the ecosystem, we can help providers improve patient outcomes, access data-driven, actionable insights, reduce costs, and unleash innovation. Integrated technologies, data, and analytics empower patients in their health journey, inform clinician decision-making, and accelerate research to advance health and well-being for people worldwide.
