Hospitals ditching “set it, forget it” AI tools for ones that can keep learning

HCOs are abandoning static AI deployments for dynamic systems that continuously adapt, but cost and implementation barriers remain.
By admin
Jul 25, 2025, 12:52 PM

Most of the artificial intelligence (AI) tools flooding hospitals today are designed to be static, arriving "finished" like medicines and medical devices. But a growing chorus of researchers and health system leaders is arguing that this approach to AI tools is all wrong.

“The current regime of linear AI deployment has largely failed to keep up with the pace of technological development and is a poor fit for the emerging paradigm of interactive, adaptive, multi-agent AI systems,” researchers noted in a recent study published in Nature. “We propose dynamic deployments as an alternative framework for medical AI deployments which are continually learning and adapting in response to new data, shifting focus beyond individual AI models towards a system-level perspective.”

"Dynamic deployment," as the study defines it, is a framework in which AI systems learn and adapt in real time, making them better suited to the unique needs of each hospital or healthcare setting where they are used.

Those pushing for a dynamic deployment framework see immense benefits in allowing AI systems to continuously change based on real-world use. This approach would be a radical departure from how healthcare has traditionally adopted new technology, but it’s gaining traction as health systems grow frustrated with AI tools that promise the moon and barely leave the launchpad.

AI systems need room to grow

To this point, the deployment of AI tools has mostly followed the standard playbook used for medical devices: develop in the lab, test briefly, and deploy with fixed parameters. Once approved and installed, the systems stay locked in place, their parameters unchanging until the next software update months or years later. That approach made sense for early AI tools that performed narrow tasks, like reading chest X-rays.

But today’s AI systems, especially large language models (LLMs) powering chatbots and documentation tools, operate differently. These systems can learn from every interaction, adapt to new scenarios, and improve their performance based on user feedback. “Freezing” them essentially wastes their most powerful capability.

“AI systems have an important difference from other technologies in medicine: they are adaptive. In fact, one of the most important attributes of modern LLMs with billions of parameters is their flexibility,” the study’s authors explain. Modern AI can update its behavior through techniques like reinforcement learning from human feedback and continuous fine-tuning with new data.

The mismatch is particularly stark in hospitals, where AI systems must navigate complex workflows, diverse patient populations, and constantly changing clinical environments. A documentation AI trained on one hospital’s data might struggle at another facility with different EHR systems and physician preferences.

Cleveland Clinic has become a poster child for the dynamic approach. Rather than deploying static AI scribes, the health system enlisted 4,000 physicians to use Ambience Healthcare’s platform, allowing the AI to adapt to different medical specialties and clinical contexts. The system learns from each encounter, gradually improving its performance for specific use cases and user preferences.

The promise (and its price tag)

Health systems that have moved beyond static deployments to dynamic AI approaches are reporting positive outcomes.

IDC’s Industry Tech Path 2024 survey found that 40.6% of healthcare organizations now have AI-enabled clinical decision support systems in production, up from proof-of-concept stages just months earlier. The organizations seeing the most success treat AI as an ongoing process rather than a one-time deployment.

The transition to a new framework isn’t easy, however. Dynamic AI systems require sophisticated infrastructure for continuous monitoring, feedback collection, and performance assessment, and hospitals will have to invest in data pipelines, computational resources, and governance frameworks that many currently lack. 

Cost concerns loom large. As AI usage expands, expenses for cloud computing and model hosting can escalate quickly, and because many leading AI models remain proprietary, hospitals are limited in their ability to customize systems based on their specific needs without paying premium fees.

Oversight agencies are scrambling

The regulatory landscape adds another complication. The FDA has authorized nearly 1,000 AI and machine learning-enabled medical devices, but most followed traditional approval pathways designed for static technologies.

The agency is starting to adapt, releasing guidance on “predetermined change control plans” that allow certain AI modifications without new approvals. But the framework remains incomplete, leaving many health systems uncertain about compliance requirements for continuously learning systems.

The Coalition for Health AI (CHAI), which includes over 3,000 healthcare organizations, is working to fill the gap. The nonprofit recently partnered with The Joint Commission to develop standards and certification programs for AI deployment.

"The integration of AI into healthcare presents both significant opportunities and complex challenges. This effort between The Joint Commission and the Coalition for Health AI represents a thoughtful approach to navigating how to best deploy and implement these emerging technologies," noted Michael Pfeffer, M.D., Chief Information and Digital Officer at Stanford Health Care.

The innovators refuse to wait around

Despite the challenges, market momentum is building toward a dynamic approach to AI tools. The FDA approval pipeline reflects the broader surge: radiology leads with roughly 750 of 950 approved AI devices, followed by cardiology with 98 and neurology with 34. These technologies offer significant benefits in image analysis, pattern recognition, and diagnostic accuracy.

Some health systems are moving faster than others. Academic medical centers with robust IT infrastructure and research capabilities tend to embrace dynamic deployment more readily than smaller community hospitals with limited technical resources.

For health IT leaders considering the transition, experts recommend starting small with well-defined use cases. AI documentation tools and clinical decision support systems offer good entry points because they generate clear feedback signals and measurable outcomes.

Success requires more than just technology upgrades. Organizations must invest in training programs to help clinical staff adapt to continuously learning systems, develop governance frameworks for ongoing oversight, and establish metrics for measuring AI performance over time.

“The next generation of medical AI will similarly be ushered in when we step back from individual models and instead focus on the larger picture of adaptive systems and networks, building upon the core principles of safety, real-world evidence, and regulatory oversight,” the study’s authors concluded.

