What role should patient outcomes play in AI regulation?

Patient outcomes might be getting lost in the shuffle of all the regulatory metrics applied to new AI tools, a new JAMA viewpoint argues.
By admin
Feb 22, 2024, 2:21 PM

As artificial intelligence becomes more sophisticated and more common across a number of healthcare use cases, international bodies and government agencies are starting to put regulatory scaffolding in place to guide the safe and effective development of new tools. But experts from the University of California San Diego (UCSD) argue they might not be focusing enough on the most important question of them all: do AI tools actually improve patient outcomes the way they’re supposed to? 

In a new viewpoint article published in JAMA, lead author John W. Ayers, PhD, and colleagues from UCSD advocate for a “regulatory refocus” that incorporates a much stronger emphasis on patient outcomes in addition to the current strategy of monitoring process measures. 

“We propose a regulatory strategy for AI that is outcome-centric by requiring companies to demonstrate that AI tools produce clinically important differences in patient outcomes before being brought to market,” the team writes. “In doing so, regulators can stop attempting to prescribe technical requirements via process-centric regulations and instead focus on what AI should achieve for the benefit of patients.” 

Process measures versus patient outcomes measures in AI regulation 

The fundamental issues stem from the ongoing tension between process measures and outcome measures – a familiar battle for health IT developers and healthcare quality improvement professionals. 

In the current artificial intelligence environment, regulatory frameworks take a process-based approach by mandating compliance with specific procedures intended to ensure that a model is up to code during development.  

For example, the recent executive order on AI regulation “calls for AI-powered algorithms with more than 10 billion parameters to potentially register their codebase with federal agencies, perform a battery of to-be-determined regulator-selected tests, and report codebase modifications for review,” the article explains. 

But it does not “appropriately address decades of FDA regulatory pathways that emphasize studying patient outcomes and using these assessments to grant entry to the marketplace,” the team asserts.  

In other words, the model could be developed, trained, and tested according to all required technical processes, but the government might not be spending the same effort on monitoring the real-world impact of these models as a condition for approval and deployment in settings that affect actual patients. 

The authors give the example of AI-powered early warning systems for sepsis, which aim to identify patients at risk of developing the life-threatening condition and ensure they receive appropriate antibiotics in a timely manner. They point out that while these models have become popular, developers haven’t been required to produce evidence that backs up their utility on the front lines.  

For example, a third-party evaluation of the widely adopted Epic Sepsis Model found that only 7% of the 2552 hospitalized sepsis patients who did not already receive early treatment were identified by the system. In addition, the system did not identify 67% of patients who developed sepsis. 
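Performance claims like these come down to simple confusion-matrix arithmetic once a model’s alerts are compared against confirmed diagnoses. A minimal sketch of how sensitivity and miss rate are computed — the counts below are purely illustrative, not the Epic Sepsis Model study’s actual data:

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Fraction of actual sepsis cases the model flagged (also called recall)."""
    return true_positives / (true_positives + false_negatives)

# Illustrative counts only -- not drawn from the cited evaluation.
tp, fn = 33, 67  # model flags 33 of 100 patients who developed sepsis

sens = sensitivity(tp, fn)
miss_rate = 1 - sens

print(f"sensitivity: {sens:.0%}")  # 33%
print(f"miss rate:   {miss_rate:.0%}")  # 67%
```

A miss rate of 67% — the figure the third-party evaluation reported — means two out of three sepsis patients would never trigger an alert, which is exactly the kind of outcome-level statistic the authors argue should be scrutinized before marketing.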

“Would hospital administrators have as frequently implemented this system if rigorous performance assessments were required by regulators before the system was marketed? Given there are many early warning systems for sepsis, how will hospital administrators know which system to implement without regulators requiring comparative outcomes data?” the authors ask. 

An outcomes-centric approach to implementing AI regulation 

The authors advocate for a much more outcomes-heavy approach to regulating AI models, especially as they begin to be applied in earnest to clinical situations.   

“We believe AI regulatory assessments should be grounded in clinical evidence regarding how patients feel, function, or survive in rigorously designed studies, such as randomized clinical trials, which is consistent with regulatory standards applied to new drugs that also require a net clinically meaningful improvement in patient outcomes compared with a placebo,” the team states. 

Regardless of how well they work in technical tests in the lab, AI developers should be required to demonstrate that their models provide clear and measurable clinical benefits over the standard of care before being brought to market. 

“Consider the use of such systems for patients with heart failure,” they write. “Although the progression of many diseases could benefit from improved communication, outcomes for these patients are strongly associated with adherence to their care plans. As a result, a study that compares standard care vs standard care enhanced by AI conversational agents could be powered to find whether there are differences in mortality or acute care use in this patient population.” 
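A back-of-the-envelope sketch shows how such a trial might be sized. The normal-approximation formula for comparing two proportions is standard; the effect sizes below (AI-assisted care cutting an event rate from 25% to 20%) are invented for illustration and are not from the JAMA viewpoint:

```python
import math

def n_per_arm(p1: float, p2: float) -> int:
    """Approximate sample size per arm for a two-proportion z-test.

    Normal approximation with z-values hard-coded for a two-sided
    alpha of 0.05 (1.96) and 80% power (0.84).
    """
    z_alpha, z_beta = 1.96, 0.84
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical: standard care has a 25% acute-care-use rate; the
# AI-enhanced arm is expected to reduce it to 20%.
print(n_per_arm(0.25, 0.20))  # roughly 1,100 patients per arm
```

The point of the exercise is that detecting clinically plausible differences in hard endpoints requires trials of meaningful size — which is precisely the evidentiary bar the authors want regulators to set before an AI tool reaches the market.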

While making this shift to an outcomes-centric regulatory environment would likely require significant investment in resources from government agencies, it could make all the difference for patients – and for the healthcare delivery systems that are investing billions in AI tools at the moment. Paying more attention to outcomes up front could spare the industry from another round of “buy first, find the ROI later” headaches like those experienced during the decade-long EHR modernization process. 

“We envision that a regulatory strategy focused on patient outcomes is only the first step and that regulators will learn much about how AI tools can be used to address health equity, access, safety, and other regulatory benchmarks from early outcome-centric evaluations,” the article concludes. “In response to the White House executive order, we urge rule makers to curtail process-centric regulations that could impede AI progress and, instead, champion outcomes to advance AI for the betterment of people’s lives.” 


Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry.  Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system.  She can be reached at jennifer@inklesscreative.com.
