Experts call for flexible regulation of AI

Healthcare leaders at the Senate hearing on AI also expressed concern regarding ethics, adoption, and reimbursement.
By admin
Feb 12, 2024, 9:38 AM

During the “Artificial Intelligence and Health Care: Promise and Pitfalls” Senate committee hearing on Thursday morning, healthcare leaders and AI experts from across the country voiced their primary concerns about integrating artificial intelligence into healthcare. They explored AI’s potential benefits and challenges, focusing on ensuring its responsible and equitable application in the healthcare sector.

Regulation 

Witnesses called for regulation that is robust enough to be effective yet flexible enough to keep pace with the rapidly evolving landscape of AI in healthcare.

“It would be a mistake to enshrine in legislation detailed standards for healthcare AI tools and how they can be used. In light of how quickly things are moving in the field, we have to have the humility to acknowledge that we don’t know what the best standards will be two years from now,” said Michelle M. Mello, JD, PhD, Professor of Health Policy and Law at Stanford University. “Regulation needs to be adaptable or else it will risk irrelevance—or worse, chilling innovation without producing any countervailing benefits.”

Peter Shen, Head of Digital & Automation for North America at Siemens Healthineers, added that the regulatory framework must not get in the way of innovation and adoption.

“While we believe the current regulatory framework is sufficient to support AI innovation, we support the continuation of flexibility in the approval process, as a one-size-fits-all approach could seriously inhibit the potential of AI, as well as efforts to facilitate global harmonization and the development of appropriate international consensus standards,” Shen said.

Ethics and equity 

Ziad Obermeyer, MD, Blue Cross of California Distinguished Professor at UC Berkeley, shared concerns about a “family of poorly-designed AI algorithms” with “large-scale racial bias” that are widely employed in the public and private sectors.

“Unfortunately, many of the biased algorithms we studied remain in use today. And similar dynamics were highlighted by a recent investigation of AI products used to deny claims: in all these cases, AI learns from historical data, with all its biases and inequities, and encodes those past practices in policy,” Obermeyer said. “So those underserved patients whose claims have been denied by humans in our past datasets—often for unjust reasons—will have their claims denied by AI at scale, forever, unless we can re-align AI with our society’s goals.” 

As a solution, Obermeyer suggested that Medicare, Medicaid, and CHIP, programs that stand to benefit greatly from AI, should use their market power to demand better-designed AI tools and algorithms.

Adoption challenges 

Healthcare AI adoption should be treated much as Congress treated EHR adoption 15 years ago, proposed Mark Sendak, MD, MPP, Co-Lead of the Health AI Partnership. Government funding for technical assistance and infrastructure programs could help drive widespread adoption of AI-enabled healthcare at hospitals across the country, not just at those with more resources.

“Most healthcare organizations in the US need an onramp to the AI adoption highway. They are struggling with clinician burnout. They face razor thin or negative margins. They are entirely dependent on external EHR vendors for technology expertise and assistance,” said Sendak. “Simply put, they do not have the resources, personnel, or technical infrastructure to embrace guardrails for the AI adoption highway.” 

Lack of reimbursement 

While many studies and institutions recognize the value of AI in healthcare, there is currently no standardized path to AI adoption and no consistent reimbursement for AI-powered healthcare services.

“Guaranteeing a consistent reimbursement process would empower providers to invest in AI confidently, ensuring their services are appropriately reimbursed,” said Shen. “Without this financial support, these providers will face difficulties in embracing and integrating AI technologies, ultimately potentially denying revolutionary services to patients.” 

The lack of consistent reimbursement puts AI-powered healthcare out of reach for many healthcare organizations, especially those in rural and underserved areas.

AI in practice 

Governance must go beyond algorithms, argued Mello. Regulation must address not only the tools themselves but also how organizations use and monitor AI in healthcare.

“For example, large-language models like ChatGPT are employed to compose summaries of clinic visits and doctors’ and nurses’ notes, and to draft replies to patients’ emails. Developers trust that doctors and nurses will carefully edit those drafts before they’re submitted—but will they?” Mello asked. “Research on human-computer interactions shows that humans are prone to automation bias: we tend to over-rely on computerized decision support tools and fail to catch errors and intervene where we should.”

Katherine Baicker, PhD, Provost of the University of Chicago, added that AI tools should supplement, not replace, clinicians in healthcare.

“Physicians can draw in information that algorithms alone can’t, including the results of real-time patient exams. It is crucial that algorithms be tested and validated in relevant contexts, just like any other medical intervention,” Baicker said.

