Health tech companies volunteer to contain risks of AI

The White House announced voluntary commitments from leading health tech companies and health systems to address the risks of AI.
By Jennifer Bresnick
Jan 1, 2024, 11:43 AM

Following on the heels of the Biden Administration’s sweeping executive order on artificial intelligence, more than two dozen leading health systems, payers, and health tech companies have pledged to proactively manage the risks of AI in healthcare and develop safe, responsible tools for improving outcomes and controlling costs. 

 “The Administration is pulling every lever it has to advance responsible AI in healthcare and health-related fields. We cannot achieve the bold vision the President has laid out for the country with US government action alone,” wrote White House officials in a blog post.  

“That’s why we are excited that in response to the Administration’s leadership, leading healthcare providers and payers have today announced voluntary commitments on the safe, secure, and trustworthy purchase and use of AI in healthcare. These voluntary commitments build on ongoing work by the Department of Health and Human Services (HHS), the AI Executive Order, and earlier commitments that the White House received from 15 leading AI companies to develop models responsibly.” 

The participants in the latest commitment are a combination of technology developers, payers, health systems, and provider groups. 

The signatories are: Allina Health, Bassett Healthcare Network, Boston Children’s Hospital, Curai Health, CVS Health, Devoted Health, Duke Health, Emory Healthcare, Endeavor Health, Fairview Health Systems, Geisinger, Hackensack Meridian, HealthFirst (Florida), Houston Methodist, John Muir Health, Keck Medicine, Main Line Health, Mass General Brigham, Medical University of South Carolina Health, Oscar, OSF HealthCare, Premera Blue Cross, Rush University System for Health, Sanford Health, Tufts Medicine, UC San Diego Health, UC Davis Health, and WellSpan Health. 

According to an accompanying fact sheet, the organizations will hold themselves to a number of high standards when developing and deploying AI models, including: 

  • Focusing on AI solutions that optimize health equity, expand care access, improve affordability, and assist with coordinating care, improving experiences, and avoiding clinician burnout 
  • Working collaboratively to align with HHS’s guiding principles for AI, which assert that AI tools should be fair, appropriate, valid, effective, and safe (FAVES)
  • Establishing and deploying trust mechanisms to inform users about AI-generated content and the participation of humans in reviewing or editing AI-generated results 
  • Implementing risk management frameworks that track and account for potential harm and guide steps for mitigating issues
  • Balancing speed with responsibility when researching, investigating, and developing AI-powered tools 

“We must remain vigilant to realize the promise of AI for improving health outcomes,” the White House team wrote.  “Healthcare is an essential service for all Americans, and quality care sometimes makes the difference between life and death. Without appropriate testing, risk mitigations, and human oversight, AI-enabled tools used for clinical decisions can make errors that are costly at best—and dangerous at worst.”  

“Yet at the same time—so long as we can mitigate these risks—AI carries enormous potential to benefit patients, doctors, and hospital staff. While widespread AI adoption throughout the healthcare sector is a long way off, it is clear that AI has the potential to positively impact healthcare outcomes and the lives of doctors and patients in myriad ways.” 

The voluntary pledge is part of a broad movement toward more structured guardrails around AI, particularly in the healthcare industry. In both the US and abroad, stakeholders are calling for developers to adhere to ethical codes of conduct and governance principles that will avoid bias as much as possible and produce better outcomes without potentially harming individuals. 

Voluntary commitments from industry leaders are crucial for setting expectations for up-and-coming developers and fostering a safe and equitable technology marketplace. 

Earlier this year, some of the highest-profile Big Tech companies likewise affirmed that they would work together to create a beneficial AI environment across their areas of focus. In July, Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI all agreed to take on the challenge. They were joined by Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI in September. 

“The private-sector commitments announced today are a critical step in our whole-of-society effort to advance AI for the health and wellbeing of Americans,” the White House said of the latest healthcare specialists to join these companies. “These 28 providers and payers have stepped up, and we hope more will join these commitments in the weeks ahead.” 


Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry.  Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system.  She can be reached at jennifer@inklesscreative.com.

