Consumers ask for data transparency in gen AI

Lack of data transparency in generative AI tools is the biggest concern for healthcare consumers, according to a new survey.
By admin
Dec 20, 2023, 9:27 AM

Healthcare consumers are interested in the potential of generative AI to improve their experiences, but they are wary about the lack of data transparency that often comes with these tools, says a new survey by Wolters Kluwer Health. 

Close to nine out of ten Americans (86%) said they see potential problems with not fully understanding where generative AI models get their data or how the results are validated. Nearly half (49%) expressed concern that generative AI might produce false information that could affect their care. A similar number said they would not trust the results if they knew their own provider was using generative AI as part of their care. 

“As the healthcare community begins implementing generative AI applications, they must first understand and address the concerns Americans have about it being used in their care,” said Greg Samios, President and CEO of Clinical Effectiveness, Wolters Kluwer Health. “It will take responsible testing as well as understanding the importance of using the most current, highly vetted content developed by real medical experts to build acceptance of this new technology in clinical settings.” 

The possibility of errors is a longstanding concern for patients, multiple industry polls have revealed. In March, Pew Research found that 60% of Americans would feel “uncomfortable” with AI being used as part of their personal care decision-making, while a November survey by Medtronic and Morning Consult found that more than 80% of consumers think that the potential for errors is one of the biggest barriers to adoption.  

At the same time, participants in those polls expressed guarded optimism that AI could soon improve the diagnosis process, reduce inequities, and lead to major clinical breakthroughs. 

The most recent data from Wolters Kluwer confirms that consumers are feeling a mix of curiosity and trepidation at the thought of AI being more deeply ingrained in healthcare. Roughly equal numbers of participants said they were concerned about generative AI in healthcare (44%) and curious about how it could have a positive impact on experiences and outcomes (36%). 

Patients would generally feel more comfortable if they knew more about generative AI and the data underpinning these algorithms. Just over 80% of respondents said they would need to know that the developer behind the AI had sufficient experience in the healthcare industry, and 86% would require clinicians to be adequately involved in the development process.  

Close to 90% stated that clinicians need to be clear and transparent about when and where they are using generative AI to support decision-making. Two-thirds would even consider moving to a different provider if they knew their clinician was using generative AI at all, with older respondents more likely to switch than younger generations. 

The insistence on transparency could be important as AI continues to develop rapidly with little in the way of broad oversight, although efforts are underway to create guardrails for the industry both in the US and internationally. AI tools across all applications are prone to bias, including gender and racial biases that could exacerbate health inequities when applied at scale. 

And with few healthcare organizations taking an organized approach to implementing generative AI, there’s a strong chance that low quality products with questionable results could creep into the healthcare ecosystem. 

To gain consumer trust and avoid negative outcomes from substandard products, healthcare organizations need to thoroughly evaluate and test potential tools in real-world settings before full deployment. 

According to the Coalition for Healthcare AI (CHAI), generative AI and other types of machine learning models should meet seven core standards, including equity, transparency, safety, and reliability. Organizational leaders should work closely with AI developers to adopt models that meet high standards of data governance and data provenance to foster transparency and ensure that the use of cutting-edge technologies is a positive step for patients and their outcomes.  

Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry.  Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system.  She can be reached at jennifer@inklesscreative.com.
