For effective AI governance in healthcare, it’s time to start asking the hard questions
Let’s be honest: governance isn’t always the first thing people want to talk about when artificial intelligence enters the conversation. They’d rather focus on the wondrous possibilities for clinical improvements, administrative efficiencies, and patient experience enhancements: the tangible, forward-facing, high-value outcomes that AI is designed to help healthcare organizations achieve.
However, those conversations aren’t going to get very far without talking about the fundamentals – and truth be told, there’s a lot more to AI governance than meets the eye.
Contracting terms, algorithmic decision trees, and detailed data use agreements are part of it, to be sure. But in a complex industry under constant threat from cybercriminals, malpractice lawsuits, and plain old human error, governance is about building and maintaining the relationships that underpin the entire fabric of safety, security, sustainability, and success of the healthcare community.
“Governance is one of those areas that’s evolving in a fascinating direction at a very fast pace,” said John Friesen, Associate Chief Legal Officer & Chief Privacy Officer at Geisinger Health System, at the Trace3 Evolve Technology Conference held in October.
“If your legal and compliance partners aren’t excited about AI and what it means for our patients, they’re in the wrong business. It’s an extraordinary time to be in this space and promote transparency, accountability, privacy, security…all of those building blocks that can make or break an implementation.”
That’s because AI governance is about much more than completing a legal checklist. Strong governance – or the lack thereof – has immense downstream impacts for both patients and the health systems that serve them.
“It’s got ethical implications, and it’s also got legal implications,” Friesen continued. “If a government investigator or a plaintiff’s attorney asks why an AI made a recommendation that may have contributed to a negative outcome, you need to be able to explain exactly what happened. If you don’t know how your tools work, or you don’t have a strong framework for using them appropriately, you simply can’t be using those systems in healthcare.”
Asking the hard questions of partners and peers
For Jason Peoples, Director of Technology and Innovation at Mary Free Bed Rehabilitation Hospital, close relationships with legal, compliance, and risk management professionals during the AI implementation process have been essential for making sure all the doors and windows are secured against threats, both internal and external.
“AI is forcing us to ask some hard questions of our vendors and partners that carry far more weight than ever before,” he said. “We’re all having to get very open and honest about our risk tolerance and how we want our data to be used inside and outside our controlled environments. Every organization needs to start establishing best practices that match those tolerances and preferences, because robust governance and a strong compliance program are non-negotiables in the AI-enabled healthcare ecosystem.”
“We need to be able to make the most informed business decisions possible, so we need to fully understand every variable in our business associate agreements: who’s responsible for what; what’s happening with our data; and what the liabilities are if something goes awry,” he said.
A methodical approach to AI governance that promotes accountability
AI governance in healthcare needs to be structured and standardized, with involvement from clinicians, business leaders, compliance and legal, and other stakeholders to ensure buy-in across the entire care continuum. These relationships with internal experts can ensure clarity and alignment throughout the enterprise, which in turn translate to productive partnerships with vendors or co-developers.
At DaVita Kidney Care, CMO Adam Weinstein, MD, makes his “governance pyramid” the first port of call when considering a new AI application. This structured approach involves running each potential use case through a checklist of key criteria, including whether the organization has the data and foundational technologies to support the use case, whether launching a pilot is tied to specific clinical and business goals, and whether the pilot will directly facilitate improvements in care delivery.
“My team is responsible for building this framework and ensuring that, no matter what happens with clinical AI at DaVita, we are using this as our North Star for delivering on those projects,” he said.
Decision-making at DaVita is also rooted in a set of AI governance principles that promote transparency, traceability, and accountability in data and decision-making.
Among the key principles are keeping humans in the loop, articulating data provenance, stating the degree of data transformation, making confidence estimates available when appropriate, enacting clinical monitoring activities, and creating clear pathways for “emergency brakes” when a human user believes there may be a problem.
“It is very important to emphasize these factors consistently throughout the decision-making process,” Weinstein stressed. “We communicate with our folks early and often, and even start our meetings with these principles to ensure they are part of our thinking process so we don’t end up in places we don’t want to be.”
Getting granular with governance to support long-term success
Friesen offered some detailed advice about how to scrutinize the data use agreements that potential partners put forth when negotiating with a health system or health plan.
“If the tech partner tries to insert language about owning the data in perpetuity to develop additional products, that’s always concerning,” he said. “If they say they’re only going to use deidentified data to do that, it’s still a problem. We’ve all learned that deidentified data can be reidentified pretty easily. If they try to say it’s going into a data lake and can’t be purged once the project’s over, that’s not true. You can tag data anonymously and get rid of it at the end of a project.”
“As a care provider or health plan, you have to be comfortable with the terms of any agreement you enter into, because trust is an essential element of co-development.”
Clear ground rules and expectations around transparency, privacy, security, bias, and drift should also be top of mind during the process, he advised, in addition to paying close attention to a partner’s claims around the utility, accuracy, and ongoing fidelity of an AI model or platform.
“If you want to accelerate AI safely and effectively, bring in your compliance and legal people early and often,” he concluded. “They should be at the table early in the development of this technology or when choosing vendors, and should be part of a multidisciplinary, clinician-led team that has a clear and standardized checklist to go through as they vet AI tools.”
“We know that the best defense is a good offense, and robust governance is some of the best protection you can invest in right now as the AI ecosystem is developing.”
Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry. Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system. She can be reached at [email protected].