AI algorithms show a race problem

AI is a powerful tool for predicting outcomes and supporting decision-making, but biased data may cause more problems than AI solves.
By Jennifer Bresnick
May 12, 2022, 8:00 AM

Artificial intelligence (AI) has taken the healthcare industry by storm with its promise of more accurate, timely predictions, process automation, and precise decision support for clinicians and administrators alike.

So far, AI is certainly proving its value across a number of important use cases – but it’s also showing critical vulnerabilities that need to be addressed sooner rather than later.  Chief among these potential problems is unintentional bias in the results.  

Bias occurs when the training data sets for AI algorithms are incomplete, low quality, or skewed in a particular direction.  This can lead to questionable outputs and suboptimal decisions for patients and health systems.
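
To make that concrete, the snippet below is a minimal sketch (in Python, using pandas) of how a data team might check a training set for demographic skew before building a model. The column names, categories, counts, and reference percentages are purely illustrative, not drawn from any real dataset.

```python
import pandas as pd

# Hypothetical training set with a self-reported race/ethnicity column.
# The column name, categories, and counts are illustrative only.
train = pd.DataFrame({
    "patient_id": range(10),
    "race_ethnicity": ["White"] * 7 + ["Black"] * 2 + ["Hispanic"],
    "label": [0, 1, 0, 1, 0, 0, 1, 1, 0, 1],
})

# Reference mix: the population the model is meant to serve
# (e.g., the health system's actual patient demographics).
reference_mix = {"White": 0.60, "Black": 0.25, "Hispanic": 0.15}

observed = train["race_ethnicity"].value_counts(normalize=True)
for group, expected in reference_mix.items():
    share = observed.get(group, 0.0)
    status = "under-represented" if share < 0.5 * expected else "ok"
    print(f"{group}: {share:.0%} of training data vs. {expected:.0%} expected ({status})")
```

A check like this will not catch every source of bias, but it surfaces obvious gaps in representation before they are baked into a model.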

How can healthcare leaders prevent potential biases from influencing decision-making and perpetuating health inequities?

How does bias creep into AI algorithms?

All artificial intelligence algorithms are made by humans: computer scientists who develop, train, and optimize them for real-world use.

Humans bring with them a complex universe of personal experiences, opinions, skills, and predispositions that make it impossible to be truly objective or one hundred percent accurate in their assigned tasks.  

These “imperfections” are propagated through the healthcare system, from the clinician who sits down at the keyboard to record patient data to the computer scientist who curates the training dataset to the CIO who decides that an algorithm is ready to deploy in her organization.

Sometimes, undesirable results are simply a matter of poor programming and can be fixed with a few tweaks to the code. But other times, the biases go deeper and can begin to create or exacerbate health inequities among the very patients the algorithms are supposed to serve.

One recent study in PLOS Digital Health examined more than 7,000 academic papers about AI algorithms and found that 40 percent of the databases came from the US and 13 percent from China. Authors were primarily from China (24 percent), the US (18 percent), or other high-income countries. Sixty percent of first authors were statisticians rather than clinicians, and close to three-quarters were male.

When perspectives and datasets are overly limited, AI tools become vulnerable to algorithmic bias, defined by a team of Harvard researchers as “the application of an algorithm that compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation and amplifies inequities in health systems.”

For example, a Ugandan research team found that neural networks used to identify skin lesions were mostly trained on images of white patients, with Black individuals comprising just 10 percent of the training data. Since skin lesions often look very different on darker skin, the resulting algorithm may not accurately identify issues in non-white patients.
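
This is also why evaluating a model only on aggregate accuracy can hide the problem. The sketch below simulates (rather than reproduces) the kind of imbalance described in that study, and shows how stratifying evaluation results by skin tone exposes a performance gap that the overall number conceals. All group names and error rates are hypothetical.

```python
import numpy as np

# Hypothetical per-group evaluation of a lesion classifier.
# Groups and error rates are simulated stand-ins, not study data.
rng = np.random.default_rng(0)

groups = np.array(["lighter"] * 900 + ["darker"] * 100)   # 10% darker-skin images
y_true = rng.integers(0, 2, size=1000)

# Simulate a model that is right ~90% of the time on lighter skin
# but only ~70% of the time on darker skin.
correct_rate = np.where(groups == "lighter", 0.90, 0.70)
y_pred = np.where(rng.random(1000) < correct_rate, y_true, 1 - y_true)

overall = (y_pred == y_true).mean()
print(f"Overall accuracy: {overall:.2%}")           # looks acceptable in aggregate
for g in ("lighter", "darker"):
    mask = groups == g
    acc = (y_pred[mask] == y_true[mask]).mean()
    print(f"  {g:7s}: {acc:.2%} (n={mask.sum()})")  # the gap only shows up stratified
```

In this simulation, the overall figure lands near 88 percent even though performance on darker skin is markedly worse, which is exactly the kind of gap a stratified report is meant to catch.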

Solutions for preventing built-in biases

As health equity takes center stage for the healthcare industry, organizations must be sure that they are using their growing collection of AI algorithms in a positive manner. Fortunately, the vast majority of executive leaders – 94 percent in a recent Optum poll – acknowledge the duty to use AI responsibly and are already putting plans in place to do so.

Developers and users must acknowledge and address their potential biases when architecting and implementing AI algorithms in the real-world care setting.

An international team of experts suggests a three-pronged approach to creating an equitable and transparent AI ecosystem:

Engaging in patient-centered, participatory development of AI tools: Computer science teams should include perspectives from diverse communities, including patient representatives, clinicians, and other relevant parties, when testing and refining their algorithms.  

Ensuring responsible data sharing and inclusive data standards: The industry should work together to develop common metrics for AI reliability, consider launching clinical trials to objectively test AI tools, and establish minimum standards for the diversity and inclusivity of data.

Sharing code and prioritizing explainability: AI code doesn’t always have to be open source, but developers should consider integrating outside perspectives into the design and training process. AI algorithms should also be explainable, as much as possible, and have clear mechanisms for tracing data provenance and usage.
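
As one illustration of that last point, a provenance record does not have to be elaborate. The sketch below (in Python, with hypothetical field names and values rather than any industry standard) shows a minimal, machine-readable record that could travel with a model so that downstream users can see where its training data came from and what its known limitations are.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Illustrative provenance record for a deployed model.
# Field names are assumptions for this sketch, not an industry standard.
@dataclass
class ModelProvenance:
    model_name: str
    version: str
    training_datasets: list        # where the training data came from
    demographic_summary: dict      # share of each group in the training data
    known_limitations: list = field(default_factory=list)
    released: str = str(date.today())

record = ModelProvenance(
    model_name="lesion-classifier",
    version="0.3.1",
    training_datasets=["site_a_dermatology_images", "public_archive_x"],
    demographic_summary={"lighter_skin": 0.90, "darker_skin": 0.10},
    known_limitations=["under-represents darker skin tones"],
)

# Shipping a record like this alongside the model keeps its data lineage
# auditable by clinicians, regulators, and downstream users.
print(json.dumps(asdict(record), indent=2))
```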

By integrating more inclusive data into AI training sets and establishing industry-wide protocols for transparency and reliability, the healthcare industry can avoid unwanted bias, close gaps in health equity, and ensure that all AI tools provide optimal decision support when put into practice. 

 


Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry. Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system.

