WHO releases guidance on AI ethics and governance

The WHO’s guidance on AI ethics and governance calls on world leaders to regulate the creation and implementation of LMMs in healthcare.
By admin
Jan 23, 2024, 2:53 PM

On Thursday, the World Health Organization (WHO) published guidance addressing the ethics and governance of Large Multimodal Models (LMMs) in healthcare.  

LMMs, a popular and rapidly evolving facet of artificial intelligence, are gaining prominence in healthcare due to their ability to analyze and integrate diverse data inputs, including images, text, and video, and to generate responses in a variety of output formats, regardless of the input type. 

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” said Jeremy Farrar, PhD, WHO Chief Scientist, in a statement. “We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities.” 

WHO guidance for governments

The WHO assigns governments the “primary responsibility” to regulate the creation and implementation of LMMs and ensure that their application in public health and medical environments aligns with these regulations. In particular, governments should: 

  • Provide funding or establish non-profit or public resources, such as public datasets. These resources should be available to developers across various sectors, with a requirement to comply with ethical norms and values for access. 
  • Implement legal frameworks and regulatory measures ensuring that LMMs and their healthcare applications adhere to ethical standards and human rights considerations, including respect for individual “dignity, autonomy, or privacy,” regardless of the perceived risks or benefits of the AI technology. 
  • Designate or establish a regulatory body responsible for evaluating and approving LMMs and their healthcare applications, based on available resources. 
  • Mandate post-deployment audits and impact assessments of LMMs, focusing on data protection and human rights. These assessments, conducted by independent entities, should be made public and include detailed analyses of outcomes and impacts, differentiated by user categories such as age, race, or disability. 

WHO guidance for developers

The WHO also advises developers to involve all relevant stakeholders, including healthcare providers, patients, researchers, and other medical professionals, in the development of an AI/ML tool from the beginning. This should be done through a process that allows these participants to discuss ethical considerations, express their concerns, and contribute insights relevant to the AI application being developed. 

LMMs should be engineered to perform specific, well-defined tasks with the accuracy and reliability needed to improve healthcare delivery, and developers should be able to anticipate and understand the possible indirect effects of these models. 

Patient safety concerns and closing the health equity gap

Patient safety is a paramount concern, with overwhelming evidence that AI and machine learning models reproduce the biases in the data on which they are trained. The report lists bias, in both the training data and model responses, as one of the top potential risks of using LMMs in diagnosis and clinical care. 

“One critical risk is the tendency of LMMs to ‘hallucinate’, or to generate incomplete, inaccurate, or false statements,” shared Rohit Malpani, Researcher for the Health Department at the WHO and lead writer of the guidance. “This could jeopardize patient safety if a health care provider relies upon a false or inaccurate output, or if patients or caregivers rely upon such hallucinations. Patients and caregivers are especially vulnerable since they may neither have the scientific expertise nor medical judgment to discern an inaccurate statement.” 

Cybersecurity and data privacy concerns

The guidance calls on governments to implement data protection laws and on developers to do their part by collecting data with privacy in mind. 

“Large multi-modal models may also present risks to data privacy. This includes data on which a large multi-modal model is trained, the data that an individual – whether a health provider or patient – may input into a large multi-modal model, or the output of the large multi-modal model,” said Malpani. “There are already numerous anecdotes of large multi-modal models disclosing sensitive information if prompted. Yet we know there are solutions.” 

Existing data privacy laws, like HIPAA in the United States, will apply to emerging LMM and machine learning solutions for healthcare and protect patient privacy, Malpani said. 

“There are also on-going efforts, developed within academia and by industry, including mechanisms to reduce and detect errors, and to integrate privacy-preserving technologies into the design of LMMs.” 

Technology outpacing regulation

The WHO’s guidance aligns with the efforts of various government entities that are actively seeking methods to regulate this emerging field, including, most recently, President Biden’s executive order on AI. 

It highlights a key challenge faced by lawmakers worldwide: technology is evolving faster than the legislation and regulation meant to govern it. Recognizing this, the WHO’s recommendations offer a much-needed roadmap for upholding ethical standards and patient safety as AI becomes more deeply integrated into healthcare and daily life. 

“WHO recognizes that artificial intelligence, its uses, and societal responses to AI, including oversight and governance, are changing rapidly and often dramatically. These are living guidelines – we are actively monitoring this space and working with sister UN agencies, national and regional regulatory bodies, civil society, industry, and Member States, to ensure we are responsive to major changes or innovations that require a change or update to the guidance,” shared Malpani. “Such updates will be done using the same process by which this guidance was developed, which is to rely on a global network of experts that provide the insights, judgment, diverse perspectives, and experience, to guide the WHO’s work.”  

