WHO weighs in on global need for AI regulation

The WHO calls for artificial intelligence regulation on a global scale as AI makes its way into healthcare.
By Jennifer Bresnick
Oct 25, 2023, 3:24 PM

The World Health Organization (WHO) believes that global leaders need to take a unified approach to artificial intelligence regulation – and fast. The rapid rise of AI technologies in the healthcare ecosystem, such as large language models (LLMs) and generative AI, could put patients’ wellbeing at risk if these tools are not developed and deployed correctly, the organization stated in a new 80-page brief.

“WHO – along with many international and regional organizations and national authorities – recognizes the potential of AI in accelerating the digital transformation of health care,” wrote Dr. Jeremy Farrar, Chief Scientist, World Health Organization. “AI has an evident potential to strengthen health service delivery to underserved populations, enhance public health surveillance, advance health research and the development of medicines, support health systems management and enable clinical professionals to improve patient care and perform complex medical diagnoses.”  

“However, existing and emerging AI technologies, including large language models, are being rapidly deployed without a full understanding of how such AI systems may perform – potentially either benefitting or harming end-users, including health-care professionals and patients.”  

To avoid situations in which AI could conceivably cause harm to humans, WHO has established a series of workgroups designed to identify and address the challenges of bringing AI into such a high-risk, high-reward environment. The experts have highlighted six key areas where regulation can have the biggest impact on ensuring that AI is safe, secure, accessible, and equitable for all populations.

  • Documentation and transparency: AI requires huge amounts of training data and complex programming structures, both of which could produce unintentional bias if developed incorrectly. Datasets, reference standards, parameters and metrics, changes from original plans, and development updates must be carefully documented and tracked to ensure that models are as transparent and explainable as possible.
  • Risk management and AI systems development lifecycle approaches: Developers and adopters should consider the total product lifecycle when engaging with AI models, including pre-market development, post-market surveillance, and change management to integrate AI into workflows. WHO believes that taking a risk management approach to AI implementation will be essential for addressing potential risks, such as cybersecurity threats and algorithmic bias.
  • Intended use and analytical/clinical validation: In addition to carefully documenting the steps taken to develop a model, AI algorithms need to be clinically and analytically validated before use in real-world healthcare settings. Testers should be made fully aware of the composition of training datasets to provide insight into potential biases or errors, and developers should consider performing external analytical validation using independent datasets before rolling out new products. While randomized controlled clinical trials would be ideal for evaluating the comparative clinical performance of AI and existing standards of care, trials are often too slow to keep pace with changes in the field. When trials are not the best option, developers, adopters, and regulatory agencies can consider prospective validation in real-world environments or implementation trials with relevant comparators, alongside robust post-deployment monitoring and surveillance.
  • Data quality: Bias and errors are huge concerns in the healthcare setting, as either can produce unintended consequences that range from exacerbating existing inequities to directly causing patient mortality. Data used to train and run AI algorithms must be complete, balanced, accurate, current, and relevant at all times. WHO suggests a combination of careful design and prompt troubleshooting to identify data quality issues as early as possible to prevent or mitigate harm. Both those who generate data and those who use it will need to collaborate with regulators to create high-quality, fit-for-purpose datasets that are suitable for specific use cases.
  • Privacy and data protection: There are several major data privacy and security frameworks in place that guide regional and international health IT activities, but none are fully prepared to deal with the complexities of widespread AI adoption. Developers will need to work closely with international regulators to understand and address the challenges of privacy and data protection in an AI-driven world. In the meantime, developers and users should consider implementing compliance programs that address privacy and security risks in the current enforcement environment.
  • Engagement and collaboration: AI shouldn’t be created or used in a vacuum. Collaboration between technology companies, care providers, payers, regulators, and patients will be vital for creating AI tools that truly solve important problems. WHO stresses that these partners should work together to architect a streamlined but effective regulatory environment that appropriately guides the evolution of AI without inhibiting the advances that these strategies can bring.  

While the report is not intended to serve as a specific policy or regulatory document, it does provide some food for thought about what to consider as AI becomes a mainstay of the healthcare environment.  

“Artificial intelligence holds great promise for health, but also comes with serious challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation,” said Dr Tedros Adhanom Ghebreyesus, WHO Director-General, in a press release. “This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimizing the risks.”  


Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry.  Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system.  She can be reached at jennifer@inklesscreative.com.

