NAM announces AI code of conduct project
The National Academy of Medicine (NAM) has brought together leaders from healthcare, Big Tech, and academia to create a code of conduct for the use of artificial intelligence in healthcare. The Artificial Intelligence Code of Conduct (AICC) project aims to establish a shared guiding framework for the development and application of artificial intelligence tools, including generative AI and large language models (LLMs) such as ChatGPT, in clinical care, life sciences, and other health-related areas.
The goal of the three-year project is to “clarify roles and responsibilities of the many stakeholders on issues of privacy, ethics, equity, accountability, and applicability, at each stage of the AI lifecycle,” NAM said in a press release. “The Code of Conduct will represent a ‘best practice’ framework, subject to testing, validation, and improvement as the technology and the ability to effectively govern it progresses.”
NAM has recruited almost two dozen prominent clinicians, researchers, and healthcare thought leaders from a wide array of technology and life sciences companies, academic medical centers, and other care providers for its AICC Steering Committee. Committee members hail from such notable organizations as Optum, Mayo Clinic, Microsoft, Google, Royal Philips, Memorial Sloan Kettering, and Harvard Medical School, among others.
“Involving these accomplished national leaders from across the US is essential for creation of a harmonized, broadly adopted AI Code of Conduct, as well as for development of the national architecture that promotes the equitable and responsible use of AI,” said Dr. Michael McGinnis, a senior leader at NAM. “This collaborative effort will help ensure that the application of health AI is based on the best science, and is consistent with ethical principles and societal values in pursuit of effectiveness, efficiency, and equity for all members of society.”
Over the next three years, the AICC Steering Committee will work with industry stakeholders and the public to develop actionable guidance for the future of AI. Committee members will present a series of webinars throughout the project period highlighting their milestones and collecting feedback. Final recommendations and action items will be published in a series of papers by the end of the project term.
AICC will be closely intertwined with the work of the Coalition for Healthcare AI (CHAI), which recently released its own take on the bedrock principles for creating trusted AI in healthcare with minimal built-in bias. There is some degree of overlap in the founding members of CHAI and the members of the AICC Steering Committee, although CHAI has a much heavier governmental presence due to a number of agency observers.
The two organizations will use their resources to inform each other’s work and collaboratively chart a path forward for AI in clinical care and life sciences research.
“Throughout the course of the project, the NAM effort will inform the efforts of CHAI, which is providing robust best practice technical guidance, including assurance labs and implementation guides to enable clinical systems to apply the Code of Conduct,” the AICC website explains. “Similarly, the efforts of the CHAI and other groups addressing responsible AI will inform and clarify areas that will need to be addressed in the NAM Code of Conduct. The work and final deliverables of these projects are mutually reinforcing and coordinated to ensure broad adoption and active implementation of the AICC in the field.”
Such collaborative efforts will be crucial as AI continues to rapidly evolve across healthcare and other sectors. As with many new technologies that set industry leaders abuzz with possibilities, it will be important to take a measured and objective approach to understanding the potential – and the limitations – of these tools, especially in such a high-risk, highly regulated environment as healthcare.
Despite enthusiasm from tech developers and many thought leaders, front-line providers and their patients still view AI with skepticism, and neither group wholly trusts the first-generation technologies that are currently creeping their way into the care environment.
These types of industry-wide efforts to establish guardrails and encourage responsible adoption will be necessary to overcome trust barriers and develop technology tools that augment high-quality clinical care without reinforcing biases or creating safety risks for patients.
Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry. Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system. She can be reached at email@example.com.