What does Biden’s Executive Order on AI mean for healthcare?

The White House has issued the nation’s first sweeping Executive Order on AI to establish ground rules for privacy and security.
By Jennifer Bresnick
Nov 2, 2023, 11:17 AM

The Biden Administration has issued the nation’s first Executive Order related to artificial intelligence (AI) in an effort to set the tone for future deployment of generative AI, large language models (LLMs), and other machine learning techniques.

According to officials, the Executive Order “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.” 

Executive Orders have the force of law in most situations, and this one is likely to affect the burgeoning healthcare AI marketplace in a big way. For example, the wide-ranging edict will require developers to share certain safety test results and other data with government agencies, comply with “rigorous standards” to be set by the National Institute of Standards and Technology (NIST) before public release, and adhere to a new slate of yet-to-be-determined cybersecurity protocols. 

“AI holds extraordinary potential for both promise and peril,” the Executive Order states. “Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”   

“Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.” 

Specific directives that may have an immediate effect on the healthcare AI ecosystem include:  

  • Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs. The Department of Health and Human Services (HHS) will be required to establish a safety program to accept reports of “harms or unsafe healthcare practices” involving AI. The agency will also be empowered to take action to remedy reported issues, although it is not yet clear how those powers will be developed and deployed in the field. 
  • Protect against the risks of using AI to engineer dangerous biological materials by developing new standards for biological synthesis screening. Government agencies that fund life science projects will be asked to establish safety standards as a condition of doling out funding to bioengineering projects. This is intended to create incentives for mitigating and managing AI-associated risks in this area.
  • Evaluate how agencies collect and use commercially available information—including information they procure from data brokers—and strengthen privacy guidance for federal agencies to account for AI risks. Data brokers are key players in the healthcare and life sciences research environments, since gathering enough high-quality data to train AI algorithms is a challenge for clinical trial sponsors and analytics experts. Government efforts will focus on commercially available information, particularly datasets that may contain personally identifiable data or other sensitive assets.
  • Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI. Bias and the exacerbation of healthcare inequities are significant concerns for healthcare-related AI tools. Additional guidance on how to avoid unintentional bias may help ensure that providers, health plans, life science companies, and other AI users can adopt new tools without fear of reinforcing existing inequities. 
  • Catalyze AI research across the United States through a pilot of the National AI Research Resource—a tool that will provide AI researchers and students access to key AI resources and data—and expanded grants for AI research in vital areas like healthcare and climate change. 
  • Promote a fair, open, and competitive AI ecosystem by providing small developers and entrepreneurs access to technical assistance and resources. Supporting small businesses as they commercialize AI breakthroughs will foster diversity in the AI environment and prevent Big Tech from monopolizing the market. The White House will actively encourage the Federal Trade Commission (FTC) to exercise its authority in this area to create a fair and open marketplace for AI-driven tools. 

In the months leading up to the Executive Order, seven leading artificial intelligence companies signed a “voluntary commitment” to manage the risks posed by AI across multiple industries. Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI (developer of ChatGPT) all agreed to “help move toward safe, secure, and transparent development of AI technology.” 

The signatories pledged to conduct internal security testing of new AI models; share relevant information with government agencies, academia, and civil society; engage in third-party vulnerability testing; and be as transparent as possible while using AI to solve important social problems. 

These leading companies also agreed to deploy mechanisms that make it easy for consumers to tell the difference between AI-generated and human-generated content – an area of concern for providers and patients alike, many of whom aren’t yet sure they fully trust AI to aid with medical decision-making. 

However, as AI becomes more deeply integrated into the clinical and operational environments, it will get harder and harder to distinguish between results or suggestions generated with the help of AI and those that use more traditional methods.   

Ultimately, the Executive Order’s new guidelines signal that government officials are taking artificial intelligence very seriously. Healthcare AI developers should soon expect an avalanche of new regulations and requirements in a market that has, until now, been compared to the Wild West. The challenge will be how to implement necessary regulation and safeguard end-users without slowing or stifling innovation.   

In addition, with multiple government agencies involved, developers and implementers may need to create new staff roles – or enhance current positions – to manage increasingly complex compliance and quality assurance needs in the near future. 

Overall, healthcare is likely to benefit from more structure and guidance around AI. With patient safety at stake in every AI-driven decision, it will be important to hold developers and end-users accountable for deploying tools that are safe, secure, unbiased, and effective when life-or-death decisions are on the line. If federal agencies can successfully balance regulation with innovation, the AI boom will continue to offer exciting new capabilities to reduce costs, improve outcomes, and enhance experiences for all. 


Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry.  Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system.  She can be reached at jennifer@inklesscreative.com.

