Lael Brainard, National Economic Advisor
Neera Tanden, Domestic Policy Advisor
Arati Prabhakar, Director of the Office of Science and Technology Policy


As President Biden has said, artificial intelligence (AI) holds tremendous promise and potential peril. In few domains is this truer than healthcare. The President has made clear, including by signing a landmark Executive Order on October 30, that the entire Biden-Harris Administration is committed to placing the highest urgency on governing the development and use of AI safely and responsibly to drive improved health outcomes for Americans while safeguarding their security and privacy.

The Administration is pulling every lever it has to advance responsible AI in health-related fields. But we cannot achieve the bold vision the President has laid out for the country through U.S. government action alone.

That’s why we are excited that in response to the Administration’s leadership, leading healthcare providers and payers have today announced voluntary commitments on the safe, secure, and trustworthy purchase and use of AI in healthcare. These voluntary commitments build on ongoing work by the Department of Health and Human Services (HHS), the AI Executive Order, and earlier commitments that the White House received from 15 leading AI companies to develop models responsibly. All told, 28 providers and payers have joined today’s commitments: Allina Health, Bassett Healthcare Network, Boston Children’s Hospital, Curai Health, CVS Health, Devoted Health, Duke Health, Emory Healthcare, Endeavor Health, Fairview Health Systems, Geisinger, Hackensack Meridian, HealthFirst (Florida), Houston Methodist, John Muir Health, Keck Medicine, Main Line Health, Mass General Brigham, Medical University of South Carolina Health, Oscar, OSF HealthCare, Premera Blue Cross, Rush University System for Health, Sanford Health, Tufts Medicine, UC San Diego Health, UC Davis Health, and WellSpan Health.

The commitments received today will serve to align industry action on AI around the “FAVES” principles—that AI should lead to healthcare outcomes that are Fair, Appropriate, Valid, Effective, and Safe. Under these principles, the companies commit to inform users whenever they receive content that is largely AI-generated and not reviewed or edited by people. They will adhere to a risk management framework for using applications powered by foundation models—one by which they will monitor and address harms that applications might cause. At the same time, they pledge to investigate and develop valuable uses of AI responsibly, including solutions that advance health equity, expand access to care, make care affordable, coordinate care to improve outcomes, reduce clinician burnout, and otherwise improve the experience of patients.

We must remain vigilant to realize the promise of AI for improving health outcomes. Healthcare is an essential service for all Americans, and quality care sometimes makes the difference between life and death. Without appropriate testing, risk mitigations, and human oversight, AI-enabled tools used for clinical decisions can make errors that are costly at best—and dangerous at worst. Absent proper oversight, diagnoses by AI can be biased by gender or race, especially when AI is not trained on data representing the population it is being used to treat. Additionally, AI’s ability to collect large volumes of data—and infer new information from disparate datapoints—could create privacy risks for patients. All these risks are vital to address.

Yet at the same time—so long as we can mitigate these risks—AI carries enormous potential to benefit patients, doctors, and hospital staff. By one estimate, AI’s broader adoption could help doctors and health care workers deliver higher-quality, more empathetic care to patients in communities across the country while cutting healthcare costs by hundreds of billions of dollars annually. It could also help patients make more informed health choices by better understanding their health conditions and needs. While widespread AI adoption throughout the healthcare sector is a long way off, it is clear that AI has the potential to positively impact healthcare outcomes and the lives of doctors and patients in myriad ways.

Consider some examples. Each year, hospitals produce massive numbers of medical images—3.6 billion worldwide. AI is helping doctors analyze images more quickly and effectively, seeking signs of breast cancer, lung nodules, and many other conditions to reach more people with early detection than has previously been possible. Today, developing new drugs takes years and costs over $2 billion on average. AI is streamlining development with its ability to match drug targets with new molecules that can treat and cure diseases, saving time and money—and translating to cheaper, better care for patients. Clinician burnout is another big challenge. On average, for every patient they see, hospital staff must fill out over a dozen forms. New generative AI applications can extract data from patients’ medical records, populate it instantly into forms, record notes from patient sessions, and speed and improve patient communications.

To understand AI uses like these, and the risk-mitigation measures needed to realize them safely, the Biden-Harris Administration has engaged with healthcare providers, payers, academia, civil society, and other stakeholders throughout the sector. In these engagements, stakeholders have consistently stressed the need to deepen awareness and understanding of AI’s benefits and risks for healthcare—and to put questions of equity and accessibility of care front and center throughout these conversations.

Engagements like these have directly informed the Administration’s approach. In the President’s October AI Executive Order, he tasked HHS with a wide range of actions to advance safe, secure, and trustworthy AI. These actions include developing frameworks, policies, and potential regulatory actions for responsible AI deployment. The Order also directs HHS to launch a new program to document AI-related safety incidents, prioritize grants supporting innovation in underserved communities, and work to ensure compliance with nondiscrimination laws by AI deployers in healthcare. Actions like these build on important work already underway at HHS—such as the agency’s recent rule on transparency for AI in electronic health records and the Food and Drug Administration’s authorization of nearly 700 AI-enabled medical devices.

The private-sector commitments announced today are a critical step in our whole-of-society effort to advance AI for the health and wellbeing of Americans. These 28 providers and payers have stepped up, and we hope more will join these commitments in the weeks ahead.

###
