As Prepared for Delivery at the Carnegie Endowment for International Peace


Thank you, Tino, for the opportunity to talk with this community. I greatly appreciate the work that you and your colleagues and partners have done on artificial intelligence, and I look forward to a rich discussion today.

AI is one of the most consequential technologies of our times. President Biden and Vice President Harris have been clear from the start that we must manage AI’s risks so that we can seize its benefits.

The President recently signed a landmark executive order that is the most significant action taken anywhere in the world on AI. It directs the establishment of new standards for AI safety and security. It protects Americans’ privacy and protects against discrimination. It focuses on workers and boosts innovation and competition. And it advances American leadership around the world.

The executive order is part of a broad and comprehensive strategy that the Biden-Harris Administration has pursued since Day One. Today, I would like to take a step back and talk about that strategy. It starts with understanding this complex technology and its implications for all of us.

Throughout history, people have used powerful technologies for good and for ill, and AI is no different.

Generative AI burst on the scene over the last year with an astonishingly rapid advance in the ability to create text, images, audio, and video. And because these are ways that people communicate, generative AI has captivated millions of individuals and entered the public discourse.

It is just one example of the broad class of today’s machine-learning-based AI technologies. These are computational systems that people train on data and then use to make statistical predictions of many sorts. Our age of information offers an almost boundless variety of data types to train AI models. Generative AI is trained on language and other media. Other AI models are trained on a wide variety of sensor data, scientific data, financial data, administrative data—and all the data that’s generated as billions of people click and surf online.

That means the applications of AI are extraordinarily broad, and each one comes with a bright and a dark side. There are so many examples at work and at home:

Car designers know that AI can be used to make cars safer—and that it can also be used to lull drivers into a dangerous complacency.

Biologists know that AI can help design cures for intractable diseases—and that it can help design biothreats that are worse than what occurs in the natural world.

Banks, mortgage lenders, and hiring managers know that AI can help speed up application processing—and that it can embed discrimination in those decisions.

Workers know that AI can help them do more and earn more—and that it can be used to surveil workers and hollow out or eliminate jobs.

And in our personal lives, AI is behind every online experience, helping you find a recipe or a bargain—or a community or a mate. And we know that in the process, AI can erode the integrity of our information environment, erode our privacy, and, especially for our kids, too often erode their mental health.

These are the implications we can already see and anticipate. Advances in this technology have surprised us before, and they will surprise us again. Each leap in capability will bring new opportunities and, with them, new risks.

And although the AI community tends to talk about the “what”—the technology—it is really the “who” that is responsible for the advances and all of their implications. People choose to build AI models, and people choose the data to train them on. People choose what to connect these models to and what to automate. People choose how to use the resulting capabilities.

The duality of bright and dark is the nature of powerful technologies. How the story of the future unfolds depends on our deeply human choices.

Those choices are being made around the world. Every country is racing to use AI to build a future that embodies its own values.

We may disagree on other things, but none of us wants to live in a world driven by technology that is shaped by authoritarian regimes. And that is why the President has been clear that American leadership in the world today requires American leadership in AI. That means getting it right at home, and it means working closely with our allies and partners around the world.

It’s also why it was essential to start our work in the Biden-Harris Administration with great clarity about our values. Thanks to the President and Vice President’s leadership, we did exactly that, and we did it even before AI chatbots came into our lives. In October of last year, we published a “Blueprint for an AI Bill of Rights.” This is a statement about what we value as a nation: Safety. Individual privacy. Freedom from discrimination. Transparency.

We’re in choppy waters with this rapidly changing technology, and that means it’s more important than ever to steer by the light of these fundamental values.

That’s where we start. The next step is to have a clear understanding of AI’s complexities and subtleties. Because many AI systems compute statistical predictions, one issue is the quality of outputs that AI models deliver. When you put a prompt into a chatbot, it may appear to stitch together bits of information—seemingly like search results on steroids. But in fact, the AI model behind the chatbot is stitching together statistical estimates of the components of individual words, the pieces it computes to have the highest probability of responding to the prompt. So good-quality training data is helpful, but it is not a guarantee of good-quality outputs.

To address this fact, responsible AI developers are building guardrails into their systems. But it is important to note that users have broken many of these guardrails within days or weeks of release. This is true of proprietary systems as well as open-source systems. So guardrails, too, are helpful but not a panacea.

These facts make it critical that all AI developers and users know how to evaluate and validate the quality of a model. All our hopes for wrangling AI depend on robust tools and methods to establish how safe, how trustworthy, and how effective AI models are. Today, the AI community has made only the barest beginning. This period in AI technology is like the era of medicines before clinical trials, when anyone could market a potion as a cure and no one knew what would happen if you took it.

Another important factor to consider is that it takes fewer and fewer resources to develop highly capable AI models. When new models from OpenAI and Anthropic debuted in March, they were estimated to have cost tens of millions of dollars, or even more than $100 million, to train. That cost, or more, was widely seen as table stakes for subsequent powerful models. But two factors combined to dramatically shrink the time and cost to get in the game.

Better engineering and more efficient algorithms mean much more training can now be done with less computation and less data. For example, MosaicML spent only $450,000 to train a model with performance similar to OpenAI’s GPT-3, which cost over $4 million to train.

As well, Meta and other companies have open-sourced their advanced AI models, allowing other developers to rapidly and cheaply improve and extend those models. Now, people can create custom AI models with only a few hours of effort and off-the-shelf computers. A plethora of specific applications are now within reach for as little as $100—not $100 million, but just $100. Examples include a model tuned to answer questions about the law in India and another tuned to pass a U.S. medical licensing exam. These models are not the next frontier of core AI technology, but they are proving to be extremely capable in their specific domains.

Those interested in AI’s upside see that more people will be able to build models that solve more problems: the technology is democratizing. Those eyeing AI’s risks see greater harms from powerful technology in more hands: it is proliferating. Both are true, and policies and practices must account for the fact that AI technology is widely available and spreading fast.

AI’s harms and risks also require deliberate assessment. Catastrophic outcomes are a frequent topic of discussion. It’s important to recognize that catastrophes can play out over many time periods.

Some could play out in just weeks or months. The use of AI to perpetrate a wide-scale cyberattack on critical infrastructure, or an AI-enabled biological attack, could lead to societal-scale disaster relatively quickly.

In other cases, the time frame could be years. Amplified and distorted information that undermines mutual trust and undermines even the idea of truth itself may take somewhat longer to wreak its damage, but this too is catastrophic.

And harms from bias can play out over decades and lifetimes. The job not won and the mortgage not granted mean dimmer futures for parents and for their children. President Biden often says, “America is an idea…of hope and opportunity, of possibilities, of giving everyone a fair shot…We have never fully lived up to it, but we’ve never walked away from it either.” If we allow AI to be used to automate and magnify discrimination at scale, it can be catastrophic to the very idea of America.

These are some of the important factors as we contend with this potent, fast-changing technology with broad applications and broad implications.

Let me turn to what we are doing. President Biden and Vice President Harris have set a clear goal. It is to manage AI’s risks so that we can seize its benefits.

The Administration’s work is illuminated by extensive discussions with researchers, companies, and civil society. That includes robust exchanges in senior-level roundtables with AI CEOs, critics and experts, and civil society leaders. It includes hundreds of conversations with people investing in, building, using, and affected by AI, and it includes hundreds of responses to a Request for Information on AI priorities. We need all of these perspectives, and we know the work at hand will take all parties stepping up to play their part.

Let me outline the actions we have taken:

This spring, President Biden and Vice President Harris convened leading AI companies at the White House, calling on them to meet their responsibilities. The Vice President told them they have “an ethical, moral, and legal responsibility to ensure the safety and security” of their products. As a result of the President’s leadership, 15 leading AI companies are implementing voluntary commitments that relate to safety, security, and trustworthiness.

The Administration is also working with our allies and partners around the world on AI governance. This includes the G7 Hiroshima AI Process and the United Kingdom’s AI Safety Summit. We are also consulting with our allies and partners on next steps at the United Nations and possible initiatives to advance the global discussion on AI governance.

We are also engaged with Congress on a bipartisan basis as it considers AI legislation.

At the same time, the President and Vice President know that the executive branch must lead by example. They called on every department and agency across government to move with urgency on AI, and that’s exactly what this Administration has done. Through a series of steps, the Biden-Harris Administration is taking action on two fundamental responsibilities of government: to contain the wide variety of AI risks and harms, and to use AI in powerful and responsible ways for all public missions.

Federal agencies have announced numerous efforts. And recently, on the heels of President Biden signing a broad executive order, the Office of Management and Budget (OMB) issued proposed policy for all agencies. These amount to dozens of individual actions, reflecting the great breadth of AI applications and issues. Here are just a few examples.

To address risks to civil liberties, under the draft OMB policy, agencies will assess equity and fairness in their use of AI. For example, let’s say an agency wants to use AI to help determine who gets access to social safety net programs. Before deploying the system, this agency would be required to address algorithmic discrimination and build in an avenue for appeal. Agencies will also assess how their AI systems affect the privacy of Americans. This is how we get to more responsible use of AI by government.

To mitigate safety risks, the AI EO addresses multiple types of potential harms. For biological threats, it will establish a framework for screening the procurement of nucleic acid sequences, and will require government-funded researchers to use providers that follow this framework. It also establishes monitoring of extremely high compute usage so we have awareness when a new frontier AI model is being trained.

To address fraud and other risks to information integrity, in April four major federal enforcement agencies—the Consumer Financial Protection Bureau, the Federal Trade Commission, the Equal Employment Opportunity Commission, and the Department of Justice—reminded the entities they regulate that using AI does not get them off the hook, and that the agencies would pursue those committing harms using AI. The AI EO also will build the capability to label and authenticate government-produced content, so the public can know what content is in fact from the U.S. government.

To support workers, the EO directs the Labor Department to develop guidelines to protect workers, including when employers use AI to monitor them, and guidelines for federal contractors to avoid bias when they use AI in their hiring processes.

To build the capacity to test and validate AI systems, the AI EO instructs the National Institute of Standards and Technology—NIST—to drive the development of the tools and methods that all participants need to understand the safety, trustworthiness, and effectiveness of AI models.

To research and develop AI technology for public purposes, the EO initiates a pilot for a National AI Research Resource (NAIRR), a compute and data infrastructure to support AI research. As well, the National Science Foundation has already launched 25 AI research institutes that are seeding opportunities for better cybersecurity, better crops, AI-augmented learning, AI governance, and more. And DARPA has announced an AI Cyber Challenge, with prizes of up to $20 million for hackers who are the best at using AI to find cyber vulnerabilities.

To build government’s capacity to use and manage AI, the EO initiates an AI talent surge to bring all kinds of great people into public service, in concert with building the IT infrastructure that agencies need to harness AI.

That is a sampling of the work that’s underway—the work that is essential to manage AI risks and to put AI’s power to work. With strong focus and leadership from President Biden and Vice President Harris, we are making important progress on AI.

Let me wrap up by saying why we do this work. The reason to wrestle with these challenges, the reason to keep building AI capabilities is this: AI technology can help us achieve the great aspirations of our time. In fact, it’s hard to see how we achieve them without the power of AI.

AI is already part of how companies build products and services to compete around the world and create good jobs that support families here at home. It’s already part of how the Defense Department is building the next generation of military capabilities for a changing set of threats. It’s starting to open new ways for students to learn and for workers to train for new skills. It’s starting to play a role in medical diagnosis and treatment. It’s starting to help tease out the important factors in income mobility. And it’s starting to change the myriad ways in which the government provides essential services and interacts with citizens.

Much more is ahead, and you can glimpse it in research today. Researchers are using AI to predict earthquakes days to weeks in advance; to predict extreme weather patterns at resolutions as fine as one kilometer, over days, weeks, and months; to translate neural signals to activate the muscles of a person who has been paralyzed; and to design materials with seemingly impossible properties and promising antibiotics for antimicrobial-resistant infections.

These are the kinds of advances that open the door to a future in which we meet the climate crisis, strengthen our economy, bolster global peace and stability, achieve robust health, and open opportunities for every individual.

If we do this right, we can use AI to knock down walls and break through barriers so that we can achieve our great aspirations.

It’s an honor to work with you and the many others across the wide community who care about this great challenge.

President Biden often reminds us that, “We stand at an inflection point in history.” Artificial intelligence is part of this critical moment.

For generations before us, experts and advocates, companies and policymakers, individuals and communities have made pivotal choices about how to use powerful technologies.

Now it’s our turn. Let’s get AI right.
