Background Press Call on New Artificial Intelligence Announcements
Via Teleconference
6:36 P.M. EDT
MS. PATTERSON: Thanks, everyone, for hopping on today’s press call. Today, we’ll be discussing a new AI announcement from the Biden-Harris administration.
On today’s call will be [senior administration official]. After she delivers remarks, we’ll open up the call for Q&A from reporters.
As a reminder, this call is on background, attributable to a senior administration official. And the remarks in the call, as well as the embargoed factsheet that you all should have received, are embargoed until 5:00 a.m. Eastern on Thursday morning.
And with that, I’ll hand things over to [senior administration official].
SENIOR ADMINISTRATION OFFICIAL: Thank you, Robyn. Thank you all for joining. I know you all have the factsheet, so I’ll just say a few words before we get to your questions.
Artificial intelligence has been part of our lives for years. And right now, the pace of innovation is accelerating, and the applications are getting broader and broader.
As new tools hit the market, the extraordinary opportunities that AI presents are coming more into focus. But as is true with all technologies, we know there are some serious risks.
As President Biden has underscored, in order to seize the benefits of AI, we need to start by mitigating its risks. And this core principle has guided our work on AI from the beginning.
Here’s the bottom line: The Biden-Harris administration has been leading on these issues since long before these newest generative AI products.
Last fall, we released the landmark Blueprint for an AI Bill of Rights. At a time of rapid innovation, it was essential that we make clear the values we must advance and the commonsense rights we must protect. It’s an important anchor and a roadmap. And we’ve built on this foundation.
We’ve also released the AI Risk Management Framework. And with this and the Blueprint for an AI Bill of Rights, we’ve given companies and policymakers and the individuals building these technologies some clear ways that they can mitigate the risks.
We’ve also been clear that the federal government will leverage existing authorities to protect people and to hold companies accountable. Just last week, four enforcement agencies reiterated that commitment. And through executive actions and investments, we are making sure that the federal government is leading by example.
To further the President’s vision on AI, we’re making new announcements tomorrow.
First, we’re investing an additional $140 million to stand up seven new National AI Research Institutes. That will bring the total to 25 National AI Research Institutes across the country, with half a billion dollars of funding to support responsible innovation that advances the public good.
Second, we’re announcing that OMB, in the coming months, will issue clear policy guidance on the use of AI by the federal government. And this will make sure that we’re responsibly leveraging AI to advance agencies’ ability to improve lives and deliver results for the American people. And it will further our efforts to lead by example in mitigating AI risks and harnessing AI opportunities.
Now, there’s a lot the federal government can do to make sure we get AI right, but we also need companies and innovators to be our partners in this work. Tech companies have a fundamental responsibility to make sure their products are safe and secure and that they protect people’s rights before they’re deployed or made public.
Tomorrow, here at the White House, we’ll be meeting with the CEOs of four companies at the forefront of AI innovation. The meeting will be led by Vice President Harris, with participation from senior White House officials, including me.
We aim to have a frank discussion about the risks we see in current and near-term AI development. In this meeting, we’re also aiming to underscore the importance of their role in mitigating risks and advancing responsible innovation, and we’ll discuss how we can work together to protect the American people from the potential harms of AI so that they can reap the benefits of these new technologies.
Many companies are stepping up. Tomorrow, we’re announcing that major AI companies have committed to participate in an independent, public evaluation of their AI systems at the AI Village at DEF CON 31 — this is one of the largest hacker conventions in the world.
And there, these models will be evaluated by thousands of community partners and AI experts to see how they align with the values outlined in the Blueprint for an AI Bill of Rights and the AI Risk Management Framework.
Let me close where I started. These are important new steps to promote responsible innovation and to make sure AI improves people’s lives without putting rights and safety at risk. They build on the foundation that we have laid.
To be clear: The Biden-Harris administration is leading on responsible AI. And in the weeks and months ahead, you’ll continue to see us advance these goals.
And with that, I’m happy to take your questions.
MS. PATTERSON: Thanks so much, [senior administration official]. With that, we will open up the call for questions. If you have a question, please feel free to use the “raise hand” function on your Zoom.
And, first, we will go to Nandita Bose.
Q Hi, can you hear me?
SENIOR ADMINISTRATION OFFICIAL: Yes.
Q Thank you for doing this call. The EU AI Act, as you’re aware, is trying to reshape the regulatory landscape for OpenAI and its competitors, and the EU has acted way more swiftly to regulate generative AI. I’m wondering if there’ll be any discussion on potential ways to regulate AI at this meeting.
And also, should we think of this meeting as sort of a precursor to the administration asking for a legislative push on this issue? Thank you.
SENIOR ADMINISTRATION OFFICIAL: Thank you for that question. On the EU AI Act, we don’t see this as a race. In fact, we’re working closely with our EU counterparts through the U.S.-EU Trade and Technology Council. You know, what we’re working on is advancing approaches to AI that serve our people in responsible and equitable and beneficial ways.
And the work that we’re doing here at home takes this comprehensive approach to harnessing AI’s benefits and mitigating its risks. And the President, of course, has also been very clear about the need for Congress to step up and act on a bipartisan basis to hold tech companies accountable, including for algorithmic discrimination.
And, you know, mitigating these harms that are present in these technologies today is, of course, also going to provide the basis for effective responses to what’s likely to be much more powerful technologies in the future. And, you know, again, that’s why we think it’s so important that the companies take seriously this fundamental responsibility to make sure their systems are trustworthy and safe and secure before they’re released or deployed.
MS. PATTERSON: Thanks for the question. Next, we’ll go to Josh Boak.
Q Thanks so much for doing this. I wanted to dovetail on what Nandita was asking. Do you believe that international standards need to be set regarding AI, given that its reach goes far beyond the U.S. and Europe? And what’s the process by which you achieve that, given today’s announcement — or given tomorrow’s announcement?
MS. PATTERSON: [Senior administration official], you may still be (inaudible).
SENIOR ADMINISTRATION OFFICIAL: No, thank you. Thank you. Thanks for the question. Look, I mean, I think this is — we need to — we need to take an approach that is — you know, we’re clear about what we’re focusing on. And we need to take one step at a time.
This is clearly a global technology. And that cooperation that I mentioned with our EU colleagues is going to be an essential part of where we need to go.
MS. PATTERSON: Thanks for your question, Josh. Our next question will come from Cat Zakrzewski.
Q Thank you so much. I just wanted to ask you — you know, you talked about the need for the companies to ensure that they are responsibly releasing products and checking the safety of those products. Does the Biden administration trust these companies to do that proactively given the history that we’ve seen in Silicon Valley with other technologies like social media?
SENIOR ADMINISTRATION OFFICIAL: Thank you for the question. The broad implications of this new generation of AI are going to demand responsible behavior from all parties. And, you know, clearly there will be things that we are doing and will continue to do in government, but we do think that these companies have an important responsibility. And many of them have spoken to their responsibilities. And, you know, part of what we want to do is make sure we have a conversation about how they’re going to fulfill those pledges.
MS. PATTERSON: Thanks for the question, Cat. Our next question will come from David McCabe.
And, David, you should be able to unmute yourself. Not sure if we still have David. Oh, there you are.
Q Can you hear me? Great. So, [senior administration official], I have a quick logistical question, which is just two things. This is the first meeting like this with CEOs of these companies in the kind of current AI boom, right? And then the second is just: When do you expect to read the meeting out?
And then a more substantive question for the call, which is, like: When you look at — when the administration looks at generative AI and AI more generally, like what would be a comparable historic compar- — what would be a historic comparison, right? A technological development that you think is of equal import to artificial intelligence when you sort of think about the impact here.
SENIOR ADMINISTRATION OFFICIAL: I can take the first two questions.
The first one is just: In terms of logistics, we’ll work to get a readout out tomorrow afternoon, soon after the meeting concludes.
And then, as to the first question about whether this is the first meeting with these CEOs: I believe it is, but let me double check and get back to you.
SENIOR ADMINISTRATION OFFICIAL: Yeah, let me address that. We — as you might imagine, in the work that we’ve been doing, we are in constant contact not only with the many companies who are working on AI technologies, but also with nonprofits and academic experts. We really need all those different perspectives, and so that’s been an active part of our engagement throughout.
MS. PATTERSON: Thanks, David. Our next question comes from Kevin Collier.
Q Hi. Thanks for doing this. It feels like there’s a lot of different parts of the government that have different concerns about AI, which itself is a very broad term. But I’m curious about AI threats — ways it can go haywire; it can be used for hacking, disinformation, exploiting critical infrastructure. Is that your concern? Is that something that is going to come up in this meeting? Is this the NSC’s purview? How do you think about it?
SENIOR ADMINISTRATION OFFICIAL: Well, the starting point is very much our — you know, our North Star here is this idea that if we’re going to seize these benefits, we have to start by managing the risks. The risks are very — they’re quite diverse, as you might imagine, for a technology that has so many different applications.
I’ll mention some of the primary categories. One is safety and security risks — everything from autonomous vehicles to cybersecurity risks and beyond. There are risks to civil rights, such as bias that’s embedded in housing or employment decisions; risks to privacy, such as enabling real-time surveillance; risks to trust in democracy, such as the risks from deepfakes; and, of course, risks to jobs and the economy, thinking about job displacement from automation now coming into fields that we previously thought were immune. So it’s a very broad set of risks that need to be grappled with.
MS. PATTERSON: I appreciate the question. Our next question comes from Mohar Chatterjee.
Q Hi. Oh, I am unmuted. I have actually two quick questions. One is about the NAIR Institutes. I’m wondering how you’re practically seeing that interplay between research coming out of those National AI Research Institutes, how that will actually feed into federal agency efforts to kind of achieve that vision that was laid out in the Blueprint for an AI Bill of Rights. That’s question one.
Question two is just: The O- — you mentioned the OMB as part of the factsheet. I know that the Biden administration has, like, billions in AI asks right now. And I’m wondering how those budget negotiations are proceeding, if at all.
SENIOR ADMINISTRATION OFFICIAL: You know, I — thanks for — I think those are great questions. The place to start is to recognize that this is a technology that has many, many, many different applications and that — well, that’s true commercially, and it’s equally true in terms of public purposes. So you — I think it’s exactly right. But you will see it embedded in lots of different parts of what the government does.
Specifically to your question about the National AI Research Resource: The purpose of that work that the advisory committee — the task force came back with is to provide the data and the compute infrastructure that’s so essential to build these systems. So that’s the foundational piece of infrastructure that the task force has recommended.
MS. PATTERSON: Our next question comes from Natalie Alms.
Q Hey, guys. Thank you so much for hosting this. I appreciate it. So I wanted to ask a little bit about the guidance that you guys are previewing for the federal government itself and federal agencies. Could you give us any more details on what government agencies should expect? And will there be any similar sorts of evaluations or assessments of AI that government is already using, similar to what you sort of previewed for the private sector?
SENIOR ADMINISTRATION OFFICIAL: Yes, one of the things I mentioned was that OMB would be forthcoming with guidance across the federal government. What we’re announcing tomorrow is that that’s coming down the pike. And it’ll come out for comment in the summer. And I think you’ll see the more comprehensive answer to that there.
And, you know, again, we’re just going to keep coming back to this North Star of making sure that we’re mitigating the risks while we’re reaching for the benefits.
MS. PATTERSON: Our next question will come from Jory Heckman.
Q Hi, thanks for doing this, and thanks for taking my question. Going back to the OMB guidance that we’ll see later this summer, what’s your message to federal employees who may be wary of this new era of generative AI?
And you said earlier about, you know, job displacement and what that could mean for the workforce more broadly. In terms of the federal workforce and agencies getting more and more comfortable with using this technology, you know, what’s your message to them?
SENIOR ADMINISTRATION OFFICIAL: I think, to me, the message keeps coming back to understanding that this is a very powerful technology. And I think for federal workers in particular, it’s an opportunity to show how serving the public can be a place to lead on using AI wisely and responsibly.
There are so many public missions that the government does for which AI can be enormously beneficial. But the whole ballgame is going to be how it’s implemented. And it’s really on the shoulders of these federal employees, and I’m sure they’re going to step up to it.
Q And then just one more on the OMB memo here — or the OMB guidance, rather, that’s coming up. In terms of the other documents you described — the AI Bill of Rights, other policies that the administration has put out on this — are there any areas of concern that the administration is looking to underscore in all of this or hasn’t addressed before that this upcoming guidance is really particularly looking to address?
SENIOR ADMINISTRATION OFFICIAL: The OMB guidance, you know, is going to be broad across government. I think, you know, a lot of the issue with responsible AI comes right back to the executive order that the President signed in February — his equity executive order — because at the end of the day, if you’re going to use this powerful technology, you have to do it in a way that protects rights and protects safety. So I think that’s really the foundation of this whole thing here.
MS. PATTERSON: And our next question will come from Brittany Trang.
Q Hi. I was wondering if this has any impact on how individual federal agencies regulate AI. I’m thinking in particular of the FDA that’s already had to deal with this issue in AI-enabled medical devices. Does this have any impact on that?
SENIOR ADMINISTRATION OFFICIAL: Yeah, thank you for the question. I don’t want to get ahead of the guidance and the work that’s being done. But I think you really highlighted the fact that AI is coming into virtually every part of public missions. And so I — you know, I think it is going to be important for people to step up across the board.
MS. PATTERSON: Our next question comes from Mona Austin.
Q Hi, can you hear me?
MS. PATTERSON: We can hear you great.
Q Hi. Are there any stakeholders from the intellectual property community? Because there are enormous implications about rights and ownership across the board when it comes to artificial intelligence. That’s the one question.
And then you talked earlier about broad application of how AI can be utilized. And I’m just wondering if we are strictly talking about the technological aspects. Because there — as you are aware, there’s so many layers to how this is being used, and it’s mainly technological, but there are other aspects of it as well. So I’m wondering how broad your conversations have been or will be.
SENIOR ADMINISTRATION OFFICIAL: Yeah, first, thank you. To your point about intellectual property, we’re very aware of that as one of the many issues. And it’s — it’s part — that’s very much part of the community that we — we’re talking with and engaged with.
And I can’t emphasize too strongly — I think part of the reason it’s so important that President Biden stepped up to lead in this area is simply because of the breadth of the applications here. It’s going to affect so many different aspects of Americans’ lives. And I think that really underscores why we feel that this work has to be done in a way that does take us into the — into the best future we can possibly craft.
MS. PATTERSON: And our last question will come from Justin Sink.
Q Hey, guys. Hopefully you can hear this. I just wanted to ask about the company commitments real quick. Just to be clear, the evaluation, will it just be from attendees of DEF CON? I wasn’t sure what you meant by “community partners.”
And then is it just going to be open for the few days in August? Or is it going to be something ongoing? So when new products are rolled out or existing ones evolve, you know, is there going to be some sort of continuing platform that that evaluation can go on going forward? Or is this just kind of a one-shot snapshot?
SENIOR ADMINISTRATION OFFICIAL: Thank you for that question. I think this is going to be an important first step. This is the first public assessment of multiple large language models. And it’s going to be done in a way that responsibly discloses any of the issues that are discovered to the companies to let them mitigate those issues.
Of course, what we’re drawing on here is red-teaming, which has been really helpful and very successful in cybersecurity for identifying vulnerabilities. That’s what we’re now working to adapt for large language models. So I think it’s an important step, and there’ll be more details about it as the event draws closer.
MS. PATTERSON: Thanks, Justin. And thanks, everyone, for hopping on. I know we weren’t able to get to all the questions, but please feel free to shoot me a note, or Subhan from OSTP, with any additional questions, and we’re happy to get you the info that you need.
As a reminder, today’s call and the factsheet are embargoed until 5:00 a.m. Thursday morning. And then, logistically, for tomorrow’s meeting, it will be closed press, but we’ll be releasing a readout tomorrow afternoon after it happens.
Thanks, everyone, for hopping on.
6:59 P.M. EDT