1:18 P.M. EDT
THE PRESIDENT: I’m the AI. (Laughter.) If any of you think I’m Abe Lincoln, blame it on the AI.
First of all, thanks. Thanks for coming. And I want to thank my colleagues here for taking the time to come back again and again as we try to deal with this. We’re joined by leaders of seven American companies who are driving innovation in artificial intelligence. And it is astounding.
Artificial intelligence promises enormous risk to our society, our economy, and our national security, but also incredible opportunities — incredible opportunities.
Just two months ago, Kamala and I met with these leaders — most of them are here again — to underscore their responsibility to make sure the products they are producing are safe, and to make public what those products are and what they aren’t.
Since then, I’ve met with some of America’s top minds in technology to hear the range of perspectives, possibilities, and risks of AI.
Kamala can’t be here because she’s traveling to Florida, but she’s met with civil society leaders to hear their concerns about the impacts on society and ways to protect the rights of Americans.
Over the past year, my administration has taken action to guide responsible innovation.
Last October, we introduced a first-of-its-kind AI Bill of Rights.
In February, I signed an executive order to direct agencies to protect the public from algorithms that discriminate.
In May, we unveiled a new strategy to establish seven new AI research institutes to help drive breakthroughs in responsible AI.
And today, I’m pleased to announce that these seven companies have agreed to voluntary commitments for responsible innovation. These commitments, which the companies will implement immediately, underscore three fundamental principles: safety, security, and trust.
First, the companies have an obligation to make sure their technology is safe before releasing it to the public. That means testing the capabilities of their systems, assessing their potential risk, and making the results of these assessments public.
Second, companies must prioritize the security of their systems by safeguarding their models against cyber threats, managing the risks to our national security, and sharing the best practices and industry standards that are necessary.
Third, the companies have a duty to earn the people’s trust and empower users to make informed decisions — labeling content that has been altered or AI-generated, rooting out bias and discrimination, strengthening privacy protections, and shielding children from harm.
And finally, companies have agreed to find ways for AI to help meet society’s greatest challenges — from cancer to climate change — and to invest in education and new jobs to help students and workers prosper from the enormous opportunities of AI.
These commitments are real, and they’re concrete. They’re going to help the industry fulfill its fundamental obligation to Americans to develop safe, secure, and trustworthy technologies that benefit society and uphold our shared values.
Let me close with this. We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years. That has been an astounding revelation to me, quite frankly. Artificial intelligence is going to transform the lives of people around the world.
The group here will be critical in shepherding that innovation with responsibility and safety by design to earn the trust of Americans. And, quite frankly, as I’ve met with world leaders, all of the G7 is focusing on the same thing.
Social media has shown us the harm that powerful technology can do without the right safeguards in place.
And as I said at the State of the Union, Congress needs to pass bipartisan legislation to impose strict limits on personal data collection, ban targeted advertising to kids, and require companies to put health and safety first.
But we must be clear-eyed and vigilant about the threats that emerging technologies can pose — they don’t have to, but they can pose — to our democracy and our values.
Americans are seeing how advanced artificial intelligence and the pace of innovation have the power to disrupt jobs and industries.
These commitments are a promising step, but we have a lot more work to do together.
Realizing the promise of AI while managing the risk is going to require some new laws, regulations, and oversight.
In the weeks ahead, I’m going to continue to take executive action to help America lead the way toward responsible innovation. And we’re going to work with both parties to develop appropriate legislation and regulation. I’m pleased that Leader Schumer and Leader Jeffries and others in the Congress are making this a top bipartisan priority.
As we advance the agenda here at home, we’ll lead work with our allies and partners on a common international framework to govern the development of AI.
I thank these leaders that are in the room with me today — (clears throat) — excuse me — for their partnership and the commitments they’re making. This is a serious responsibility, and we have to get it right. And there’s enormous, enormous potential upside as well.
So I want to thank you all. They’re about to go down to a meeting, and I’ll catch up with them later.
So thank you, thank you, thank you.
1:24 P.M. EDT