Remarks as Prepared for Delivery
Thank you, Dr. Locascio, for that introduction and to Deputy Secretary Graves for kicking us off.
I want to recognize and thank Chair Lucas and Ranking Member Lofgren for being here today, which speaks to how deeply we as a government appreciate the importance of this work. Thank you for your presence and your leadership.
I want to begin on a note of further gratitude: For the extraordinary work that has produced the AI Risk Management Framework, and for the rigorous process and thoughtful partnership that has brought us to this milestone.
I am grateful to Secretary Raimondo for her leadership and to Undersecretary and NIST Director Laurie Locascio.
And, of course, to Elham Tabassi and her team, for their extraordinary efforts in producing this framework.
One former member of that team, Dr. Mark Latonero, is now detailed to OSTP, serving as Deputy Director of the National AI Initiative Office, working to continue our close partnership.
Almost a year ago, I had the opportunity to attend a workshop on the NIST AI Risk Management Framework and speak to the way OSTP and NIST were linking arms to accomplish our shared priorities to ensure U.S. leadership in AI research and development. That workshop was a testament to the inclusive character of this effort—a process that has lifted the voices and the values of people and companies from all backgrounds.
And it was also an example of how NIST, the Department of Commerce, and OSTP have marched together throughout this Administration toward a shared and unified vision for technology.
We know that artificial intelligence and other automated systems are shaping almost every part of our lives: The way we work, the way we learn, how we access healthcare, and how we find a good job.
We know that data-driven tools do tremendous good, generating text or speech or images, or using data to help farmers or doctors or small businesses across the country.
The potential of these tools is extraordinary.
And yet, too often, the use of these technologies comes with serious risks. Without safeguards, AI can pose threats to good-paying jobs. Algorithms can accelerate misinformation and online harassment and harm the mental health of our children. They can be used and abused to track our communities and to limit access to fundamental opportunities.
Our Administration—like so many in industry, in Congress, and across the United States—is clear-eyed about these risks.
And President Biden has been clear about what we must do to address them:
We must act now—to protect our kids and end online hate and harassment; to defend Americans’ civil rights and ensure technology is working for all of our communities; and to support an environment where American innovation can continue making life better for everyone.
These priorities are the cornerstone of the Biden Administration’s thinking about the dynamic risks and opportunities of AI and emerging technology. And people are meeting us there.
So many who are building these technologies across America—from businesses to engineers—want to do the right thing.
Policymakers want to do the right thing, too, but need support and partnership to help shape laws and regulations to protect their constituents.
That’s the message we at OSTP have heard through extensive public engagement on these issues over the last two years—in panel discussions, public listening sessions, meetings, a formal request for information, and other outreach formats. And in conversations with so many people represented in this room.
We’ve heard from workers and high school students, business associations and scholarly associations, software engineers and researchers, civil society organizations and community activists, CEOs and entrepreneurs, public servants across federal agencies, and members of the international community.
What’s clear from every engagement is that AI presents a set of challenges that is bigger and broader than any one effort or any single agency.
The United States is taking a principled, sophisticated approach to AI that advances American values and meets the complex challenges of this technology.
That’s why OSTP has provided extensive input and insight into the RMF throughout its development, sharing perspectives and expertise on this framework, collaborating on this vision, and laying out considerations for thinking about a wide variety of AI risks.
It’s why, at the same time, NIST was at the table as OSTP developed the Blueprint for an AI Bill of Rights, helping us set out specific practices that can be used to address one critical category of risks: the potential threats posed by AI and automated systems to the rights of the American public.
It’s why, last year, the Administration launched the process to develop an updated National AI R&D Strategic Plan to guide Federal investments in AI innovation.
It’s why this Fall, the Administration held a roundtable to lay out our core principles for tech policy—principles that are reflected in both documents, including increasing transparency about platform algorithms and stopping algorithmic discrimination.
It’s why OSTP has co-led a Federal Advisory Committee, the National AI Research Resource Task Force, that just this week released a plan that would create more opportunities for Americans from all backgrounds and disciplines to pursue AI research. Because it matters who is conducting AI research and development.
It’s why agencies across our Administration continue to announce action on protecting the American people in an AI-driven world. New initiatives to root out bias from healthcare algorithms. Efforts to require employer disclosure of certain worker surveillance, whether or not employers are using advanced technologies to conduct it. Actions to help educators and students, patients and health-care providers, veterans, renters and homeowners, consumers, families, and communities.
And it’s why—just this month—President Biden called on Congress to do its part on these issues by taking bipartisan legislative action.
Each one of these steps is part of our Administration’s integrated vision for building a cutting-edge, responsible, equitable innovation future for America.
The work is too big—the technology evolving too quickly, the potential outcomes too important—for anyone to stay on the sidelines.
So now is the time for urgent action across all parts of our government and all parts of our society, using every tool at our disposal.
As we’ve heard this morning, NIST plays a unique and powerful role in that vision:
This organization holds a fascinating place in the history of the American science and technology policy enterprise. The people who created the Bureau of Standards over a century ago could never have foreseen this particular moment.
They couldn’t have dreamed up a world where our most personal and intimate information would live in a cloud, or where our ability to buy a home or get a job might be determined by a machine.
But they did understand that Americans are innovators. And they saw that change, competition, and, as the President likes to say, “possibilities” sit at the center of who we are. So an organization was born that could keep up with the pace of progress—to help ease the path for innovation and competitiveness by laying out standards and measurements.
The AI Risk Management Framework honors that legacy and advances that critical work for this era of American innovation. The framework gives us practical guidance on how to map, measure, manage, and govern AI risks. It guides us on the characteristics that can make AI safe and secure, fair and accountable, and protective of our privacy.
And critically, it moves from the technical to the sociotechnical.
I want to spend a moment on this point, because it is so important—and it’s one of the things that sets this framework apart:
The RMF—like the AI Bill of Rights—acknowledges that when it comes to AI and machine learning algorithms, we can never consider a technology outside the context of its impact on human beings.
The reality is this: Algorithms are deployed in a social context. They are used every day in ways that intersect with history and culture, law and psychology, and so many other social forces. They are built by people, used and governed by people, and they draw on the personal data of billions and billions of people.
And yet, all too often, these tools are developed without the input or the consent of the people whose lives they will touch—and without regard to the real-world risks they could pose.
That’s why it’s so important that today, NIST is taking an approach to AI systems, AI risk management, and AI standards that places people at the center. NIST has solicited information from communities who have long been an afterthought and from those across our country who have unique expertise on these issues.
Following its Congressional mandate, NIST has gone above and beyond to be inclusive and engaged, and to lift up the diverse perspectives of industry, academia, and civil society. As a result, I know that this framework will prove an invaluable resource for the companies, developers, engineers, thinkers, policymakers, and others who are building the next AI and algorithmic tools.
I am delighted to be here with you to mark this milestone moment in U.S. leadership in AI, and we at OSTP look forward to continuing to work hand in hand to advance this important work together.
Thank you and congratulations.