As Prepared for Delivery in Washington, D.C.


Thank you, Elham [Tabassi], for that warm introduction. And thank you to NIST, and its home agency, the Commerce Department — where we at the White House Office of Science and Technology Policy, OSTP, have great partners in Secretary Raimondo and Deputy Secretary Graves. So thank you very much for inviting me.

Congratulations to Elham, to you and your team at the Information Technology Laboratory — and all of you who’ve supported this effort — on this milestone achievement. I want to commend you for developing such a remarkable draft AI Risk Management Framework. And let me be the first to encourage everyone to get your comments in before the deadline, which is one month from today.

The process used to develop this framework, and the AI Risk Management Framework itself, together represent a historic moment for this 120-year-old institution, known today as the National Institute of Standards and Technology.

When the National Bureau of Standards was born over a century ago, its creators could hardly have imagined the technologies we’re using and discussing today. They were busy determining the proper standards to measure basic things, like length and mass, temperature and time, light and electricity — the latter a nascent but growing industry at the time.

I know they would be proud of how NIST has stayed true to its Constitutional mandate — to “fix the standard of weights and measures,” as it says in Article I, Section 8.

But I believe they would also take pride in your leadership, as we’ve collectively recognized that this is a new and different moment for standards and measurement — a moment that compels us to think about this work in new and different ways.

Today, machines don’t just measure the time and temperature. We have engineered them to do the complex decision-making that humans used to do, decision-making that determines access to benefits in our society, or the denial of rights.

We no longer just demand equity from each other. We demand it from our machines and in how we collect and deploy the kinds of data this agency was formed to govern.

For example, the Biden-Harris Administration has prioritized measurement for equity assessment, as exemplified in the work of the Equitable Data Working Group, which was established as part of the Day 1 Executive Order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government.

I have been honored to co-chair this effort over the last year, and, in so doing, to think and work with others across government and civil society about how data and measurement can account for the people who are involved, and can benefit them.

Thinking in new and different ways about measurement, as the AI Risk Management Framework does, means not only employing best-in-class technology, but also attending to how these methods and approaches might be used, and taking on the challenge of applying standards to dynamic systems.

This means introducing new concepts like “socio-technical” issues into the work of setting standards and measures.

This means developing an approach that mitigates bias in artificial intelligence.

It means recognizing, as you have, that after the AI turn — that is, after AI became a dominant part of technology deployment — “standards” are not purely technical or statistical.

It’s not just about the data and the algorithms.

It’s also about the human and societal factors — how AI systems are used by people in the real world.

And how their development and use can reflect and amplify biases that are personal, societal, and historical — biases that can be mitigated only if attention is paid to design parameters and use cases.

As you all know, the Risk Management Framework doesn’t limit itself to risk factors based on traditional measures of efficacy in AI systems — it doesn’t just take up the crucial issue of unrepresentative datasets.

The Framework also acknowledges that sociotechnical characteristics of a system — which are, as it says, “inextricably tied to human social and organizational behavior” — are equally important in evaluating the overall risk of a system. And that these characteristics involve human judgment that cannot be reduced to a single threshold or metric.

Moreover, the Framework also recognizes that the problem of bias in AI is multifaceted. Indeed, this challenge involves human, systemic, and computational aspects that are so complex that NIST devoted an entirely separate document to synthesizing the extensive scholarship on this topic, to serve as a basis for the Framework’s bias mitigation.

I know how big an innovation this is for NIST — and I cannot overstate how important this is for the country.

The sociotechnical standards you set will be followed by the U.S. government and the private sector. They will shape the future of the world.

And for cross-cutting technologies like AI — which are increasingly impacting so many aspects of our lives — that could not matter more.

As you know, managing the risks posed by artificial intelligence and other automated technologies is a high priority for us at OSTP.

Starting with an op-ed in Wired last October, we’ve embarked on a policy planning process — which we conceived as a Bill of Rights for an Automated Society — to ensure technology is developed in ways that promote equity, privacy, and individual liberty.

One thing I’m particularly proud of is how we’ve focused on technology’s impact on people — and how and where this impact happens — and not just on the technology and its design.

Because, while the design of technology matters, so does where, how, by whom, and about whom the technology is used.

Since we began our process last fall:

We’ve held public panels with experts — from civil society, the private sector, the academy, community groups, and the research community — to learn more about the impacts of automated systems and how reasonable guardrails can be implemented.

We’ve gotten written feedback from a variety of groups about biometrics and conducted open listening sessions with the public to learn more.

We’ve met with stakeholders from industry, civil society, and government. We’ve met with pioneering researchers in the field of artificial intelligence, who provided important expertise and experience, and with high school students who offered remarkably mature and nuanced opinions about how automated systems should be used in their lives.

Throughout this public engagement phase, we’ve put a priority on bringing to the table people and communities who are rarely in the room where devices are programmed, algorithms are written, and vast amounts of data are collected — but who live with the consequences nonetheless.

We’ve heard from a wide array of groups. Yet, what we’ve been told is strikingly consistent.

Time and again, we are hearing about people who have fallen victim to discrimination at the hands of artificial intelligence and other automated technologies:

Immigrants and caregivers who have been automatically sorted out of job applicant pools — with no way to seek recourse.

Story after story of Black men arrested and jailed after being misidentified by facial recognition tools.

People of color and working-class folks whose applications to rent apartments or get a home mortgage have languished in cyberspace — or been rejected without any human consideration.

On a related note, just last week Vice President Harris joined the Secretary of Housing and Urban Development, Marcia Fudge, to announce an interagency task force on property appraisal and valuation equity, to help address, among other issues, the problem of algorithmic bias in home valuations and appraisals.

As we’ve seen all too often, injustices can be baked into these technologies, by negligence — or even by design.

As a reaction to these harms — and this is consistent with the Risk Management Framework’s guiding principles of “fairness, accountability, and transparency” — we heard loud and clear that people want guardrails built into automated systems to protect their rights. And that’s what we are focused on now.

Many of the guardrails we need are already best practices within software development, even if they aren’t always followed: Things like making sure a product’s design includes testing by people who will be using the software, and that a model is tested with data that resembles how it will be used. And making sure a product is monitored after it’s deployed, with associated organizational processes to ensure ongoing oversight — as the Risk Management Framework highlights.

There are also newer practices that haven’t yet been consistently incorporated as protections for these technologies, such as:

  • more rigorous testing to protect people from discriminatory impacts of technologies;
  • ensuring meaningful human oversight over life-impacting decisions in the criminal legal system;
  • making sure to consult with the public before introducing new technologies into communities;
  • and for critical or high-risk technologies like certain uses of AI, making sure there’s always a human fallback — an easy way to get help from a person, the equivalent of pressing zero on your phone to reach an operator.

Throughout this work, we’ve been squarely focused on protecting people from negative impacts of technologies.

If a technology meaningfully impacts people’s fundamental rights, opportunities, or access to vital needs, these guardrails should be in place.

Let’s envision a world where we can expect these systems to be safe, not harmful.

A world where the use of algorithms does not place the American public at risk.

A world where our privacy is respected in accordance with our wishes.

A world where we know if an automated system is being used, and why it comes to its conclusions.

A world where we will be treated as people, and be able to deal with a person if need be.

To make that vision real, it’s going to take all of us working together, bridging the social and technical, linking policy to action.

So over the next three days, I’m calling on each and every one of you to focus on the hard but vitally necessary work of transforming principles into practices.

Because, let’s be honest: in the space of tech policy we’ve been talking about AI principles for years now.

It’s time to start putting them into action.

This will require levers like the AI Risk Management Framework — and its great unfinished “Part 3,” the Practice Guide, which will make this framework easier to apply.

To fully realize this vision, we know the work ahead will require considering and using all types of tools: from agency actions and federal procurement requirements, to best practices, voluntary industry actions, and yes, broadly adopted standards, from NIST and others.

But I’m glad to say that none of us are in this alone.

When Congress created the Office of Science and Technology Policy almost 50 years ago, they wrote some prescient words into our founding statute, which reads: “while maximizing the beneficial consequences of technology, the Government should act to minimize foreseeable injurious consequences.”

That’s what this work is all about.

I know it is often arduous — but it also gets to the heart of who we are as a nation.

If I leave you with one thought today, I would want it to be that we can maximize the beneficial — not just minimize the injurious — as we develop the specific practices that will govern the consequences of the wondrous technology we have invented.

From its founding, America has been a work in progress — aspiring to values, recognizing that this aim leaves some people behind, and working to fix that.

It’s also been a collective endeavor — a process we the people undertake together.

That’s what motivates this work — and OSTP’s mission — and it’s what you’ll be doing over the next three days.

I’m so grateful, to each of you, that you’re here to do so. Thank you.

###
