As prepared for delivery in Chicago, Illinois


Thank you, Jumana [Musa], for that warm introduction. It’s great to be in Chicago.

And thank you to the NACDL, and the UC Berkeley Samuelson Law, Technology, and Public Policy Clinic, for prioritizing this important conversation.

The issues you’re discussing this week are so important and salient.

Automated technologies that barely existed a decade ago are now widely used throughout the United States — including, as you know, in the criminal justice system.

And all of us need to be better prepared to navigate this increasingly automated world.

I also want to thank the technology and legal experts here with us. Your work studying, publicizing, and dismantling discrimination in automated systems is critical to mitigating the risks posed by artificial intelligence and other data-driven technologies. Thank you all for sharing your knowledge.

Finally, I want to commend all of you: the criminal defense lawyers who have taken the time to attend this seminar. It cannot be said often enough that justice depends on you. And I’m grateful that you’re here today.

As you just heard, until recently I spent my life in the academy: studying the intersection of science, technology, and our social fabric — investigating how science shapes the world we inhabit and how societal context shapes science and technology — in the past, and in the present, and for the future.

That work, those experiences, have forged the perspective that I bring to public service: the conviction that science and technology must be a tool to make lives better, safer, and fairer. That innovation must open the door to a better future for all people.

It was because of these values that I came to the White House Office of Science and Technology Policy, or OSTP — to lead its first division of science and society, and to uplift civil rights and democratic values as we work to ensure that science and technology benefit everyone.

These priorities guide me and the work of OSTP, as we pursue our mission of maximizing the benefits of science and technology to advance health, prosperity, security, environmental quality, and justice for all.

And over the past 16 months, as I’ve served in the halls of government, I’ve witnessed these values come to life.

I’ve seen that democracy and justice, like science and research, are a process — never quite realized, never quite finished, but always striving to perfect themselves — pulling us ever closer to the ideals we aspire to in the future.

We, at OSTP, are a team of futurists.

The Office of Science and Technology Policy, by mandate, is a future-facing office — tasked by Congress almost 50 years ago, with “maximizing the beneficial consequences of technology” while also acting “to minimize the foreseeable injurious consequences.”

And over the last year and a half, President Biden has charged us with harnessing the power and possibilities of science and technology — not just for today, but for tomorrow — and for the decades to come.

That future orientation is what makes our office different from any other at the White House.

It’s why, at President Biden’s direction, we’re working on ending cancer as we know it — why we’re laying the foundation for a society that runs on clean, renewable energy — and why we’re preparing for the next pandemic.

Our job is to meet the challenges of today, while reimagining what our nation can and should look like in the future — to reimagine how we tackle challenges wrought by disease, a changing climate, and powerful new technologies that human beings invented but don’t yet fully understand — challenges that are, at times, so urgent and so dire that the status quo simply cannot stand.

I’m here today to talk to you about this challenge: Data that doesn’t accurately represent us. Algorithms that are unchecked. And artificial intelligence that violates our social bonds.

This challenge — which affects all of us, all communities, every day — carries the same urgency, the same relevance, as the other challenges we work on at OSTP. And that’s why it’s a top priority for us.

I’m here to affirm what all of us in this room know too well: that without a fundamental transformation in our relationship with these automated technologies, we the people will never live up to our highest ideals — and never achieve equal opportunity, equal access, or the promise to which we’ve long aspired: equal justice under law.

To achieve the fundamental transformation we need, we have to start with being explicit about the society that we want to create for ourselves and our posterity:

A society that is safe and equitable, innovative and just. A nation that promotes its highest ideals of individual liberty, privacy, fairness, accountability, and transparency.

Science and technology can help us build that future — but they can also stand in the way.

We have powerful technologies, from artificial intelligence to genomics — tools that can help us answer important questions about who we are and where we come from. Tools that are right at our fingertips.

It is our collective responsibility to understand how this technology works.

As you’ve heard today from Cathy O’Neil, Rebecca Wexler, and other experts, we need to open the black box, and demystify what happens between system design and system use.

And we need to listen to the people whose lives are shaped every day by these new technologies.

Over the last nine months, OSTP sought input from people across the country about the promises and potential harms of automated technologies.

We received written feedback about biometrics from a variety of groups — including multiple legal defense groups, like NACDL — and we conducted open online listening sessions with the public.

We heard from pioneering AI researchers, who provided important expertise and perspective.

We met with activists and advocates, who care deeply about making our society more fair and just.

We connected with thoughtful high school students, who shared nuanced opinions about how facial recognition technologies should and shouldn’t be used in their lives.

All of our engagements elucidated how AI and other data-driven technologies are reshaping nearly every part of our society, transforming the world around us in ways both obvious and not.

The ways AI now drives important decisions across sectors like transportation and agriculture.

The power of data-gathering tools to unlock amazing advancements — like computers that learn to translate languages, algorithms that identify the likelihood of cancers in patients, and even some legitimate uses by law enforcement.

At the same time, we’ve also seen the other side of these tools:

The ways these technologies have caused significant harms — breaches of the American public’s rights, opportunities, and access to critical needs or services.

This problem has manifested across many contexts, affecting people throughout society.

Some automated systems have turned out to be unsafe or ineffective.

Sometimes, data is abused to limit people’s opportunities or track their activity.

In many cases, algorithms are plagued by bias — unchecked and unregulated — reinforcing society’s inequities.

Too often, these technologies are developed without sufficient regard for real-world consequences, and without the input of the people who will have to live with the results.

And even as these technologies influence important decisions in our everyday lives, people often don’t realize that automated systems are being used — let alone that they might be causing harm.

As we all know here, the criminal justice system is fraught with these problems.

You contend with these challenges regularly, and they are well-documented:

Here in Chicago, an algorithm reused previous arrest data to repeatedly send police to certain neighborhoods — often predominantly Black and Brown neighborhoods — even when those neighborhoods didn’t have the highest crime rates.

In Detroit, a public housing authority installed a facial recognition system at the entrances of housing complexes — intending to assist law enforcement, but leading to continuous surveillance without the community’s consent.

In New Orleans, a predictive policing system claimed to identify potential aggressors — but with no explanation of how the system reached its conclusions, leading to wrongful arrests.

Each of these instances weighs heavily on our hearts — and even more heavily on the people, the families, and the communities who suffer as a result.

Disproportionately, those who experience the worst effects are Black people, Brown people, low-income folks. And these injustices aren’t just statistics — they are true stories of people’s real-life experiences.

These kinds of accounts have been shared with us time and time again over the course of the last nine months.

It is urgent, powerful testimony that affirms beyond all reasonable doubt what the research has shown us for years: that injustices are being baked into these technologies every day — by negligence, or even by design — reproducing society’s deepest and oldest inequities.

We’ve heard it loud and clear, from people impacted by the criminal justice system to high school students to technologists at American companies: people want technologies that reinforce our highest values, not undermine them. People want guardrails built into automated systems to protect their rights.

Guardrails like: more rigorous system testing to protect people from the discriminatory impacts of technologies; meaningful human oversight of life-impacting decisions; consultation with the public before new technologies are introduced into communities; and, for the criminal justice system, mechanisms ensuring that an automated system will never subject people to conditions that are different, less favorable, or more severe than what the law prescribes.

At the White House, we’re paying close attention to the algorithmic injustices that you see in your work every day.

We need a set of principles, and a set of practices — a framework that doesn’t distinguish between civil rights and technical specifications, but that recognizes they go hand-in-hand.

We need, in this era of algorithms, guidelines and expectations demanding that technology be rooted in justice, and that innovation happen without predation.

For example:

We should expect these systems to be safe and effective, not harmful and flawed.

We should expect to be protected from algorithmic discrimination, such that the use of automated systems does not place the American public at risk.

We should expect our privacy — and access to our data — to be respected in accordance with our wishes.

We should expect to know if an automated system is being used, and to know why it comes to its conclusions — to understand what’s going on inside that black box.

And, we should expect to be treated as people — and to receive an explanation or help from a person if need be.

These are basic, common-sense expectations. And they are things that we can all agree the American public deserves.

Don’t believe anyone who says this is too much to expect, too much to demand — because we can point to pioneering examples where guardrails have guided the development of automated systems, with safety and privacy in mind.

For example, Pennsylvania’s “Clean Slate” law created an automated computer process to seal some low-level criminal records, so those records don’t inhibit people from getting jobs or renting apartments.

And here in Illinois, the state’s Biometric Information Privacy Act has proven effective in holding tech companies to account.

We envision a future where examples like these are far more common than stories about automated systems that have violated people’s rights. A future that reflects our values.

We can draw the roadmap to get there. And since last October, the team at OSTP has been doing just that: developing an AI Bill of Rights.

We have done so, inspired by the original, two-century-old Bill of Rights — which you all know well — and its enumeration of rights, guarantees, and freedoms designed to protect the American public against the powerful government that our nation’s founders had just created.

Throughout America’s history, we’ve had to periodically reinterpret, reaffirm, renew, and expand these rights. And it’s clear that, in the 21st century, in this age of artificial intelligence, we need a “bill of rights” to guard against these powerful technologies we’ve recently created — to ensure that we design, develop, and use automated systems in ways that promote, reflect, and respect our democratic values.

I’m proud to say that we’ve not only focused on technology and its design — because that’s not all that matters. It also matters where automated systems are used. It matters how automated systems are used. And it matters who is monitoring — and who is being monitored by — the automated systems too.

As I mentioned earlier, we’ve been developing this AI Bill of Rights through extensive consultation with the American public.

We intend for it to serve as a point of progress, and a path to the future — to help prevent the baking of human bias into automated systems, and to ensure that technologies designed, made, and used by people are accountable to people.

The White House Office of Science and Technology Policy will soon release this comprehensive framework for protecting people’s rights and freedoms in the age of AI.

It will be a blueprint for building technologies that are rooted in democratic values and that protect civil rights — laying out standards for these technologies, and providing concrete steps that designers, developers, governments, and communities can take to actualize them.

Once we release this AI Bill of Rights, it’s going to take all of us, working together, to realize its vision.

As criminal defense attorneys, you play a key role in helping to move society closer to justice. And make no mistake: we’re going to need your help.

So what might you do when automated systems show up in your cases? For instance:

One might request that the technology in question be tested in front of a jury, or subject to community consultation before being adopted.

One could ask if the software was investigated for bias or any other harms.

One could find an expert to publicly explain the design flaws in the automated system — or, if the expert can’t access the system, to explain all the flaws that might be lurking inside, if only someone would open the black box.

Whatever strategy you pursue, you won’t be doing so alone.

From its founding, America has been a work in progress — aspiring to values, recognizing that this aim leaves some people behind, and working to fix that.

It’s also been a collective endeavor — a process we the people undertake together through electoral politics, through collective action and advocacy, through the courts, and through reaffirming our foundational ideals.

That’s what motivates the AI Bill of Rights, and our mission at the White House Office of Science and Technology Policy. And it’s what criminal defense lawyers like you help our country do every day. I’m so grateful, to each of you, that you do.

Thank you very much.

###
