The White House Office of Science and Technology Policy (OSTP) led a yearlong process to seek and distill input from people across the country – from impacted communities to industry stakeholders to technology developers to other experts across fields and sectors, as well as policymakers across the Federal government – on the issue of algorithmic and data-driven harms and potential remedies. Through panel discussions, public listening sessions, private meetings, a formal request for information, and input to a publicly accessible and widely publicized email address, people across the United States spoke up about both the promises and potential harms of these technologies, and played a central role in shaping the Blueprint for an AI Bill of Rights.

Panel Discussions to Inform the Blueprint for an AI Bill of Rights

OSTP co-hosted a series of six panel discussions in collaboration with the Center for American Progress, the Joint Center for Political and Economic Studies, New America, the German Marshall Fund, the Electronic Privacy Information Center, and the Mozilla Foundation. The purpose of these convenings – recordings of which are publicly available online[i] – was to bring together a variety of experts, practitioners, advocates and federal government officials to offer insights and analysis on the risks, harms, benefits, and policy opportunities of automated systems. Each panel discussion was organized around a wide-ranging theme, exploring current challenges and concerns and considering what an automated society that respects democratic values should look like. These discussions focused on the topics of consumer rights and protections, the criminal justice system, equal opportunities and civil justice, artificial intelligence and democratic values, social welfare and development, and the healthcare system.

Summaries of Panel Discussions:

  • Panel 1: Consumer Rights and Protections. This event explored the opportunities and challenges for individual consumers and communities in the context of a growing ecosystem of AI-enabled consumer products, advanced platforms and services, “Internet of Things” (IoT) devices, and smart city products and services.

    Moderator: Devin E. Willis, Attorney, Division of Privacy and Identity Protection, Bureau of Consumer Protection, Federal Trade Commission 

    Panelists:
  • Tamika L. Butler, Principal, Tamika L. Butler Consulting 
  • Jennifer Clark, Professor and Head of City and Regional Planning, Knowlton School of Engineering, Ohio State University 
  • Carl Holshouser, Senior Vice President for Operations and Strategic Initiatives, TechNet 
  • Surya Mattu, Senior Data Engineer and Investigative Data Journalist, The Markup 
  • Mariah Montgomery, National Campaign Director, Partnership for Working Families 

Panelists discussed the benefits of AI-enabled systems and their potential to build better and more innovative infrastructure. They individually noted that while AI technologies may be new, the process of technological diffusion is not, and that it was critical to have thoughtful and responsible development and integration of technology within communities. Some panelists suggested that the integration of technology could benefit from examining how technological diffusion has worked in the realm of urban planning: lessons learned from successes and failures there include the importance of balancing ownership rights, use rights, and community health, safety and welfare, as well as ensuring better representation of all voices, especially those traditionally marginalized by technological advances. Some panelists also raised the issue of power structures – providing examples of how strong transparency requirements in smart city projects helped to reshape power and give more voice to those lacking the financial or political power to effect change.

In discussion of technical and governance interventions that are needed to protect against the harms of these technologies, various panelists emphasized the need for transparency, data collection, and flexible and reactive policy development, analogous to how software is continuously updated and deployed. Some panelists pointed out that companies need clear guidelines to have a consistent environment for innovation, with principles and guardrails being the key to fostering responsible innovation.

  • Panel 2: The Criminal Justice System. This event explored current and emergent uses of technology in the criminal justice system and considered how they advance or undermine public safety, justice, and democratic values.

    Moderator: Chiraag Bains, Deputy Assistant to the President on Racial Justice & Equity

    Panelists:
  • Sean Malinowski, Director of Policing Innovation and Reform, University of Chicago Crime Lab
  • Kristian Lum, Researcher
  • Jumana Musa, Director, Fourth Amendment Center, National Association of Criminal Defense Lawyers
  • Stanley Andrisse, Executive Director, From Prison Cells to PhD; Assistant Professor, Howard University College of Medicine
  • Myaisha Hayes, Campaign Strategies Director, MediaJustice

Panelists discussed uses of technology within the criminal justice system, including the use of predictive policing, pretrial risk assessments, automated license plate readers, and prison communication tools. The discussion emphasized that communities deserve safety, and strategies need to be identified that lead to safety; such strategies might include data-driven approaches, but the focus on safety should be primary, and technology may or may not be part of an effective set of mechanisms to achieve safety. Various panelists raised concerns about the validity of these systems, the tendency of adverse or irrelevant data to lead to a replication of unjust outcomes, and the confirmation bias and tendency of people to defer to potentially inaccurate automated systems. Throughout, many of the panelists individually emphasized that the impact of these systems on individuals and communities is potentially severe: the systems lack individualization and work against the belief that people can change for the better, system use can lead to the loss of jobs and custody of children, and surveillance can create chilling effects for communities and send negative signals to community members about how they’re viewed.

In discussion of technical and governance interventions that are needed to protect against the harms of these technologies, various panelists emphasized that transparency is important but is not enough to achieve accountability. Some panelists discussed their individual views on additional system needs for validity, and agreed upon the importance of advisory boards and compensated community input early in the design process (before the technology is built and instituted). Various panelists also emphasized the importance of regulation that includes limits to the type and cost of such technologies.

  • Panel 3: Equal Opportunities and Civil Justice. This event explored current and emerging uses of technology that impact equity of opportunity in employment, education, and housing.

    Moderator: Jenny Yang, Director, Office of Federal Contract Compliance Programs, Department of Labor

    Panelists:
  • Christo Wilson, Associate Professor of Computer Science, Northeastern University
  • Frida Polli, CEO, Pymetrics
  • Karen Levy, Assistant Professor, Department of Information Science, Cornell University
  • Natasha Duarte, Project Director, Upturn
  • Elana Zeide, Assistant Professor, University of Nebraska College of Law
  • Fabian Rogers, Constituent Advocate, Office of NY State Senator Jabari Brisport and Community Advocate and Floor Captain, Atlantic Plaza Towers Tenants Association

The individual panelists described the ways in which AI systems and other technologies are increasingly being used to limit access to equal opportunities in education, housing, and employment. Concerning uses in education included the increased use of remote proctoring systems, student location and facial recognition tracking, teacher evaluation systems, robot teachers, and more. Concerning uses in housing included automated tenant background screening and facial recognition-based controls to enter or exit housing complexes. Concerning uses in employment included discrimination in automated hiring screening and workplace surveillance. Various panelists raised the limitations of existing privacy law as a key concern, pointing out that students should be able to reinvent themselves and require privacy of their student records and education-related data in order to do so. Overarching concerns about surveillance in these domains included the chilling effects of surveillance on student expression, inappropriate control of tenants via surveillance, and the way that surveillance of workers blurs the boundary between work and life and exerts extreme and potentially damaging control over workers’ lives. Additionally, some panelists pointed out ways that data from one situation was misapplied in another in a way that limited people’s opportunities, for example data from criminal justice settings or previous evictions being used to block further access to housing. Throughout, various panelists emphasized that these technologies are being used to shift the burden of oversight and efficiency from employers to workers, schools to students, and landlords to tenants, in ways that diminish and encroach on equality of opportunity; assessment of these technologies should include whether they are genuinely helpful in solving an identified problem.

In discussion of technical and governance interventions that are needed to protect against the harms of these technologies, panelists individually described the importance of: community input into the design and use of technologies; public reporting on crucial elements of these systems; better notice and consent procedures that ensure privacy based on context and use case; the ability to opt out of these systems and fall back to a human process; explanations of decisions and of how these systems work; governance, including training in the use of these systems; ensuring that technological use cases are genuinely related to the goal task and are locally validated to work; and the institution and protection of third-party audits to ensure that systems remain accountable and valid.

  • Panel 4: Artificial Intelligence and Democratic Values. This event examined challenges and opportunities in the design of technology that can help support a democratic vision for AI. It included discussion of the technical aspects of designing non-discriminatory technology, explainable AI, human-computer interaction with an emphasis on community participation, and privacy-aware design.

Moderator: Kathy Pham, Deputy Chief Technology Officer for Product and Engineering, U.S. Federal Trade Commission

Panelists:

  • Liz O’Sullivan, CEO, Parity AI
  • Timnit Gebru, Independent Scholar
  • Jennifer Wortman Vaughan, Senior Principal Researcher, Microsoft Research, New York City
  • Pamela Wisniewski, Associate Professor of Computer Science, University of Central Florida; Director, Sociotechnical Interaction Research (STIR) Lab
  • Seny Kamara, Associate Professor of Computer Science, Brown University

Each panelist individually emphasized the risks of using AI in high-stakes settings, including the potential for biased data and discriminatory outcomes, opaque decision-making processes, and lack of public trust and understanding of the algorithmic systems.  The interventions and key needs various panelists put forward as necessary to the future design of critical AI systems included ongoing transparency, value sensitive and participatory design, explanations designed for relevant stakeholders, and public consultation.  Various panelists emphasized the importance of placing trust in people, not technologies, and in engaging with impacted communities to understand the potential harms of technologies and build protection by design into future systems.

  • Panel 5: Social Welfare and Development. This event explored current and emerging uses of technology to implement or improve social welfare systems, social development programs, and other systems that can impact life chances.

Moderator: Michele Evermore, Deputy Director for Policy, Office of Unemployment Insurance Modernization, Office of the Secretary, Department of Labor

Panelists:

  • Blake Hall, CEO and Founder, ID.me
  • Karrie Karahalios, Professor of Computer Science, University of Illinois, Urbana-Champaign
  • Christiaan van Veen, Director of Digital Welfare State and Human Rights Project, NYU School of Law’s Center for Human Rights and Global Justice
  • Julia Simon-Mishel, Supervising Attorney, Philadelphia Legal Assistance
  • Dr. Zachary Mahafza, Research & Data Analyst, Southern Poverty Law Center
  • J. Khadijah Abdurahman, Tech Impact Network Research Fellow, AI Now Institute, UCLA C2I1, and UWA Law School

Panelists separately described the increasing scope of technology use in providing for social welfare, including in fraud detection, digital ID systems, and other methods focused on improving efficiency and reducing cost. However, various panelists individually cautioned that these systems may reduce burden for government agencies only by shifting that burden onto the people who use and interact with these technologies. Additionally, these systems can produce feedback loops and compounded harm, collecting data from communities and using it to reinforce inequality. Various panelists suggested that these harms could be mitigated by ensuring community input at the beginning of the design process, providing ways to opt out of these systems and use associated human-driven mechanisms instead, ensuring timeliness of benefit payments, and providing clear notice about the use of these systems and clear explanations of how and what the technologies are doing. Some panelists suggested that technology should be used to help people receive benefits, e.g., by pushing benefits to those in need and ensuring automated decision-making systems are only used to provide a positive outcome; technology shouldn’t be used to take supports away from people who need them.

  • Panel 6: The Healthcare System. This event explored current and emerging uses of technology in the healthcare system and consumer products related to health.

Moderator: Micky Tripathi, National Coordinator for Health Information Technology, U.S. Department of Health and Human Services

Panelists:

  • Mark Schneider, Health Innovation Advisor, ChristianaCare
  • Ziad Obermeyer, Blue Cross of California Distinguished Associate Professor of Policy and Management, University of California, Berkeley School of Public Health
  • Dorothy Roberts, George A. Weiss University Professor of Law and Sociology and the Raymond Pace and Sadie Tanner Mossell Alexander Professor of Civil Rights, University of Pennsylvania
  • David Jones, A. Bernard Ackerman Professor of the Culture of Medicine, Harvard University
  • Jamila Michener, Associate Professor of Government, Cornell University; Co-Director, Cornell Center for Health Equity

Panelists discussed the impact of new technologies on health disparities; healthcare access, delivery, and outcomes; and areas ripe for research and policymaking. Panelists discussed the increasing importance of technology as both a vehicle to deliver healthcare and a tool to enhance the quality of care. On the issue of delivery, various panelists pointed to a number of concerns, including access to and the expense of broadband service, the privacy concerns associated with telehealth systems, the expense associated with health monitoring devices, and how these issues can exacerbate existing inequities. On the issue of technology-enhanced care, some panelists spoke extensively about the way in which racial biases and the use of race in medicine perpetuate harms and embed prior discrimination, and about the importance of ensuring that the technologies used in medical care are accountable to the relevant stakeholders. Various panelists emphasized the importance of having the voices of those subjected to these technologies be heard.

Summary of Additional Engagements

  • OSTP created an email address (ai-equity@ostp.eop.gov) to solicit comments from the public on the use of artificial intelligence and other data-driven technologies in their lives.
  • OSTP issued a Request for Information (RFI) on the use and governance of biometric technologies.[ii] The purpose of this RFI was to understand the extent and variety of biometric technologies in past, current, or planned use; the domains in which these technologies are being used; the entities making use of them; current principles, practices, or policies governing their use; and the stakeholders that are, or may be, impacted by their use or regulation. The 130 responses to this RFI are available in full online[iii] and were submitted by the organizations and individuals listed below:
    • Accenture
    • Access Now
    • ACT | The App Association
    • AHIP
    • AIethicist.org
    • Airlines for America
    • Alliance for Automotive Innovation
    • Amelia Winger-Bearskin
    • American Civil Liberties Union
    • American Civil Liberties Union of Massachusetts
    • American Medical Association
    • ARTICLE19
    • Attorneys General of the District of Columbia, Illinois, Maryland, Michigan, Minnesota, New York, North Carolina, Oregon, Vermont, and Washington
    • Avanade
    • Aware
    • Barbara Evans
    • Better Identity Coalition
    • Bipartisan Policy Center
    • Brandon L. Garrett and Cynthia Rudin
    • Brian Krupp
    • Brooklyn Defender Services
    • BSA | The Software Alliance
    • Carnegie Mellon University
    • Center for Democracy & Technology
    • Center for New Democratic Processes
    • Center for Research and Education on Accessible Technology and Experiences at University of Washington, Devva Kasnitz, L Jean Camp, Jonathan Lazar, Harry Hochheiser
    • Center on Privacy & Technology at Georgetown Law
    • Cisco Systems
    • City of Portland Smart City PDX Program
    • CLEAR
    • Clearview AI
    • Cognoa
    • Color of Change
    • Common Sense Media
    • Computing Community Consortium at Computing Research Association
    • Connected Health Initiative
    • Consumer Technology Association
    • Courtney Radsch
    • Coworker
    • Cyber Farm Labs
    • Data & Society Research Institute
    • Data for Black Lives
    • Data to Actionable Knowledge Lab at Harvard University
    • Deloitte
    • Dev Technology Group
    • Digital Therapeutics Alliance
    • Digital Welfare State & Human Rights Project and Center for Human Rights and Global Justice at New York University School of Law, and Temple University Institute for Law, Innovation & Technology
    • Dignari
    • Douglas Goddard
    • Edgar Dworsky
    • Electronic Frontier Foundation
    • Electronic Privacy Information Center, Center for Digital Democracy, and Consumer Federation of America
    • FaceTec
    • Fight for the Future
    • Ganesh Mani
    • Georgia Tech Research Institute
    • Google
    • Health Information Technology Research and Development Interagency Working Group
    • HireVue
    • HR Policy Association
    • ID.me
    • Identity and Data Sciences Laboratory at Science Applications International Corporation
    • Information Technology and Innovation Foundation
    • Information Technology Industry Council
    • Innocence Project
    • Institute for Human-Centered Artificial Intelligence at Stanford University
    • Integrated Justice Information Systems Institute
    • International Association of Chiefs of Police
    • International Biometrics + Identity Association
    • International Business Machines Corporation
    • International Committee of the Red Cross
    • Inventionphysics
    • iProov
    • Jacob Boudreau
    • Jennifer K. Wagner, Dan Berger, Margaret Hu, and Sara Katsanis
    • Jonathan Barry-Blocker
    • Joseph Turow
    • Joy Buolamwini
    • Joy Mack
    • Karen Bureau
    • Lamont Gholston
    • Lawyers’ Committee for Civil Rights Under Law
    • Lisa Feldman Barrett
    • Madeline Owens
    • Marsha Tudor
    • Microsoft Corporation
    • MITRE Corporation
    • National Association for the Advancement of Colored People Legal Defense and Educational Fund
    • National Association of Criminal Defense Lawyers
    • National Center for Missing & Exploited Children
    • National Fair Housing Alliance
    • National Immigration Law Center
    • NEC Corporation of America
    • New America’s Open Technology Institute
    • New York Civil Liberties Union
    • No Name Provided
    • Notre Dame Technology Ethics Center
    • Office of the Ohio Public Defender
    • Onfido
    • Oosto
    • Orissa Rose
    • Palantir
    • Pangiam
    • Parity Technologies
    • Patrick A. Stewart, Jeffrey K. Mullins, and Thomas J. Greitens
    • Pel Abbott
    • Philadelphia Unemployment Project
    • Project On Government Oversight
    • Recording Industry Association of America
    • Robert Wilkens
    • Ron Hedges
    • Science, Technology, and Public Policy Program at University of Michigan Ann Arbor
    • Security Industry Association
    • Sheila Dean
    • Software & Information Industry Association
    • Stephanie Dinkins and the Future Histories Studio at Stony Brook University
    • TechNet
    • The Alliance for Media Arts and Culture, MIT Open Documentary Lab and Co-Creation Studio, and Immerse
    • The International Brotherhood of Teamsters
    • The Leadership Conference on Civil and Human Rights
    • Thorn
    • U.S. Chamber of Commerce’s Technology Engagement Center
    • Uber Technologies
    • University of Pittsburgh Undergraduate Student Collaborative
    • Upturn
    • US Technology Policy Committee of the Association for Computing Machinery
    • Virginia Puccio
    • Visar Berisha and Julie Liss
    • XR Association
    • XR Safety Initiative
  • As an additional effort to reach out to stakeholders regarding the RFI, OSTP conducted two listening sessions for members of the public. The listening sessions together drew upwards of 300 participants. The Science and Technology Policy Institute produced a synopsis of both the RFI submissions and the feedback at the listening sessions.[iv]
  • OSTP conducted meetings with a variety of stakeholders in the private sector and civil society. Some of these meetings were specifically focused on providing ideas related to the development of the Blueprint for an AI Bill of Rights while others provided useful general context on the positive use cases, potential harms, and/or oversight possibilities for these technologies. Participants in these conversations from the private sector and civil society included:
    • Adobe
    • American Civil Liberties Union (ACLU)
    • The Aspen Commission on Information Disorder
    • The Awood Center
    • The Australian Human Rights Commission
    • Biometrics Institute
    • The Brookings Institution
    • BSA | The Software Alliance
    • Cantellus Group
    • Center for American Progress
    • Center for Democracy and Technology
    • Center on Privacy and Technology at Georgetown Law
    • Christiana Care
    • Color of Change
    • Coworker
    • Data Robot
    • Data Trust Alliance
    • Data and Society Research Institute
    • Deepmind
    • EdSAFE AI Alliance
    • Electronic Privacy Information Center (EPIC)
    • Encode Justice
    • Equal AI
    • Google
    • Hitachi’s AI Policy Committee
    • The Innocence Project
    • Institute of Electrical and Electronics Engineers (IEEE)
    • Intuit
    • Lawyers’ Committee for Civil Rights Under Law
    • Legal Aid Society
    • The Leadership Conference on Civil and Human Rights
    • Meta
    • Microsoft
    • The MIT AI Policy Forum
    • Movement Alliance Project
    • The National Association of Criminal Defense Lawyers
    • O’Neil Risk Consulting & Algorithmic Auditing
    • The Partnership on AI
    • Pinterest
    • The Plaintext Group
    • pymetrics
    • SAP
    • The Security Industry Association
    • Software and Information Industry Association (SIIA)
    • Special Competitive Studies Project
    • Thorn
    • United for Respect
    • University of California at Berkeley CITRIS Policy Lab
    • University of California at Berkeley Labor Center
    • Unfinished/Project Liberty
    • Upturn
    • US Chamber of Commerce
    • US Chamber of Commerce Technology Engagement Center A.I. Working Group
    • Vibrent Health
    • Warehouse Worker Resource Center
    • Waymap

[i] White House Office of Science and Technology Policy. Join the Effort to Create A Bill of Rights for an Automated Society. Nov. 10, 2021. https://www.whitehouse.gov/ostp/news-updates/2021/11/10/join-the-effort-to-create-a-bill-of-rights-for-an-automated-society/

[ii] White House Office of Science and Technology Policy. Notice of Request for Information (RFI) on Public and Private Sector Uses of Biometric Technologies. Issued Oct. 8, 2021. https://www.federalregister.gov/documents/2021/10/08/2021-21975/notice-of-request-for-information-rfi-on-public-and-private-sector-uses-of-biometric-technologies

[iii] National Artificial Intelligence Initiative Office. Public Input on Public and Private Sector Uses of Biometric Technologies. Accessed Apr. 19, 2022. https://www.ai.gov/86-fr-56300-responses/

[iv] Thomas D. Olszewski, Lisa M. Van Pay, Javier F. Ortiz, Sarah E. Swiersz, and Laurie A. Dacus. Synopsis of Responses to OSTP’s Request for Information on the Use and Governance of Biometric Technologies in the Public and Private Sectors. Science and Technology Policy Institute. Mar. 2022. https://www.ida.org/-/media/feature/publications/s/sy/synopsis-of-responses-to-request-for-information-on-the-use-and-governance-of-biometric-technologies/ida-document-d-33070.ashx
