Today, the Biden-Harris Administration’s Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights to help guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public. President Biden is standing up to special interests and has long said it is time to hold big technology companies accountable for the harms they cause and to ensure the American public is protected in an increasingly automated world. The framework builds on the Biden-Harris Administration’s work to hold big technology accountable, protect the civil rights of Americans, and ensure technology is working for the American people.
Automated technologies are increasingly used to make everyday decisions affecting people’s rights, opportunities, and access in everything from hiring and housing to healthcare, education, and financial services. While these technologies can drive great innovations, like enabling early cancer detection or helping farmers grow food more efficiently, studies have shown how AI can present opportunities unequally or embed bias and discrimination in decision-making processes. As a result, automated systems can replicate or deepen inequalities already present in society, underscoring the need for greater transparency, accountability, and privacy.
The Blueprint for an AI Bill of Rights addresses these urgent challenges by laying out five core protections to which everyone in America should be entitled:
- Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
- Algorithmic Discrimination Protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
- Data Privacy: You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
- Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
- Human Alternatives, Consideration, and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
Developed through extensive consultation with the American public, stakeholders, and U.S. government agencies, the Blueprint also includes concrete steps that governments, companies, communities, and others can take to build these key protections into policy, practice, or technological design to ensure automated systems work for the American people.
“Automated technologies are driving remarkable innovations and shaping important decisions that impact people’s rights, opportunities, and access. The Blueprint for an AI Bill of Rights is for everyone who interacts daily with these powerful technologies — and every person whose life has been altered by unaccountable algorithms,” said Office of Science and Technology Policy Deputy Director for Science and Society Dr. Alondra Nelson. “The practices laid out in the Blueprint for an AI Bill of Rights aren’t just aspirational; they are achievable and urgently necessary to build technologies and a society that works for all of us.”
“The Blueprint for an AI Bill of Rights and federal actions we are announcing today deliver on the President’s day one promise to support policies that advance equity and economic opportunity for the American people,” said White House Domestic Policy Advisor Susan Rice. “Taken together, these actions will help tackle algorithmic discrimination and address the harms of automated systems on underserved communities.”
Biases in automated systems span sectors and can threaten the rights of the American public. In recent years, these tools have been used to surveil workers in the workplace, in some cases restricting their ability to organize; monitor and falsely accuse students of cheating; wrongfully deny benefits to older Americans in need of health care; and arrest people for crimes they did not commit. Investigations have repeatedly found that big technology platforms, companies, and developers are deploying discriminatory algorithms and harming the public.
Today, the Biden-Harris Administration is also announcing actions across the Federal government that advance the Blueprint by protecting and supporting the American people—workers and employers, educators and students, patients and health care providers, veterans, renters and homeowners, technologists, families, and communities:
- To protect workers’ rights, the Department of Labor has released “What the Blueprint for an AI Bill of Rights Means for Workers” and is ramping up enforcement of required surveillance reporting to protect worker organizing.
- To protect workers with disabilities, the Equal Employment Opportunity Commission (EEOC) and the Department of Justice released antidiscrimination technical assistance and guidance on the Americans with Disabilities Act (ADA) and employment algorithms in May 2022, and the Partnership on Employment & Accessible Technology, funded by the Department of Labor, has released the AI & Disability Inclusion Toolkit and the Equitable AI Playbook.
- To promote equal employment opportunity, the EEOC and the Department of Labor have launched a multi-year collaborative effort to reimagine hiring and recruitment practices, including in the use of automated systems.
- To protect consumers, the Federal Trade Commission (FTC) is exploring rules to curb commercial surveillance, algorithmic discrimination, and lax data security practices that could violate section 5 of the FTC Act.
- To protect consumers in the financial system, the Consumer Financial Protection Bureau (CFPB) confirmed that federal anti-discrimination law requires that creditors provide consumers with specific and accurate explanations when credit applications are denied or other adverse actions are taken, even if the creditor is relying on a black-box credit model using complex algorithms. CFPB is also cracking down on algorithmic discrimination in the financial sector and hiring technologists to fully staff this oversight work.
Protecting students and supporting educators:
- To guide schools in the use of AI, the Department of Education will release recommendations on the use of AI for teaching and learning by early 2023. These recommendations will: give educators, parents and caregivers, students, and communities tools to leverage AI to advance universal design for learning; define specifications for the safety, fairness, and efficacy of AI models used within education; and introduce guidelines and guardrails that build on existing education data privacy regulations as well as introduce new policies to support schools in protecting students when using AI.
Protecting patients and assisting health care providers:
- To protect patients from discrimination in health care, the Department of Health and Human Services has issued a proposed rule that includes a provision that would prohibit discrimination by algorithms used in clinical decision-making by covered health programs and activities, and will release an evidence-based examination of health care algorithms and racial and ethnic disparities for public comment by late 2022.
- To advance health equity, by the end of 2022, the Department of Health and Human Services will release a vision for advancing Health Equity by Design that includes methods to reduce algorithmic discrimination in healthcare algorithms.
- To root out bias in health care provision, the Department of Health and Human Services requested information through multiple rulemaking processes on how Medicare policy can encourage software developers to prevent and mitigate bias in algorithms and predictive modeling.
- To protect veterans and support their health care, the Department of Veterans Affairs (VA) has instituted a Principle-Based Ethics Framework for Access to and Use of Veteran Data and launched the AI@VA Community and Network to pilot programs that will provide veterans with information about any AI system used in their healthcare and ensure AI risks are managed during human subjects research.
Ensuring fair access to housing:
- To protect renters, the Department of Housing and Urban Development will release guidance addressing the use of tenant screening algorithms in ways that may violate the Fair Housing Act.
- To protect home buyers and owners, Federal agencies that regulate mortgage financing will include a nondiscrimination quality control standard as part of a forthcoming proposed rule establishing quality control standards on automated valuation models so that these models do not rely upon biased data that could replicate past discrimination in housing.
Leading by example and advancing democratic values:
- To sustain American global leadership, the United States Agency for International Development (USAID) launched an AI Action Plan that commits USAID to embedding risk mitigation in AI programming and shaping a global Responsible AI agenda; USAID is also supporting the development, governance, and use of responsible, rights-respecting technology worldwide through the Advancing Digital Democracy initiative.
- To guide federal procurement, the Administration will work across agencies to develop new policies and guidance for using and buying AI products and services that are based on effective and promising practices to prevent and address bias and algorithmic discrimination resulting from the use of AI and other advanced technologies.
- To advance transparency and trust in the federal government, the Office of Management and Budget, the White House Office of Science and Technology Policy, and the Federal Chief Information Officers Council have coordinated across the government to publish inventories of non-classified and non-sensitive government AI use cases. Agencies are currently implementing their developed and approved plans to ensure their AI systems are consistent with laws and policies addressing civil rights, civil liberties, and privacy.
- To lead by example in following AI ethics principles, the Department of Energy (DOE) will be releasing Principles and Guidelines for Advancing Responsible and Trustworthy AI in Fall 2022 and DOE’s AI Risk Management Playbook suggests mitigations to proactively manage AI risks such as algorithmic discrimination. The Department of Defense operates under its AI Ethical Principles, and the associated Responsible AI Strategy & Implementation Pathway that assists the U.S. defense enterprise in upholding these principles. Similarly, the U.S. Intelligence Community operates under its Principles of AI Ethics and associated AI Ethics Framework.
Guiding and supporting technologists and entrepreneurs:
- To empower technologists to promote trustworthy AI, the Department of Commerce’s National Institute of Standards and Technology (NIST) is developing a risk management framework to help technologists incorporate considerations of fairness, safety, and privacy into the design, development, use, and evaluation of AI products, services, and systems. An online Playbook companion to the AI risk management framework will also provide users with recommended actions to operationalize these considerations.
- To shape the long-term future of trustworthy AI, with over $700 million in investments annually, the National Science Foundation continues to support AI research, including research into the fairness, security, safety, and trustworthiness of AI systems.
U.S. law and policy already provide a range of protections that can be applied to these technologies and the harms they enable. Where law or policy does not already provide guidance, the Blueprint should be used to inform policy-making to fill those gaps.